Since the 1940s, the U.S. government has assisted private voluntary organizations' (PVO) overseas activities. After World War II, as PVOs responded to emergency needs in Europe, the U.S. government began donating excess property and supplies and financing shipping costs to assist PVOs' efforts. In 1954, the Congress authorized donations of commodities. Public Law (P.L.) 480, as amended, authorized commodity donations to voluntary agencies for distribution overseas to meet emergency and nonemergency food needs. Although PVOs remain heavily involved in providing emergency assistance overseas, since the mid-1960s they have gradually shifted their emphasis from charitable relief to development activities.

The PVO community comprises diverse organizations, from traditional voluntary relief and development agencies to family planning organizations, labor institutes, and cooperatives. PVOs range from organizations with budgets of a few thousand dollars and narrow objectives, such as the Pan-American Association of Eye Banks, to large operations with worldwide programs and multimillion-dollar budgets, such as the Cooperative for Assistance and Relief Everywhere, Inc. (CARE) and Catholic Relief Services. Literature on PVOs' development activities describes some of the qualities that PVOs exhibit: familiarity with local populations and ability to work with the poor at the community level, innovation in approaches and flexibility in responding to development needs, lower cost compared to government-to-government aid programs, staff dedicated to the PVOs' mission and willing to work under difficult conditions, long-term commitment to development, and ability to work with indigenous nongovernmental organizations (INGO) to strengthen local development capabilities. However, development literature also suggests that PVOs are generally weak in strategic planning, realistic planning for sustainability, and working with each other on common goals.

Since the United States began providing foreign aid, its approach to development has changed several times. During the 1960s, the U.S. Agency for International Development (USAID) undertook large infrastructure projects such as dams and road construction. Then, in the early 1970s, USAID gave priority to addressing the basic human needs of the populations of developing countries. In the 1980s, USAID took a more macroeconomic approach to development, emphasizing economic growth through policy reform and a stronger private sector. None of these approaches proved to be a panacea for development problems. USAID's current approach involves both macroeconomic reforms (legal, policy, and regulatory) and direct assistance to the poor in developing countries to help them take advantage of economic and development opportunities. Thus, USAID has increasingly relied on PVOs to provide direct assistance while it focuses on macrolevel reforms through policy dialogue. In early 1995, USAID announced plans to increase the proportion of resources that it channels through nongovernmental organizations, including PVOs. Other recent proposals have advocated providing development assistance through a foundation that would distribute funds to PVOs and other nongovernmental organizations. Although its record of success has been mixed, USAID has access to developing countries' governments and the technical expertise to assist them in such areas as policy analysis, sectoral reform, privatization, national programming, and structural adjustment.
On the other hand, PVOs have demonstrated a comparative advantage in providing direct assistance to meet varied development needs, often in areas underserved by governments.

In 1993, the U.S. government provided about $1.7 billion of aid through PVOs, including $414 million in food commodities and freight. PVOs received $813 million from USAID in grants and contracts, and other U.S. government agencies provided another $439 million to PVO programs. For example, the Department of State contributes to PVOs for refugee assistance, and the Department of Agriculture contributes surplus commodities for humanitarian assistance.

PVOs and INGOs must register with USAID to receive grants for development assistance activities directly from USAID. As of October 1994, 419 PVOs were registered with USAID. To be registered, a PVO or INGO must, among other requirements, (1) be a nonprofit and nongovernmental entity; (2) be private and voluntary in that it receives voluntary contributions of money, staff time, or in-kind support from the public; and (3) be engaged in, or anticipate becoming engaged in, voluntary charitable or development assistance operations overseas of a nonreligious nature that are consistent with the purposes and objectives set forth in the Foreign Assistance Act and P.L. 480.

USAID both supports PVOs' independent activities and uses PVOs as intermediaries to carry out projects that USAID initiates in keeping with its own priorities. The Office of Private and Voluntary Cooperation, in the Bureau for Humanitarian Response, is the focal point for USAID's work with PVOs, although other offices within USAID—including the regional bureaus; the Bureau for Global Programs, Field Support and Research; the Office of Foreign Disaster Assistance; and the Office of Food for Peace—also work directly with PVOs. In countries where USAID maintains missions, PVOs can apply to the missions for funding for specific development projects in the host country. In addition to programs that are specifically restricted to registered PVOs, PVOs may also compete for other grants and contracts awarded by missions and USAID/Washington, D.C., bureaus.

The objectives of our review were to examine (1) PVOs' role in delivering USAID-funded foreign assistance; (2) potential issues and implications of increasing their role in delivering assistance, including accountability issues; (3) the success of their projects in achieving their objectives; and (4) the extent to which these organizations are dependent on U.S. government funding. We employed a combination of methods to address these issues: (1) an extensive review of development literature to document the role PVOs play in the development spectrum (see selected bibliography), (2) discussions with U.S. and foreign government officials and PVO representatives, (3) case studies of selected projects in eight countries, (4) a collection of descriptive data on PVOs and their projects within each case study country, and (5) an analysis of financial data on PVO resources.

For the case studies, we selected eight countries: Ecuador, Ghana, Honduras, Indonesia, Nepal, Niger, Romania, and Thailand. We selected these countries on the basis of (1) geographic balance, (2) the size and diversity of PVO programs, and (3) whether PVOs used food aid in the country. We used a structured data collection instrument to collect basic descriptive data on PVO and INGO activities between 1991 and 1994.
To review the success of PVOs in meeting their objectives and enhancing sustainable development, we conducted 26 case studies, including at least 2 projects in each country carried out by different PVOs in different development sectors. We used project design, implementation, and evaluation documentation; on-site observations of projects; and extensive interviews with USAID, PVO, and host government officials to assess each project as more or less successful in meeting its objectives, including developing local capacity. To determine the degree to which projects met their objectives, we considered factors such as whether (1) projects were meeting agreed-upon measurable benchmarks or indicators within agreed costs and time frames and (2) outcomes achieved project goals. In many cases, indicators were not quantifiable, so we based our judgment on on-site observations of projects and interviews with USAID and PVO officials about intended project outcomes. We supplemented the fieldwork undertaken specifically to answer this request with information generated in the course of our other work in the last 3 years, including reports on P.L. 480 titles II and III and PVOs' role in food aid.

To assess the degree to which PVOs depend on federal funding, we examined data on private and federal funding published in Voluntary Foreign Aid Programs, an annual publication of USAID's Bureau for Humanitarian Response. We analyzed the data from 1982 to 1992, the last year for which complete information was available, after converting dollar amounts into constant 1992 dollars. We did not independently verify the published information, although we worked with USAID to resolve apparent errors in the data. We performed our work from November 1993 through April 1995 in accordance with generally accepted government auditing standards.

PVOs, as a group, work in many different sectors—from health services to pollution control to microenterprise development. They often work in remote areas where governments cannot or do not provide services. Some PVOs use U.S. volunteers to deliver technical services or assistance to developing countries. PVOs sponsor projects in many different sectors, including agriculture, education, environment, health and child survival, and small-enterprise development, designed to address the many needs of people in developing countries. Almost 30 percent of the 274 USAID-funded PVO and INGO projects operating in the eight countries in our review included health activities. Natural resources management, private sector development, and democracy were the next most frequently addressed issues—about 15 percent of projects addressed each of these issues. Other projects focused on labor, agriculture, and education, among other sectors. In several cases, PVO projects provided services in areas not served by the host government.

The 26 projects we examined in detail represent the diverse needs PVOs try to address. For example, one of the USAID-supported PVO projects addressed the health and nutritional needs of children in Ghana. In Romania, several projects focused on the needs of institutionalized and orphaned children, while another PVO worked with state-owned enterprises to abate pollution.
Projects in Nepal, Honduras, and Thailand sought to increase economic opportunities for women, who traditionally have few avenues for economic advancement—two by providing credit and technical assistance to microenterprises owned by or employing women, and one by providing scholarships to girls so they could continue their schooling. (See fig. 2.1 for a project supported by CARE in Thailand.) In Ghana, we examined a PVO agroforestry project. In Honduras and Indonesia, our sample included PVO projects to help communities build water and sewer systems.

PVO food aid projects we visited in Ghana, Honduras, and Indonesia either directly distributed food to beneficiaries or sold commodities to generate funds for development projects. Direct feeding projects included mother-child health projects that targeted malnourished children and pregnant or lactating women, and school feeding projects in poor regions. Food-for-work projects are generally assumed to be self-targeting to the poorest because the work is difficult and the wages are low. (See figs. 2.2 and 2.3 for food-for-work projects in Honduras and Ghana.)

PVOs often conducted projects in remote areas not adequately served by the governments of developing countries. For example, in Ecuador, Catholic Relief Services and Project HOPE conducted child survival projects that provided immunizations and education on hygiene and nutrition in rural areas. (See fig. 2.4 for a child survival project in Ecuador.) In Niger, Africare provided training for community health workers in Diffa, an isolated area more than 900 kilometers from Niamey, the capital of Niger. Save the Children/Honduras and CARE in Indonesia were assisting in the construction of water and sewer systems in remote areas. (See fig. 2.5 for a water system project in Honduras.) In Nepal, PVOs provide most medical services. USAID officials told us that PVOs fill critical voids in health and community development.

About 15 percent of PVOs registered with USAID in 1993 used American volunteers in their overseas programs, according to information contained in USAID's report on voluntary foreign assistance. Some PVOs coordinate volunteer service abroad to provide specialized services or technical assistance not available in developing countries, which, according to these PVOs, would be costly to provide through contractors. For example, health sector PVOs, such as Operation Smile International and Project ORBIS International, coordinate medical volunteers to provide medical care and train health workers. The worldwide Farmer-to-Farmer program included eight PVOs and cooperatives, as well as the Peace Corps, which together coordinated over 1,300 volunteer assignments providing expertise on agricultural production and processing in over 60 developing countries; the program also expected to field about 1,700 volunteers to the newly independent states of the former Soviet Union. The International Executive Service Corps and Volunteers in Overseas Cooperative Assistance recruit volunteers to provide consulting services to private sector businesses in developing countries. According to information supplied by the International Executive Service Corps, it delivered almost 75,000 person-days of assistance in 1994 through its offices in 50 countries, at an average cost of $439 per day.
According to PVO representatives, volunteers are generally well received by the citizens of the developing country because they are viewed as experts who volunteer their time and are not perceived as having the political agendas sometimes associated with bilateral assistance or the profit motive of contractors. (See fig. 2.6 for a volunteer project in Romania.) However, the use of volunteers presents potential problems. For example, volunteers' lack of language skills and cultural sensitivity and their inability to adapt to living conditions in developing countries have limited the success of some volunteer experiences. Project evaluations and USAID and PVO officials noted that clear expectations on the part of both the volunteers and the recipients of their services are critical to the success of a visit. They also stressed the importance of an in-country structure to (1) identify specific needs so that volunteers with appropriate skills can be found and (2) maintain contact with recipients of the assistance to facilitate implementation of volunteers' recommendations.

While a few PVOs have begun to work with governments of developing countries on policy reforms, many believe they have a humanitarian mission and prefer to focus on person-to-person aid rather than work with large institutions. PVOs have a comparative advantage over major donors in being able to work directly with the poor or with organizations that represent the poor. Some PVOs prefer not to interact with host governments and, as outside entities, may not have access or leverage within a country's government. In addition, many PVOs do not want to be seen as linked too closely to the U.S. government. Thus, providing economic assistance exclusively through nongovernmental organizations could limit the degree to which the United States can use such aid to achieve foreign policy interests other than supporting democratic development. Channeling U.S. aid exclusively through PVOs also seems inconsistent with the current view of many U.S. government leaders that there should be a close link between the provision of U.S. assistance and specific U.S. foreign policy interests. Former foreign policy officials testified before the Senate Committee on Foreign Relations in March 1995 that "bilateral foreign assistance programs should be directly related to specific, identifiable U.S. foreign policy interests."

Currently, the Congress looks to USAID to ensure that U.S. assistance is used efficiently and effectively. In recent years, USAID has encouraged PVOs and INGOs to develop stronger financial management skills that would help ensure accountability for resources. Regulations requiring external audits, such as Office of Management and Budget Circulars A-110 and A-133, have also led PVOs to focus on improving financial management systems. USAID and InterAction believe the PVO community generally has taken seriously its responsibility to improve financial and program management. However, some PVOs, and particularly INGOs, still have difficulties in meeting U.S. accountability standards. For example, USAID's Inspector General recently reviewed PVO activities in the West Bank/Gaza and found that while PVOs generally had the capability to implement USAID programs, two of the six reviewed needed to improve program monitoring, two needed to improve financial management, and four did not maintain adequate inventory records of USAID-funded commodities.
Additionally, a recent audit of a PVO project in El Salvador discovered that funds had been misappropriated through false village banks and dummy loans. As of September 1995, $118,000 in USAID funds had not been recovered. The PVO reported that the USAID mission, the PVO, and the INGO have been working closely to address the weaknesses exposed when the problem was discovered. Providing assistance funds directly to PVOs or through a foundation, as suggested in some of the reform proposals, would eliminate a key accountability mechanism from the U.S. foreign assistance program, and the Congress would have to accept more risk and less accountability for funds expended.

We used criteria from development literature as the basis for our detailed assessment of 26 PVO projects: (1) progress toward meeting objectives and (2) building local capacity. While all projects experienced some unanticipated challenges in implementation, 20 of the 26 projects were making progress toward meeting all or most of their objectives. These projects resulted in accomplishments such as construction of water systems, improved provision of health care, and increased incomes for participants. Two projects were having major difficulties in attaining their objectives due to design or implementation problems. We were unable to assess the progress of four projects because their objectives and the associated PVO or USAID evaluations were too general. We found no correlation between the size, geographic region, or sectoral emphasis of a PVO and its ability to achieve project objectives. In recent years, PVOs have begun working extensively with local groups that carry out projects, offering technical assistance and training designed to build local institutional capacity, rather than doing the projects with their own staffs. Most projects we reviewed included some activities designed to improve local capacity. (App. I contains the details of our 26 case studies.)

The 20 projects in our case studies that were making progress toward their objectives reflected a combination of the factors identified in development literature as necessary for successful projects: good design and clear objectives, experience in the country and the development sector, qualified management and staff, and local participation. The following examples illustrate some of these factors:

In Nepal, a $328,000 female education scholarship project sponsored by the Asia Foundation used a tested design and had local participation through its INGO partner, which had strong leadership that found creative solutions to problems the project encountered. As a result of the project, girls' school attendance increased in every district where the project was implemented.

USAID provided Katalysis $1.75 million to strengthen local INGOs in three countries, including Honduras. In Honduras, the INGO partner conducted projects aimed at increasing participants' incomes. Katalysis provided technical assistance to the INGO in a wide range of areas such as long-range planning, information management systems, and fund raising. The PVO had a good project design, which included local input and clear objectives, and had capable staff. The INGO ultimately designed and carried out a project that increased beneficiaries' incomes.

In Ecuador, USAID provided $1.5 million to Project HOPE to develop a community health model with the goal of reducing sickness and death in children and women of childbearing age.
The PVO had expertise in the sector and prior work experience in the country. The project had good management and design and active community participation. The project was effective in increasing participation and extending health care coverage. (Fig. 3.1 shows a parade and banner advertising diarrhea prevention and treatment.)

In Indonesia, USAID provided about $2.05 million in food aid to CARE to be sold to fund a pilot rural water and sanitation project. The project's objectives were to increase access to and use of water and sanitation facilities among villages in rural Indonesia and to demonstrate that rural communities could develop and self-finance improved facilities. The PVO used proven technical approaches, and the design included measurable objectives. Rather than working with an INGO, CARE employed local staff to work directly with the communities to plan and carry out the construction of water systems, including designing and building the appropriate system. (Fig. 3.2 shows the resulting water reservoir, which is filled by gravity from a spring 400 meters away.) The communities agreed to take responsibility for sustaining the improvements. An outside evaluation of the program concluded that CARE's approach was successful in creating sustainable water and sanitation systems. Beneficiaries of the project in one village told us that the incidence of cholera had decreased since the system was built and that villagers could spend the 2 hours a day they had previously spent hauling water on more productive activities.

The projects that were having the most difficulties suffered from poor design, inadequate project management, and lack of participation by the local community. The following describes some of the problems evident in the projects we examined:

In Romania, USAID contributed $200,000 to a $1.02 million World Vision health care project to improve the delivery of primary health care services. The project was delayed almost a year due to internal management problems and difficulties in recruiting suitable staff. Further, the PVO met with difficulties in working with Ministry of Health officials because of changes in leadership there. A midterm evaluation concluded that the achievements of the project at that date were mixed and could not always be clearly linked to project goals or to activities carried out. The final evaluation of the program, conducted after our fieldwork, noted that the conditions we observed had changed and that the project achieved its objectives. The evaluation cited accomplishments in improving health knowledge, attitudes, and behaviors.

In Niger, USAID provided Africare $1.8 million for a project to train community health workers in child survival techniques such as oral rehydration, growth monitoring, and nutrition. The project was delayed over 6 months due to difficulties in recruiting project personnel. The project design was flawed in that it was not integrated into the Ministry of Health's program, so no local-level officials took responsibility. Further, although Ministry of Health nurses were trained, the nurses refused to train village health workers unless they received additional pay. When USAID and the PVO were unwilling to provide additional pay, project activities slowed. Supervision of project personnel and monitoring of field activities were inadequate, and Peace Corps volunteers working with the project complained that the PVO did not provide them adequate guidance.
There was little community participation in the village health program the project set up. Africare stated that the problems identified in our draft report had been addressed and that the project is now an integral part of Ministry of Health activities.

PVO projects are not immune to some of the traditional problems in development, including difficulties identifying and retaining qualified staff and lack of support from local and national governments, as the following examples show:

In Ecuador, Catholic Relief Services had difficulties implementing its infant growth monitoring activities because the beneficiaries could not read and were unable to keep accurate records.

In Romania, USAID provided Project Concern International $1 million to (1) train Romanians in obstetric and neonatal health care and (2) establish a model facility for institutionalized adolescents who can be assisted to function independently. The project successfully renovated a facility (see fig. 3.3) and trained staff for a transitional living center to teach handicapped adolescents independent living and job skills. However, the PVO encountered resistance from Romanian institutions that were reluctant to release adolescents into the private center. At the time of our visit, only six children lived at the center, which was designed and staffed to accommodate 40 residents. Project Concern was working with the Romanian government and institutional officials to resolve such problems.

One concern about development projects is their sustainability, which is often affected by the level of local participation in planning and carrying out project activities. USAID has encouraged PVOs to implement projects in close cooperation with local counterpart organizations, including national and local governments and INGOs, to strengthen in-country development capacity. According to development literature, projects that respond to the development priorities of the intended beneficiaries have the best prospects for sustainability. Since strengthening local capacity is fundamental to a country's long-term social and economic development, we examined the extent to which local persons and groups were involved in planning and carrying out project activities.

Of the 241 projects in our inventory for which the information was available, 146 (61 percent) involved one or more INGOs. INGOs were project implementors in at least one-third of the projects. For example, Private Agencies Collaborating Together provided technical assistance to local organizations that worked directly with street children in Thailand. In Indonesia, the National Cooperative Business Association supported local cooperatives in export-oriented businesses in furniture and spices (see fig. 3.4). Efforts to involve INGOs in planning and carrying out projects were apparent in most of the 26 projects we reviewed in detail. Twenty-one projects involved at least one local governmental or nongovernmental organization in carrying out activities. Five projects focused specifically on strengthening INGOs, primarily by providing technical assistance and training to local organizations. Three projects focused on strengthening some aspect of the developing countries' government service delivery mechanisms. For example, in Ecuador, Project HOPE worked with the Ministry of Health to train community health workers, and in Romania, World Vision worked with the Ministry of Health to improve primary health strategies and service delivery.
In Honduras, CARE worked with the Ministry of Education on a school feeding program that provided daily meals to nearly 298,000 poor children at 3,743 schools. Other projects worked directly with community groups, in some cases organizing residents for a particular purpose. Beneficiaries of assistance, including community groups, were more likely to be involved in implementing projects and adapting existing designs to local conditions than in the design process itself.

One project we examined in Ghana demonstrates the need for local involvement in planning and designing projects. In this case, USAID provided the Adventist Development and Relief Agency about $459,000 in fiscal year 1993 in food commodities and cash grants to support a project to establish self-financing nurseries to grow and sell seedlings that villagers would plant for later harvest and sale. However, the project did not have local participation in design and did not take into account key environmental and economic factors, including the lack of demand for seedlings. The project, according to an independent evaluation, was "conceptualized, was designed, and is managed by outsiders (both expatriate and Ghanaian) to funnel into villages a commodity (wood trees) that was and is low on the scale of locally perceived priorities." While the project set up the nurseries and trained local staff paid with donated food, the lack of demand for seedlings made it unlikely that the nurseries could be self-sustaining. Further, the Peace Corps workers who had initially set up and managed the nurseries were supposed to turn management responsibilities over to the beneficiaries. However, no time period was set for phasing over responsibilities, and, according to an outside evaluator, there was no clearly defined withdrawal scenario in project documents. According to project evaluations, no nurseries had been turned over to local management 3 years after the project started. USAID and the PVO have informed us that the problems identified during our fieldwork have been addressed and that the project is showing positive results. The PVO hopes to begin turning management of the project over to local workers in 1996.

During our fieldwork, USAID officials in Washington and the field noted that some PVOs have been more successful than others in developing INGOs and turning over direct service activities to the local organizations. According to USAID officials, PVOs that have developed expertise in and networks for charitable service delivery in particular countries have tended to move less quickly toward working with INGOs than PVOs that see their role as enabling INGOs to serve their local communities.

Despite their status as private, nongovernmental organizations, many PVOs receive significant amounts of federal funding. However, we found that PVOs generally are less dependent on government funding than they were a decade or more ago, although some individual PVO in-country projects are funded entirely by USAID. While federal spending on PVOs has increased in absolute terms since 1982, the percentage of total PVO resources coming from the federal government (for PVOs that receive federal funds) has decreased 13 percentage points, from 42 percent in 1982 to 29 percent in 1992. This is because private donations have increased at a much faster rate than federal funding. PVOs must be registered with USAID to receive direct funding for purposes other than disaster assistance.
In 1992, 231 registered PVOs received federal funding—an 83-percent increase from the 1982 total of 126. To qualify for development assistance funding, PVOs must show a minimum level of private funding (20 percent). This "privateness" calculation is based on PVOs' total resources, not their contributions to the costs of specific projects.

Our analyses of data for PVOs that receive federal funding show that reliance on government funding declined for many federally supported PVOs between 1982 and 1992. Total private funding for PVOs receiving federal funds grew from $1.3 billion in 1982 to $3.4 billion in 1992 (in constant 1992 dollars), a 160-percent increase. In contrast, federal funding for PVOs fluctuated over this period—dropping to a low of $0.9 billion in 1984 and peaking at $1.5 billion in 1992, a 41-percent increase from the 1982 level of $1.07 billion (see fig. 4.1). The median level of private funding for PVOs that received federal funding more than doubled, growing from $1.3 million in 1982 to $2.7 million in 1992, after peaking at $3.4 million in 1989. Appendix II shows the distribution of PVOs by levels of federal funding from 1982 to 1992, and appendix III shows PVOs' federal funding as a share of total funding in 1982 and 1992.

While federally supported PVOs received a median of 36 percent of their total support from federal sources in 1982, in 1992 they received 23 percent. The median amount of federal funding, in constant 1992 dollars, for PVOs that received any federal funding decreased 31 percent, from $929,487 to $639,136, after peaking at $1.5 million in 1986 (see fig. 4.2). This decline was partly due to the increase in the number of PVOs that received federal funding and the relatively smaller increase in federal funding for PVOs.

A smaller percentage of PVOs depended on government funding for a substantial portion of their resources in 1992 than in 1982. In 1982, 44 percent of the PVOs that received federal funding received at least half of their total funding from government sources; in 1992, only 24 percent did. Similarly, the proportion of PVOs that received 80 percent or more of their funding from the government declined from 22 percent to 10 percent (see fig. 4.3). However, some PVOs still received a large percentage of their resources from the U.S. government. For example, Catholic Relief Services and CARE have consistently received the largest amounts of federal support among PVOs, much of it in the form of food aid. Catholic Relief Services received 69 percent of its total revenues from the U.S. government in 1982 and 76 percent in 1992. Catholic Relief Services pointed out that if food aid is deducted from the 1992 figures, the percentage of U.S. government resources would be reduced from 76 percent to 38 percent. CARE also received significant U.S. support—60 percent of its 1992 revenues came from the U.S. government, although this is a decrease from 78 percent in 1982.

Total resources for PVOs that received federal funding grew from a median of $3.6 million in 1982 to $5.2 million in 1992 (in constant 1992 dollars), peaking in 1986 at $7.3 million. In 1992, five PVOs had resources totaling over $200 million, and all of them received federal funding. Three of these PVOs were also the largest PVOs in 1982. The share of total federal funding going to the top 5 percent of federally funded PVOs decreased from about 71 percent in 1982 (when 6 PVOs received $762.4 million) to about 59 percent in 1992 (when 11 PVOs received $893.6 million).
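To make the growth arithmetic behind the preceding comparisons explicit, the following is a minimal sketch using the report's rounded constant-1992-dollar totals; the variable names and helper function are ours, and small differences from the published percentages reflect rounding in the inputs:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Rounded constant-1992-dollar totals from the report, in billions of dollars
private_1982, private_1992 = 1.3, 3.4    # private funding, PVOs receiving federal funds
federal_1982, federal_1992 = 1.07, 1.5   # federal funding for the same group

print(f"Private: {pct_change(private_1982, private_1992):.0f}%")  # ~162; reported as 160 percent
print(f"Federal: {pct_change(federal_1982, federal_1992):.0f}%")  # ~40; reported as 41 percent
```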
The 5 percent of PVOs that received the smallest amounts of federal funding received less than $12,800 each in 1982 and less than $10,850 each in 1992, or 0.005 and 0.006 percent of federal funding in the respective years. In addition, 153 registered PVOs did not receive any federal funding in 1992, compared to 18 in 1982.

The preceding data on PVOs' total financial resources suggest decreasing financial dependence on the U.S. government, but it is also necessary to examine how PVOs work with USAID on specific projects to understand the issue of dependency. Although virtually all PVOs have some private resources, PVOs must make choices about how much of their private funding to devote to USAID projects and how much to spend on self-determined, self-supported activities.

Until July 1994, USAID generally required PVOs to contribute at least 25 percent toward the costs of PVO projects supported through USAID grants. This cost-sharing requirement was meant to ensure that PVOs were committed to their USAID-funded projects and to enhance the likelihood that project activities and benefits would be sustained after USAID funding ends. The requirement was also seen as a means of mobilizing additional funding for projects and a mechanism to prevent PVOs' financial and programmatic dependence on USAID. However, PVO officials told us that cost sharing at the 25-percent level was often difficult on large dollar-value projects, especially for smaller PVOs. For example, a $2 million USAID project might require a $500,000 contribution from the grantee. In addition, because PVOs did not always want to use private resources to meet USAID's priorities, USAID's choice of PVO partners was sometimes limited.

Because of these problems, USAID changed its policy to encourage, but not require, cost sharing for these grants. USAID's new policy allows more flexibility in determining the cost-sharing level: it encourages the "largest reasonable and possible" level of cost sharing without specifying any minimum. This policy change makes USAID's treatment of PVOs more consistent with its treatment of other grantees, such as universities and other nonprofit organizations, which are not required to make any minimum level of financial contribution to USAID-funded projects. USAID stated that the purpose of the revision was to standardize and streamline policy and process, not to eliminate USAID's preference for PVOs' 25-percent contributions to USAID activities. USAID stated that it does not expect overall PVO contributions to USAID activities to lessen as a result of this policy.
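As a worked version of the former cost-sharing rule, here is a minimal sketch that assumes, consistent with the $2 million example above, that the 25-percent share is computed on the USAID grant amount; the function name is ours, for illustration only:

```python
def cost_share(usaid_grant: float, rate: float = 0.25) -> float:
    """Grantee contribution implied by a flat cost-sharing rate on the grant amount."""
    return usaid_grant * rate

# The report's example: a $2 million USAID grant implies a $500,000 contribution
print(f"${cost_share(2_000_000):,.0f}")  # $500,000
```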
Pursuant to a congressional request, GAO reviewed private voluntary organizations' (PVO) role in delivering federally funded foreign assistance, focusing on: (1) the implications of increasing PVOs' role in delivering assistance; (2) the success of PVO projects in achieving their objectives; and (3) the extent to which PVOs are dependent on U.S. government funding.

GAO found that: (1) the PVO community encompasses organizations of varying sizes, missions, geographic focuses, and capabilities, and they work to address varied development needs; (2) PVOs serve as a complement to traditional government-to-government assistance and can be a mechanism to strengthen indigenous community-level organizations; (3) while PVOs have demonstrated that they are generally effective in carrying out community-based development projects, most have not had wide experience in working with governments and institutions on the sectoral and macroeconomic policy reforms necessary to create an environment favorable to development; (4) 20 of the 26 PVO projects GAO reviewed were making progress toward their objectives, and good project design, competent in-country staff, and local participation were factors common to the most successful projects; (5) PVOs are increasingly using local groups to carry out projects, which should increase the local capacity for development; (6) most projects GAO reviewed included local capacity building, which is critical to long-term development and sustainability; (7) accountability for Agency for International Development (AID) assistance funds has been a continuing concern, and over the last decade, AID has encouraged and assisted PVOs to improve their program and financial management systems; (8) providing increased amounts of foreign aid directly through PVOs or through a foundation, as suggested in some reform proposals, would remove a key accountability mechanism from U.S. foreign assistance programs; (9) although some individual PVO projects may be funded entirely by AID, PVOs, as a group, have become less dependent on U.S. funding; (10) federal funding as a share of total funding for PVOs receiving federal support dropped from 42 percent to 29 percent between 1982 and 1992; and (11) U.S. funding for PVOs has increased, but private resources have increased faster.
Our work over the past several years has demonstrated that improper payments are a long-standing, widespread, and significant problem in the federal government. In December 2007, we reported on DOD's fiscal year 2006 improper payment estimates for its travel program. We found that (1) the improper payment estimate was understated by at least $4 million, (2) several weaknesses in DOD's sampling methodology meant that the estimates of travel improper payments at the component level were not statistically valid, and (3) limited guidance and oversight by the Office of the Comptroller contributed to the unreliable assessment of improper payments for the travel program.

The DOD Office of Inspector General (OIG) also has issued reports for the past few years highlighting weaknesses in the department's efforts to report improper payment information. The DOD OIG reported that the department had not implemented guidance to address the use of valid statistical sampling in determining programs and activities susceptible to significant improper payments. In January 2008, it reported that the Defense Finance and Accounting Service (DFAS) had not conducted adequate research to determine if contractor refunds were improper and, in some cases, had not reported improper payments associated with these refunds. The DOD OIG continues to report that the department has not fully complied with the requirements of the Improper Payments Information Act (IPIA) and the Office of Management and Budget's (OMB) implementing guidance and does not have adequate controls to fully implement a recovery audit program.

Guidance for reporting under IPIA and the Recovery Auditing Act is provided in Appendix C of OMB Circular No. A-123. IPIA requires agencies to perform four key steps in meeting the improper payment reporting requirements, as shown in figure 1. OMB's implementing guidance instructs agencies to carry out the four key steps under IPIA, with one exception: for the first step—perform a risk assessment—OMB guidance allows agency programs deemed not risk-susceptible to conduct a risk assessment generally every 3 years. Further, agencies need not conduct formal risk assessments for those programs in which improper payment baselines are already established, are in the process of being measured, or will be measured by an established date. However, OMB guidance does state that if a program experiences a significant change in legislation, a significant increase in funding level, or both, agencies are required to reassess the program's risk susceptibility during the next annual cycle, even if less than 3 years have passed since the last assessment. As we have previously testified before your Subcommittee, this is inconsistent with the express terms of IPIA, which require that agencies annually review all of their programs and activities.

OMB then requires that agencies estimate the gross total of both over- and underpayments for those programs and activities identified as susceptible. These estimates shall be based on a statistically random sample of sufficient size to yield an estimate with a 90 percent confidence interval of plus or minus 2.5 percentage points. If an agency cannot determine whether a payment was proper because of insufficient documentation, Appendix C to OMB Circular No. A-123 requires that the payment be considered improper. The guidance further requires that agencies develop corrective action plans that include a discussion of the causes of the improper payments identified, the corrective actions taken for each different type or cause of error, and the results of actions taken to address those causes.
In addition, OMB Circular No. A-136, Financial Reporting Requirements, requires agencies to report, in table format, improper payment estimates and related outlay amounts for the prior year, the current year, and the following 3 years. As part of this reporting, OMB encourages agencies to report underpayment and overpayment amounts, if available.

The Recovery Auditing Act requires each executive branch agency that annually enters into contracts with a total value of $500 million or more to use recovery audits and recovery activities as part of a cost-effective recovery auditing program. The law authorizes federal agencies to retain recovered funds to cover actual administrative expenses as well as to pay other contractors, such as collection agencies. OMB guidance requires, among other things, that agencies include in their annual reporting a general description and evaluation of the steps taken to carry out a recovery auditing program, the total amount of contracts subject to review, the actual amount of contracts reviewed, the amounts identified for recovery, and the amounts actually recovered in the current year. Further, OMB Circular No. A-136 requires agencies to report cumulative amounts identified for recovery and amounts actually recovered as part of their current year reporting.

The responsibility for assessing and reporting DOD's improper payments information rests with the Office of the Under Secretary of Defense (Comptroller) (Office of the Comptroller). The Accounting and Finance Policy Directorate within the Office of the Comptroller is responsible for carrying out the day-to-day activities involved in meeting IPIA requirements. To collect improper payment information, including risk assessments, improper payment estimates, and corrective actions, the Accounting and Finance Policy Directorate sends an improper payment survey (IPIA survey) to all DOD agencies and military services requesting improper payment information for the current fiscal year (see app. II for a list of the 33 agencies and military services). The agencies and services are required to submit improper payment estimates to the Accounting and Finance Policy Directorate for all DOD payment activities identified under IPIA. The Accounting and Finance Policy Directorate then aggregates and reports the improper payment information in DOD's annual agency financial report (AFR). Since the implementation of IPIA, DOD has reported improper payment estimates for the payment activities shown in table 1 for fiscal years 2004-2008.

As with improper payment reporting, the Office of the Comptroller is responsible for identifying and annually reporting recovery audit information in DOD's AFR, while its Accounting and Finance Policy Directorate is responsible for carrying out the day-to-day activities. DOD's recovery auditing process over contract and vendor payments (commercial payments) encompasses several organizations, including DFAS offices and external contractors, which are discussed later in this report. These organizations are required to compile and submit the universe of commercial payments, commercial overpayments identified for recovery, and commercial payments actually recovered to the Accounting and Finance Policy Directorate, which in turn aggregates and reports the recovery audit information in DOD's annual AFR. DOD's reported recovery audit information for fiscal years 2004-2008 is shown in table 2.
DOD's processes to conduct risk assessments, estimate improper payments, and develop corrective actions to reduce improper payments for its fiscal year 2007 IPIA reporting had significant weaknesses. DOD also lacked detailed guidance for, and adequate monitoring and oversight of, its improper payment activities, raising doubts about the accuracy of the information reported.

DOD's risk assessment process was inadequate to ensure that appropriate consideration was given to the risks associated with its payment activities, denying management appropriate visibility into its vulnerabilities. DOD lacked detailed guidance on how to conduct a risk assessment, including identifying the universe of activities, determining if risks exist, identifying what those risks are, and evaluating the results, as required by our internal control standards. Recognizing that the internal guidance and documentation needed to be improved, in December 2008 DOD issued a new Financial Management Regulation (FMR) chapter—Volume 4, Chapter 14, Improper Payments—to expand existing guidance to address IPIA requirements by clarifying the agencies' and military services' responsibilities for reporting improper payment information, broken down by payment activity. Although we did not determine the adequacy of these changes, as the scope of our audit was fiscal year 2007, we noted that DOD did not require its agencies and military services to document their risk methodologies, including the risk factors considered, the potential or actual impact on their program operations, and the rationale for assessing risk as low, medium, or high.

While nine DOD components conducted risk assessments for their six payment activities, totaling about $493 billion in fiscal year 2007, we found an additional $322 billion in outlays reported in DOD's Statement of Budgetary Resources (SBR) that had not been assessed, although IPIA requires that agencies annually review all programs and activities (see fig. 2). According to Office of the Comptroller officials, the six payment activities assessed covered all DOD outlays for fiscal year 2007, and the $322 billion difference in outlays represented IPIA reporting differences related to payroll payments for three of its six payment activities (net outlays reported for IPIA purposes versus gross outlays reported for SBR purposes), intragovernmental payments, and payments resulting from classified activities. While DOD officials stated that the department reconciled the $322 billion difference to the SBR (with the exception of classified activities), these officials did not provide us with this reconciliation to enable us to independently substantiate the difference. Further, these officials could not reconcile the $493 billion in outlays for the six payment activities to an alternative source, such as the SBR. Based on this comparison, DOD had not reviewed all of its programs and activities. Office of the Comptroller officials told us that DOD agencies and the military services were required to reconcile their payment activities with their budget data for fiscal year 2008 to ensure that all payment activities had been accounted for at the component level.

In addition, DOD did not have sufficient documentation to support the level of assessed risk for the six payment activities it did evaluate, as required by OMB guidance and our internal control standards.
For example, none of the nine components that conducted risk assessments described their methodology or rationale for the level of risk assigned to each applicable payment activity. For the six risk assessments conducted, DOD had determined that the risk of significant improper payments was low, based on OMB criteria. However, given the lack of supporting documentation and evidence for the risk assessments and DOD's history of long-standing weaknesses, including GAO's designation of eight individual DOD areas as high risk, the low risk levels are not based on sufficient analysis and are likely unrealistic and not reflective of the wide range of vulnerabilities that exist within DOD.

Office of the Comptroller officials told us that the department calculates improper payment estimates for the majority of the payment activities under IPIA, regardless of the risk level assessed in determining susceptibility to significant improper payments, because of the large volume and high dollar amounts of the transactions. Thus, DOD did not rely on the results of the risk assessments to determine whether to address the remaining IPIA requirements. However, implementation of IPIA requires agencies to make decisions about how to proceed based on the completion of risk assessments, which is the first step. Therefore, DOD's failure to conduct adequate risk assessments could negatively affect its ability to gain the information it needs to make decisions as it proceeds through the remaining steps to ensure proper implementation of IPIA requirements. As we previously reported, the information developed during a risk assessment forms the foundation upon which management can determine the nature and type of corrective actions needed. It also gives management baseline information for measuring progress in reducing improper payments. Until the department recognizes the importance of performing comprehensive risk assessments, the reported information will not provide meaningful results or adequately depict DOD's risk of improper payments, and thus will not provide the level of transparency envisioned by IPIA.

DOD had neither established a methodology to estimate, nor actually estimated, the amount of improper payments for commercial pay—its largest payment activity, with total outlays of $340.3 billion (see fig. 3). While DOD, in general, developed statistically valid sampling methodologies and estimated improper payment amounts for its remaining five payment activities, these five activities collectively accounted for only about one-third of DOD's reported payment population subject to IPIA. See appendix III for a description of the sampling plans for DOD's five payment activities.

OMB guidance requires that, for any programs and activities identified as susceptible to significant improper payments, agencies develop a statistically valid methodology to estimate the annual amount of improper payments, including a gross total of both under- and overpayments. Although DOD assessed all six payment activities to be at low risk for improper payments, it chose to develop improper payment estimates for five of the six payment activities based on the large volume or high dollar amounts of the transactions. However, DOD did not estimate improper payments for commercial pay despite the similarly large volume and high dollar amounts of those transactions.
According to DOD officials, the department decided not to establish a statistically valid methodology or calculate an estimate for commercial improper payments under IPIA because (1) its past attempts to estimate commercial improper payments had resulted in improper payment estimates that were lower than the actual amount of overpayments identified, and (2) doing so would create duplicate reporting of improper commercial payments, as this type of information was captured as part of DOD's efforts to address Recovery Auditing Act requirements, which DOD officials believed resulted in a better measurement because it represented actual overpayments. However, in fiscal year 2006, DOD estimated $550 million in improper payments, which was nearly 30 percent higher than the $426 million of actual under- and overpayment amounts reported to address Recovery Auditing Act requirements.

Regarding DOD's point that reporting commercial improper payments under both IPIA and the Recovery Auditing Act would create duplicate reporting, we disagree. DOD could leverage the results from its existing Recovery Auditing Act processes used to identify actual commercial under- and overpayments to develop its statistical sampling methodology and enhance the reported estimate. This approach is similar to DOD's existing statistical sampling methodologies, which also include actual amounts for calculating improper payment estimates of civilian and military pay. As we previously reported, the scope of review under IPIA differs from that of the Recovery Auditing Act. Specifically, the scope of review under the Recovery Auditing Act targets agency-identified contract overpayments, whereas the scope of review under IPIA targets both under- and overpayments, including agency- and contractor-identified improper payments. Further, while OMB guidance allows agencies to exclude certain classes of contracts from their recovery auditing reporting, no such exclusions exist for IPIA.

Establishing a well-designed statistical sampling methodology to estimate DOD's improper commercial payments would not only facilitate compliance with IPIA requirements but also help address a current data void on the extent of improper payments made to contractors and vendors. For example, based on our review of DOD's fiscal year 2007 data on commercial payment errors, we identified $62 million in commercial improper payments and another $92 million in potential improper payments that were not identified by DOD's current Recovery Auditing Act processes. The Defense Contract Audit Agency (DCAA) and the DOD OIG also identified payment errors not captured by DOD. For example, in August 2007, DCAA reported that a contractor had overbilled—and DOD had overpaid—award fees totaling about $267 million. Because DOD had not established a methodology to estimate improper payments for its commercial payment activity, these and other types of payment errors that meet the definition of improper payments were not reported and thus lacked the level of transparency and accountability called for under IPIA. Further, without an across-the-board, systematic estimate of the extent of improper commercial payments, DOD management could not determine (1) whether improper commercial payments were significant enough to require corrective actions, (2) how much investment in new internal controls would be cost-justified, or (3) the effectiveness of any prior corrective actions.
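For context on the sample sizes that OMB's estimation criterion implies (a statistically random sample yielding a 90 percent confidence interval of plus or minus 2.5 percentage points, as described earlier), the following is a minimal sketch of the standard proportion-estimate calculation; the worst-case assumption p = 0.5 and the function itself are ours for illustration and are not drawn from OMB guidance:

```python
import math
from statistics import NormalDist

def required_sample_size(p: float = 0.5,
                         confidence: float = 0.90,
                         margin: float = 0.025) -> int:
    """Minimum simple random sample size to estimate a proportion p
    within +/- margin at the given two-sided confidence level."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # about 1.645 for 90 percent
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# Worst case (p = 0.5): roughly 1,083 sampled payments per activity
print(required_sample_size())
```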
Although DOD reported the corrective actions taken or planned to reduce improper payments for the five payment activities that met its reporting threshold, the corrective actions for three of the five payment activities—military pay, civilian pay, and travel pay—generally did not address the root causes of the improper payments. For travel pay, we found that, with the exception of one agency component, root causes had not been reported in DOD's AFR, even though a description of the corrective actions taken had been disclosed. OMB guidance requires that, for all programs and activities with estimated improper payments exceeding $10 million, agencies report on the root causes of the improper payments identified, the actions taken to prevent or reduce those root causes, and the results of actions taken.

For example, DOD reported that inaccurate or untimely reporting of entitlement data in such areas as time and attendance, personnel actions, and pay allowances was the primary cause of the improper payments for military and civilian pay. As actions to address these causes, DOD reported that it had developed performance metrics and goals to track the timeliness and accuracy of payments and that senior leadership had participated in quarterly meetings to discuss problem areas and find solutions to mitigate the risk of improper payments. While these actions measured entitlement performance, focused attention on the effectiveness of existing processes, and facilitated the sharing of information, it was unclear how they would address the root causes that led to inaccurate or untimely reporting and whether they would reduce improper payments.

Conversely, for travel pay, we found that, except for the U.S. Army Corps of Engineers, DOD agencies and military services did not report the root causes contributing to improper travel payments, even though corrective actions were disclosed in the AFR. The U.S. Army Corps of Engineers reported that the primary causes of improper travel payments included traveler input errors and inadequate supervisory review of travel vouchers. The Office of the Comptroller told us that the root causes and the corrective actions implemented or underway were not fully disclosed in the AFR due to report formatting constraints that prevented the inclusion of all detailed information. However, when we reviewed the underlying support, we found that this documentation also lacked details as to (1) the corrective actions taken or planned, which generally mirrored the corrective actions reported in DOD's AFR, (2) the root causes of improper travel payments, and (3) the results, if any, of the corrective actions taken. Accurately characterizing and publicly reporting root causes and the associated corrective actions enables agencies and others with oversight and monitoring responsibilities to measure progress over time and determine whether further action is needed to minimize future improper payments. Doing so enhances accountability by helping to ensure that effective corrective actions are taken.

The Office of the Comptroller's monitoring and oversight of DOD's improper payment activities were inadequate because they did not include verifying the accuracy and completeness of the information reported in DOD's AFR, as required by DOD guidance.
Specifically, the Office of the Comptroller issued a memorandum in November 2006 requiring the Project Officer for Improper Payments and Recovery Auditing to, among other things, verify that DOD's reported information was accurate and complete and met or exceeded the minimum OMB reporting requirements. In addition, our internal control standards for monitoring provide that processes should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations and includes regular management and supervisory activities, comparisons, and reconciliations. Our standards further provide that controls should include a wide range of diverse activities, including verification of information, and should be aimed at validating the propriety and integrity of both organizational and individual performance measures and indicators.

During our review and analysis of DOD agencies' and military services' IPIA survey responses, we found that the project officer had not conducted adequate follow-up to ensure that (1) the information provided was accurate, complete, and sufficient to support risk assessment conclusions and (2) the reported corrective actions planned or underway addressed the root causes of the improper payments. For example, DOD agencies and military services did not provide supporting documentation for their risk assessment methodologies and conclusions, including the risk factors considered and how they arrived at the final determination of risk for applicable payment activities. Yet we found no evidence that the Office of the Comptroller conducted appropriate follow-up, as part of its oversight and monitoring responsibilities, to ensure that payment activities had been consistently assessed in a way that provided some level of comparability among DOD agencies and military services.

We previously reported on similar instances of inadequate oversight and review by the Office of the Comptroller over IPIA reporting for DOD's travel payment activity. In that report, we found that the IPIA survey excluded about $5.1 billion from the universe of travel payments for fiscal year 2006 and that only $824 million of the total travel payments had been reported in DOD's annual report for the same period. We noted that these discrepancies would have been brought to management's attention in a timely manner if monitoring activities, such as periodic reconciliations and comparisons, had been performed.

Office of the Comptroller officials told us that the DOD agencies and the military services performed verification reviews prior to submitting their improper payment information, providing assurance that the reported information was accurate and complete. As a result, they did not believe it was necessary for the project officer to independently validate this information, despite the requirement in the November 2006 memorandum to do so. However, based on the findings discussed earlier in this report, the oversight and monitoring activities performed by the agencies and services, as well as by the Office of the Comptroller, were inadequate. Without adequate monitoring and oversight, DOD is at risk of inaccurately reporting the extent of its improper payments, not taking the steps needed to reduce improper payments, and ultimately not meeting IPIA requirements.

DOD's recovery audit program was inadequate because it leveraged existing processes that were not specifically designed to identify and recover overpayments as stipulated in the Recovery Auditing Act.
Further, DOD's internal guidance lacked detailed instructions for effectively addressing recovery auditing requirements. We also found that DOD's reported recovery audit information for fiscal year 2007 was unreliable, as the reported amounts were incomplete and not fully supported. In addition, we determined that DOD's monitoring and oversight activities were inadequate to ensure the accuracy and completeness of the reported recovery audit information.

The majority of DOD's processes used to identify and recover commercial (contractor and vendor) overpayments were inadequate because they were not specifically designed to do so, as required by OMB guidance (see table 3 for these processes). Specifically, only DFAS's Internal Review and DOD's two external recovery audits were specifically designed to identify and recover commercial overpayments. We also found that DFAS suspended its Internal Review postpayment audit of contract payments, its largest payment activity, for fiscal year 2007 but did not disclose this limitation in its fiscal year 2007 AFR. DFAS officials attributed the suspension to a reallocation of staff resources to support base realignment and closure (BRAC) initiatives, specifically auditing the records of affected DFAS sites that processed commercial payments from 2006 through 2008. According to DFAS officials, to compensate for this suspension, DOD relied on existing prepayment controls to identify contract overpayments, such as daily manual reviews of a random sample of invoices of $500,000 or more. In January 2009, Internal Review officials informed us that the office had reinstituted its audits of contract payments. Internal Review's current audit covers the "catch-up" period of payments made between April 2006 and March 2008, and the audit results are expected by the end of fiscal year 2009.

According to DOD officials, the existing processes were adequately designed to fulfill the requirements of the Recovery Auditing Act and OMB guidance, and thus no further actions were needed. However, we found that the majority of the existing processes were not specifically designed to identify overpayments. For example, DFAS's contract reconciliations were performed only upon request, to resolve previously identified discrepancies (including possible overpayments) within DOD's contract, disbursement, and accounting records, such as to correct funding classification or lines-of-accounting errors. Because a contract reconciliation would be performed only if an error related to a specific contract had been found, contracts and the associated disbursements without identified errors would not be subject to this review. As a result, this process was not intended to identify new, undetected contract overpayments as envisioned by the Recovery Auditing Act. Similarly, DCMA's and DCAA's contract closeout processes were designed to ensure that applicable administrative actions had been completed during the course of a contract (e.g., that all classified documents were disposed of), not to identify contract overpayments. The DOD OIG has reported instances in which the department was unable to reconcile and close out contracts because of missing documentation and staff turnover.
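The contrast the report draws, between reconciliations triggered by known discrepancies and recovery auditing's proactive scan of all disbursements for undetected errors, can be illustrated with a simple screen for potential duplicate payments, one common class of overpayment (and one of the error types the BAM service discussed below is intended to catch before payment occurs). The sketch below uses assumed field names and is not DOD's or DFAS's actual detection logic.

```python
from collections import defaultdict

def flag_potential_duplicates(payments):
    """Scan an entire disbursement file for payments sharing the same
    contract, invoice, and amount -- candidates for recovery review.
    Field names are illustrative assumptions."""
    groups = defaultdict(list)
    for p in payments:
        key = (p["contract_id"], p["invoice_no"], p["amount"])
        groups[key].append(p)
    # Every payment beyond the first in a matching group is a candidate.
    return [grp[1:] for grp in groups.values() if len(grp) > 1]

payments = [
    {"contract_id": "C-001", "invoice_no": "INV-17", "amount": 42_500.00},
    {"contract_id": "C-001", "invoice_no": "INV-17", "amount": 42_500.00},
    {"contract_id": "C-002", "invoice_no": "INV-03", "amount": 9_800.00},
]
for dupes in flag_potential_duplicates(payments):
    for d in dupes:
        print("Potential duplicate for recovery review:", d)
```

The essential point is that such a screen runs against every transaction, not only those for which a discrepancy has already been reported.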
In addition, although DOD considered DCAA contract audits an integral part of its recovery audit program, DCAA officials pointed out that because recovery auditing is a review of DOD components' books and records, DCAA would generally have no role, since its audits primarily focus on contractors' records. Moreover, based on our review, DOD's internal guidance did not identify the applicable payment and accounting systems to be reviewed, the frequency of these reviews, or the applicable roles and responsibilities at DFAS and the military services processing commercial payments, including coordination of these efforts. We found no discussion of how DOD would leverage existing audits performed at military service audit agencies, such as the Army Audit Agency, to identify commercial overpayments. Further, the guidance did not include specific actions that addressed OMB's recovery auditing reporting requirements, including actions to develop a corrective action plan to address the root causes of payment errors and steps to measure the total cost of the agency's recovery auditing program. In fiscal year 2009, DOD acknowledged the need to clarify and update its guidance and has efforts underway to revise the recovery auditing chapter.

DOD further reported in its fiscal year 2007 AFR that it had actions underway to implement a Business Activity Monitoring (BAM) service to provide a real-time or near real-time automated mechanism for analyzing transactions to prevent and reduce the risk of duplicate payments and other types of errors. DFAS believes this new process will reduce the need for internal postpayment reviews of commercial payments by identifying errors before payment occurs. DFAS anticipates DOD-wide implementation of BAM during fiscal year 2009. Until the department establishes processes specifically designed to address recovery auditing and updates its internal guidance, it will be unable to determine the extent to which contract overpayments exist and are subsequently recovered, as the Recovery Auditing Act requires.

DOD did not fully address OMB recovery auditing reporting requirements in its AFR, such as disclosing the total costs associated with its recovery auditing activities and the amounts recovered from overpayments made to vendors. DOD's guidance did not include the specific recovery auditing reporting requirements identified in OMB's guidance. The status of DOD's actions on each of the nine reporting elements under OMB's recovery auditing reporting requirements follows:

- A general description and evaluation of the steps taken to carry out a recovery auditing program. DOD reported a general description of the steps taken to implement its recovery auditing program but did not provide an evaluation.

- The total cost of the agency's recovery auditing program. DOD informed us that it did not report total costs because it was unable to calculate the amount. DOD officials told us that the agency did not have cost accountants and thus lacked the expertise needed to calculate the total cost of the program, particularly the costs of the agency's internal recovery efforts (agency salaries and expenses).

- The total amount of contracts subject to review. DOD did not report the full amount subject to review. Specifically, because of an oversight error, DOD excluded from its AFR $20.5 billion of its fiscal year 2007 commercial payment universe, representing commercial payments processed by Army Europe and the U.S. Army Corps of Engineers. Had the $20.5 billion been included, the total universe of reported commercial payments would have increased to $340.3 billion.

- The actual amount of contracts reviewed. Because of the same oversight error, DOD also excluded the $20.5 billion from the amount reported as actually reviewed.

- The amounts identified for recovery. DOD reported the overpayments identified for recovery for contractors, but not for vendors. DFAS officials told us that they did not report the amounts identified for recovery, or actually recovered, for vendor payments because the department did not have a process to separate and quantify DOD-identified vendor overpayments from contractor-identified vendor overpayments. In July 2007, DFAS introduced an automated process, the Contractor Debt System (CDS), to track DOD-identified overpayments to vendors, but CDS was not fully deployed by the time DOD issued its fiscal year 2007 AFR.

- The amounts actually recovered in the current year. DOD reported the associated recovered amounts for contractor overpayments but did not report similar information for vendors because, as stated above, it lacked the processes needed to distinguish between DOD-identified and contractor-identified vendor overpayments.

- A corrective action plan to address the root causes of payment errors. DOD did not report on its corrective action plan to address the root causes of payment errors. We requested, but DOD did not provide, its corrective action plan to reduce commercial overpayments.

- A general description and evaluation of any management improvement program carried out as part of its recovery auditing program. DOD reported a general description of an initiative it planned to implement to reduce overpayments, the Business Activity Monitoring service, but did not report an evaluation of this initiative because it had not yet been implemented.

- A description and justification of the classes of contracts excluded from recovery auditing review by the agency head. This reporting element is not applicable, as DOD officials told us that the department reviewed all classes of contracts as part of its recovery auditing program.

DOD also could not substantiate the reported $18.9 million of DFAS-recovered contract overpayments for fiscal year 2007 because it did not maintain the underlying documentation that supported the amount. DOD was unable to recreate the documentation because the system data reflected real-time information and changed daily. In addition, although the Office of the Comptroller reported commercial overpayment data for the Navy for fiscal year 2008, it did not do so for fiscal year 2007 and was unable to confirm whether it should have reported comparable data for that period. Office of the Comptroller officials told us that they did not follow up with the Navy to determine why it did not report recovery audit information for fiscal year 2007. Our internal control standards related to control activities state that all transactions and other significant events need to be clearly documented and readily available for examination, and that documentation and records should be properly managed and maintained.
Until DOD reports the required information and ensures the accuracy of the information it does report, the extent to which Congress, OMB, and other oversight bodies can rely on this information to make informed decisions is questionable.

The Office of the Comptroller's oversight and monitoring of DOD's recovery auditing activities were inadequate; officials acknowledged that they had not verified the accuracy and completeness of the information reported in DOD's AFR. These activities were generally limited to compiling data received from DOD agencies and the military services and performing a fluctuation analysis of these data to identify changes in amounts between the current and prior year. We found that the roles and responsibilities of the Recovery Auditing Project Officer, who was tasked with overseeing DOD's recovery auditing efforts, were not documented in the November 2006 memorandum from the Office of the Comptroller establishing the position. In addition, the project officer devoted minimal time—about 10 percent for fiscal year 2007—to overseeing DOD's recovery auditing efforts, and the position experienced frequent turnover: between fiscal years 2007 and 2008, four different people were assigned to oversee DOD's recovery auditing program.

Our internal control standards for monitoring provide that processes should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations and includes regular management and supervisory activities, comparisons, and reconciliations. Our standards further provide that controls should include a wide range of diverse activities, including verification of information, and should be aimed at validating the propriety and integrity of both organizational and individual performance measures and indicators. Moreover, excessive turnover could significantly affect the department's ability to sustain the knowledge, skills, and experience needed to effectively oversee implementation of the Recovery Auditing Act requirements.

Office of the Comptroller officials acknowledged that the recovery audit data submitted by the DOD agencies and military services were not independently validated to ensure that the information was accurate, complete, and met the minimum reporting requirements. In December 2007, DOD established a recovery auditing working group, comprising representatives from the DOD agencies and military service components, to identify best practices for recovery auditing; however, this group has yet to meet. At the DOD component level, we were informed that the Navy, on its own initiative, established a working group in fiscal year 2008 to identify and recover commercial overpayments and report this information to the Office of the Comptroller. Without adequate monitoring and oversight, the department does not have adequate assurance that its future reporting under the Recovery Auditing Act will be accurate and complete.

DOD has not established the mechanisms—processes and detailed implementing guidance—needed to effectively implement the requirements of both IPIA and the Recovery Auditing Act. The department reports that its payment activities are at low risk for improper payments without adequate supporting analysis and documentation and despite its history of long-standing financial management weaknesses.
Because addressing IPIA requirements is a sequential process, DOD's failure to conduct comprehensive risk assessments, the first step in that process, has adversely affected decisions made in subsequent steps. DOD has not accurately portrayed the full extent of improper payments or the associated root causes. As a result, any corrective actions taken are likely to fall short of fixing the problems that resulted in these errors. With regard to recovery efforts, DOD continues to rely on processes that are inadequate for identifying the extent of overpayments to contractors and vendors and ensuring that these amounts are recovered. Until the department takes definitive action to fulfill the requirements of these acts and implement preventive internal controls, it is at risk of making improper payments and wasting taxpayer funds.

To improve DOD's efforts to address improper payment and recovery auditing requirements, we recommend that the Secretary of Defense direct the DOD Comptroller to take the following 13 actions.

For IPIA, the DOD Comptroller should

1. Establish and implement a systematic approach, as part of the risk assessment process, to ensure that all programs and activities are reviewed to determine susceptibility to improper payments.

2. Develop and implement detailed guidance for conducting risk assessments, including the steps to determine if risk exists, what those risks are, and the potential or actual impact of those risks on program operations.

3. Require DOD agencies and the military services to document the risk assessment methodology used, including the risk factors considered and the rationale for the risk level assigned to each payment activity.

4. Develop and implement a statistically valid methodology to estimate and report commercial improper payments (contract and vendor over- and underpayments). This methodology should include all payment errors, regardless of the source of the error (DOD, contractors, or vendors), as required by IPIA.

5. Identify and fully disclose the root causes of improper payments annually in the AFR.

6. Identify and fully disclose the corrective actions, and monitor them to ensure that they address the applicable root causes.

7. Perform oversight and monitoring activities to ensure the accuracy and completeness of the improper payment data submitted by the DOD agencies and the military services for inclusion in the AFR.

For the Recovery Auditing Act, the DOD Comptroller should

8. Establish and implement processes specifically designed to identify and recover commercial overpayments.

9. Develop and implement detailed guidance to assist DOD agencies and the military services in effectively carrying out recovery audits and activities, including the payment and accounting systems to be reviewed, the frequency of these reviews, applicable roles and responsibilities, and reporting requirements.

10. Establish and implement a process to identify costs related to the department's recovery auditing program, including costs for employees' salaries.

11. Establish and implement a process to identify and report vendor overpayments and the associated recovered amounts.

12. Maintain documentation to support the amounts reported in the AFR to allow for independent evaluation of this information.

13. Perform oversight and monitoring activities to ensure the accuracy and completeness of the recovery auditing data submitted by the DOD agencies and the military services for inclusion in the AFR, and document the roles and responsibilities of the Recovery Auditing Project Officer.
DOD provided written comments on a draft of this report, which are reprinted in their entirety in appendix IV. DOD also provided technical comments that we have incorporated as appropriate. In its written comments, DOD disagreed with all but 1 of our 13 recommendations designed to strengthen its improper payment and recovery auditing processes. DOD stated that the actions envisioned by our recommendations generally were already being accomplished within the department or were not required by OMB, and that such direction from GAO was therefore not necessary. We disagree. While DOD has efforts underway, as noted in this report, it has not yet established the processes and detailed guidance needed to effectively implement either IPIA or the Recovery Auditing Act. In its comments, DOD did not provide any new evidence that we had not already considered in our report. Accordingly, we continue to believe that our recommendations are critical to enhancing DOD's efforts to minimize improper payments and recover those that are made. The following paragraphs illustrate the nature of DOD's comments and our analysis of its key points.

DOD disagreed with our three recommendations aimed at enhancing its risk assessment processes. We recommended that the DOD Comptroller require DOD agencies and the military services to establish and implement risk assessment methodologies, along with documentation of the key factors considered and the rationale for the risk level assigned to each payment activity. DOD stated that such direction was not necessary because it had established IPIA program baselines and measures and reports on all of its IPIA programs annually in accordance with OMB guidance. As described in our report, however, DOD's risk assessment process was inadequate to ensure that appropriate consideration was given to the risks associated with its payment activities: we found an additional $322 billion in DOD outlays that had not been assessed under IPIA. For the payment activities that were assessed, DOD did not require its agencies and military services to document their risk methodologies, including the risk factors considered, the potential or actual impact on program operations, and the rationale for assessing risk as low, medium, or high. As a result, none of the nine DOD components that conducted risk assessments described their methodology or rationale for the low level of risk assigned to each applicable payment activity. Given the lack of supporting documentation and evidence for the risk assessments, as well as DOD's history of long-standing internal control weaknesses, including GAO's prior designation of eight functional DOD areas as high risk, the low risk levels are not based on sufficient analysis, are likely unrealistic, and do not reflect the wide range of vulnerabilities that exist within DOD.

DOD also disagreed with our recommendation that it develop and implement a statistically valid methodology to estimate and report commercial improper payments (contract and vendor over- and underpayments). DOD stated that it has followed guidance provided by OMB and that commercial improper payments are to be identified, recovered, and reported in accordance with the Recovery Auditing Act. As described in our report, DOD stated that reporting improper commercial payments under IPIA would create duplicate reporting because this information was captured as part of DOD's efforts to address Recovery Auditing Act requirements. We disagree because those actions are not sufficient to address IPIA.
Both acts must be addressed with regard to commercial payment activity; each has a different scope of review and different reporting requirements. Based on the improper payment definition under IPIA and OMB's guidance instructing agencies to develop a statistically valid estimate, the statistical sampling requirement applies to commercial payments under IPIA. Developing an across-the-board, systematic estimate of the extent of improper payments gives management baseline information for measuring progress in reducing improper payments and for determining how much investment in new internal controls would be cost-justified.

DOD disagreed with our two recommendations to identify and fully disclose in the AFR the root causes of improper payments and the corrective actions, including monitoring those actions to ensure that they address the applicable root causes. DOD commented that the AFR was not the appropriate forum for this level of detail and that it had procedures in place to identify, fully disclose, and monitor corrections. We have two main concerns with DOD's responses to these recommendations. First, DOD did not consistently follow OMB reporting requirements to identify root causes and related corrective actions, and the underlying documentation of the reported corrective actions lacked details as to the actions taken or planned and the corresponding results, if any. Second, because of the inherent responsibility to be a good steward of public resources, it is important that corrective actions and their effectiveness be openly communicated or made available not only to the Congress and agency management but also to the general public. Balancing the benefits of summarizing information with reporting compliance and user needs is critical; corrective actions cannot be effectively monitored and assessed unless the detailed actions are known and tied to the root causes of the improper payments they are intended to address.

We made five recommendations aimed at strengthening DOD's recovery audit processes: establishing and implementing processes specifically designed to identify and recover commercial overpayments, developing detailed guidance to carry out recovery audits and activities, identifying costs related to its recovery auditing program, implementing a process to identify and report vendor overpayments and the associated recovered amounts, and maintaining documentation to support reported amounts. DOD concurred with our recommendation to identify the costs related to its recovery audit program. DOD disagreed with the other four recovery auditing recommendations because it believed it had already established and implemented such processes. However, as we point out in this report, the majority of DOD's processes aimed at identifying and recovering improper payments were inadequate because their primary purpose was not to identify commercial improper payments as required by the Recovery Auditing Act. For example, DCMA's and DCAA's contract closeout processes were designed to ensure that applicable administrative actions had been completed during the course of a contract (e.g., that all classified documents were disposed of), not to identify contract overpayments. In addition, DOD's current FMR guidance (dated December 2005) did not include the specific elements necessary to effectively carry out a recovery auditing program. Further, DOD had not established a process to fully identify and report vendor overpayments.
This problem persisted: DOD acknowledged in its fiscal year 2008 AFR that while it was able to identify DOD-identified vendor overpayments for its DFAS component, it was unable to identify and report vendor overpayments for all of its components, and that efforts would continue until all DOD components achieved this capability.

DOD disagreed with our two recommendations related to oversight and monitoring and commented that they were duplicative, except for the additional language on documenting the roles and responsibilities of the Recovery Auditing Project Officer. We clarified these recommendations to emphasize the need for oversight and monitoring activities under both IPIA and the Recovery Auditing Act. As we stated in our report, the DOD Office of the Comptroller's oversight and monitoring of improper payment and recovery auditing activities were inadequate because the office did not verify the accuracy and completeness of the information received from DOD agencies and military service components and reported in its AFR.

We are sending copies of this report to interested congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9095 or by e-mail at dalykl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

The objectives of this report were to determine whether the Department of Defense (DOD) had adequate controls in place to address the Improper Payments Information Act (IPIA) and the Recovery Auditing Act requirements. To determine whether DOD adequately addressed IPIA requirements, we reviewed the applicable legislation and related OMB implementing guidance. We further reviewed DOD's agency financial reports (AFR) for fiscal years 2004 through 2008, internal DOD improper payment guidance, and prior GAO and DOD Office of Inspector General (DOD OIG) reports on improper payments. We reviewed these documents to understand DOD's efforts to address IPIA requirements and to identify previously reported issues with DOD's improper payment reporting. In addition, we performed the following work:

To assess DOD's IPIA risk assessment process for identifying payment activities susceptible to significant improper payments, we used our Standards for Internal Control in the Federal Government and our executive guide, Strategies to Manage Improper Payments: Learning From Public and Private Sector Organizations, as guidance to assess DOD's internal controls over disbursements. We interviewed agency officials, such as the Project Officer for Improper Payments and Recovery Auditing, and obtained and reviewed fiscal year 2007 IPIA survey responses, where available. We also compared the amounts reported as the basis for improper payments to other documentation, such as the President's Budget and the Statement of Budgetary Resources, to determine whether all DOD outlays were subject to improper payment assessments.

To assess the statistical validity of DOD's reported improper payment estimates for fiscal year 2007, we conducted an independent analysis of its sampling methodologies, including a review of the sampling plans for each DOD agency and military service component.
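One element of such an analysis can be shown concretely. OMB's precision criterion for improper payment estimates (described with DOD's sampling plans later in this report) specifies a confidence level and an interval half-width, which together imply a minimum sample size under the standard formula for estimating a proportion, n = z^2 * p(1 - p) / E^2. The sketch below is illustrative only: it assumes the most conservative error rate (p = 0.5) and a simple random sample, whereas an actual sufficiency review would also consider design features such as stratification.

```python
import math

def min_sample_size(z, half_width, p=0.5):
    """Minimum simple-random-sample size for estimating an improper
    payment rate within +/- half_width at the given z value.
    p = 0.5 is the most conservative (largest-variance) assumption."""
    return math.ceil(z**2 * p * (1 - p) / half_width**2)

# OMB's two alternatives: a 90 percent confidence interval of +/- 2.5
# percentage points, or a 95 percent interval of +/- 3.0 points.
print(min_sample_size(z=1.645, half_width=0.025))  # -> 1083
print(min_sample_size(z=1.96, half_width=0.030))   # -> 1068
```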
In addition, we performed an independent assessment of DOD's IPIA processes, including a legal analysis of the improper payment definition in relation to DOD's classification of commercial payment errors (contract and vendor payments) as either improper or proper, to determine whether DOD had reached appropriate conclusions. We also interviewed the IPIA Project Officer and Defense Finance and Accounting Service (DFAS)-Kansas City and DFAS-Columbus officials regarding their processes to identify, estimate, and reduce improper payments, and reviewed supporting documentation, when available, to gain an understanding of DOD's IPIA process. We also interviewed DOD OIG officials to discuss their findings and recommendations related to DOD's efforts to address IPIA requirements.

To assess DOD's corrective action plans to reduce improper payments, we interviewed agency officials and reviewed the reported corrective actions and corrective action plans to determine whether appropriate linkages existed between the root causes of improper payments and specific corrective action steps. We also analyzed DOD's improper payment error rates to determine whether the error rates for DOD payment activities had changed from fiscal year to fiscal year. To assess the accuracy and completeness of DOD's reported fiscal year 2007 improper payment amounts, we recalculated summary amounts included in DOD's IPIA survey and traced those amounts to supporting documentation.

To determine whether DOD had adequately addressed the Recovery Auditing Act requirements, we reviewed the applicable legislation and related OMB implementing guidance, DOD's AFRs for fiscal years 2004 through 2008, internal DOD recovery auditing guidance, and prior GAO and DOD OIG reports on recovery auditing. We interviewed agency officials, such as the Recovery Auditing Project Officer, the Director of Internal Review, and the Chief of DFAS's Debt Management Office, regarding DOD's process to identify and recover commercial overpayments, and reviewed accompanying and supporting documentation, when available. We also interviewed Defense Contract Audit Agency (DCAA) and Defense Contract Management Agency (DCMA) officials, such as the DCAA Headquarters Program Manager of the Policy Programs Division and DCMA contract specialists, to determine their role in DOD's recovery auditing process, and reviewed applicable guidance. Further, we interviewed DOD OIG officials, such as the DOD OIG Program Director and the Audit Project Manager at DFAS-Columbus, to discuss their findings and recommendations related to DOD's efforts to address recovery auditing requirements. We also interviewed Department of the Navy officials regarding the results of the recovery audit performed to identify overpayments made in its telecommunications program. In addition, we interviewed TRICARE Management Activity (TMA) officials to obtain clarification and supporting documentation on the healthcare-recovered amounts reported in DOD's AFR.

To assess the accuracy and completeness of DOD's reported fiscal year 2007 recovery audit information, we reviewed DFAS and TMA supporting documentation submitted to the Office of the Comptroller to substantiate the amounts reported in the AFR. We traced these schedules and total amounts back to various supporting breakdowns (at the transaction level). In addition, we recalculated and verified the accuracy of the recovery audit amounts in DOD's summary recovery auditing table.
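In audit terms, tracing reported totals to transaction-level support is a reconciliation: recompute each total from the underlying records and flag any difference. A minimal sketch of the idea, using hypothetical category names and amounts rather than the actual structure of DOD's recovery auditing schedules:

```python
def reconcile(reported_totals, transactions):
    """Recompute each category total from transaction-level records and
    flag differences from the reported amounts. Inputs are illustrative."""
    recomputed = {}
    for t in transactions:
        recomputed[t["category"]] = recomputed.get(t["category"], 0.0) + t["amount"]
    differences = {}
    for category, reported in reported_totals.items():
        diff = reported - recomputed.get(category, 0.0)
        if abs(diff) > 0.005:  # allow for rounding
            differences[category] = diff
    return differences

# Hypothetical reported total and transaction-level support.
reported = {"vendor_recoveries": 1_250_000.00}
support = [
    {"category": "vendor_recoveries", "amount": 700_000.00},
    {"category": "vendor_recoveries", "amount": 475_000.00},
]
print(reconcile(reported, support))  # nonzero result -> unsupported difference
```

A nonzero difference signals an amount that cannot be substantiated from the retained documentation, which is the condition described above for the reported $18.9 million.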
We conducted site visits at two of the five DFAS processing center locations: Kansas City, Missouri, and Columbus, Ohio. We selected the DFAS-Kansas City site because it was responsible for receiving IPIA survey information from other DFAS sites, compiling the information, and checking it for accuracy and completeness. As part of this site visit, we obtained an understanding of DFAS's process for conducting monthly postpayment reviews of military and civilian pay to identify improper payments. We selected the DFAS-Columbus site because it processed the majority of DOD's commercial payments, the agency's largest payment activity, on behalf of the DOD agencies and military services; it was also the only DFAS site that processed DOD contract payments. At the DFAS-Columbus site, we obtained an understanding of the commercial prepayment and postpayment controls in place that affect IPIA and recovery auditing requirements.

To determine the reliability of DOD's improper payment and recovery audit information, we interviewed knowledgeable agency officials, such as the DFAS-Indianapolis Director of Accounts Payable and DFAS-Indianapolis accounts receivable specialists, to ascertain the procedures used to assure the quality of the data. We reviewed DOD's commercial payment activity from its contract and vendor pay systems and its Improper Payments Online Database (IPOD), which stored the improper payment information. We also traced data back to supporting documentation, including DOD's fiscal year 2007 IPIA survey, the AFR, and the recovery auditing activity schedule. We performed a data reliability assessment of DOD's statistical sampling methodologies for the fiscal year 2007 reported improper payment estimates (see appendix III). We concluded that the data were reliable for our purposes.

We conducted our audit work from June 2008 to June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We obtained written comments on a draft of this report from the Under Secretary of Defense (Comptroller) and have summarized these comments in the Agency Comments and Our Evaluation section of this report.

In its fiscal year 2007 Agency Financial Report (AFR), the Department of Defense (DOD) reported on five payment activities as part of its Improper Payments Information Act (IPIA) reporting: military pay, military health benefits, civilian pay, military retirement, and travel pay. The information reported in DOD's AFR is compiled from the IPIA surveys submitted by the DOD agencies and military services. DOD agencies and military services used the related confidence levels over different ranges (generally, as prescribed in OMB guidance) to plan and estimate the improper payment amounts reported in DOD's fiscal year 2007 AFR. We reviewed the sampling plans for each of the five payment activities, at the component level, and determined that those methodologies generally complied with the Office of Management and Budget's (OMB) implementing guidance. OMB guidance requires that applicable agencies estimate the gross total of both over- and underpayments for those programs and activities identified as susceptible to significant improper payments.
OMB also requires that the estimates be based on a statistically valid random sample of sufficient size to yield an estimate with a 90 percent confidence interval of plus or minus 2.5 percentage points around the estimate of the percentage of improper payments. Alternatively, agencies may use a 95 percent confidence interval of plus or minus 3.0 percentage points around the estimate. If an agency cannot determine whether a payment was proper because of insufficient documentation, OMB guidance requires that the payment be considered an error. A brief description of each payment activity's methodology, as reported in its sampling plan, follows.

The military health benefits program consists of disbursements for the medical care of active duty military personnel, retirees, their family members, and family members of deceased service members. TRICARE Management Activity (TMA) processed all military health benefit payments for DOD. To estimate military health benefits improper payments, TMA selected samples from the two populations of its contract payments, as shown in table 4. The contract samples were drawn on a quarterly basis and stratified by dollar value. For both contract types sampled, denied payment samples were based on the amount billed, and nondenied payment samples were based on government costs.

The military pay program consists of military payroll disbursements. The Defense Finance and Accounting Service (DFAS) processed all military payroll payments for DOD. To estimate improper payments, DFAS-Kansas City conducted monthly postpayment reviews to determine the accuracy of net military pay using a simple random attribute sample and summed the monthly results to calculate an annual estimate. In addition to its statistical sample, the military pay program's estimate of improper payments included actual data on improper payment amounts. Table 5 shows the information reported in the sampling plan for military pay.

The civilian pay program consists of civilian payroll disbursements. DFAS, the Army, and the Navy processed civilian payments in fiscal year 2007. DFAS-Kansas City conducted monthly postpayment reviews to determine the accuracy of net pay using simple random attribute samples; in addition to the monthly samples, DFAS added actual improper payment data to further enhance its estimate, and the monthly results were summed to calculate the annual estimate. The Army's sampling plan consisted of annual postpayment reviews and analysis of a sample of disbursements, and the Navy's sampling plan consisted of a statistical sample of Military Sealift Command Civilian Mariners payments. Table 6 shows the detailed sampling plans for each component of the civilian pay program.

The military retirement program consists of disbursements to military retirees and annuitants. DFAS processed all military retirement payments for DOD. DFAS-Cleveland performed monthly postpayment reviews to determine the accuracy of payments using simple random samples. Three samples were conducted to assess the accuracy of payments: one for deceased retirees, one for retired accounts, and one for annuitant accounts, as shown in table 7. The deceased retirees sample is designed to identify retiree payments going to deceased individuals, while the retired and annuitant samples identify whether regular payments are accurate.

The following components processed travel payments: DFAS, the Army, the Navy, and the Air Force.
Table 8 shows the detailed sampling plans for each component of the travel pay program. DFAS-Indianapolis conducted random monthly reviews to determine the accuracy of payments and summed the monthly results to arrive at an annual estimate. Army travel pay consisted of payments processed by Army Korea, Army Europe, and the Army Corps of Engineers; the Army Europe and Army Corps of Engineers components conducted monthly postpayment reviews of travel payments, while Army Korea did not provide sampling results. The Navy conducted a statistical sample of travel payments processed through its system, and the Air Force conducted post-audit reviews of a random sample of travel payments.

In addition to the contact named above, Carla Lewis, Assistant Director; Sharon Byrd; Francis Dymond; Vanessa Estevez; Patrick Frey; Jason Kirwan; Crystal Lazcano; Sophie Simonard-Norman; Pamela Valentine; and David Yoder made key contributions to this report.
The Department of Defense (DOD) is required, as are other federal executive agencies, to report improper payment information under the Improper Payments Information Act of 2002 (IPIA) and recovery auditing information under section 831 of the National Defense Authorization Act for Fiscal Year 2002, commonly known as the Recovery Auditing Act. The DOD Office of Inspector General has previously reported deficiencies at DOD related to these acts, and GAO's prior work on DOD's reporting of its fiscal year 2006 travel improper payment estimate also identified shortcomings. Because of these and other long-standing weaknesses, the subcommittee asked GAO to examine DOD's fiscal year 2007 improper payment and recovery audit reporting to determine whether adequate processes existed to address both statutory requirements. To complete this work, GAO reviewed DOD's annual reports, conducted site visits, and met with cognizant DOD officials.

DOD's process for addressing IPIA requirements had significant weaknesses. For example, DOD did not conduct risk assessments for all of its payment activities: $322 billion in agency outlays was excluded from the amounts assessed. For those payment activities reviewed, DOD assessed the risk of improper payments as low despite the department's long-standing financial management weaknesses and could not provide documentation supporting the methodologies used and the final risk levels. GAO also found that DOD did not estimate improper payments under IPIA requirements for commercial pay, its largest payment activity. Further, the Office of the Comptroller's oversight and monitoring activities were inadequate because they did not include verifying the accuracy and completeness of the information in the agency financial report (AFR).

In addition to not estimating improper payments for commercial pay, DOD's processes for identifying and recovering commercial overpayments were inadequate because they were not designed for this purpose as required by the Recovery Auditing Act. For example, GAO found that contract closeout processes were designed to ensure that applicable administrative actions had been completed (e.g., that all classified documents were disposed of), not to identify contract overpayments. DOD also lacked detailed guidance on how to conduct a recovery audit program and did not fully address the recovery auditing reporting requirements in its AFR, such as disclosing the total cost associated with its recovery auditing activities. The Office of the Comptroller also did not verify the accuracy and completeness of the recovery audit information in the AFR, which resulted in $20.5 billion being excluded from the universe of commercial payments. DOD stated that its processes were sufficient to address the requirements of both acts but has since taken some actions, such as updating relevant guidance. Until these critical deficiencies are addressed, DOD will be unable to determine the extent to which improper payments exist and are subsequently recovered.
Producing the Memorial Day and Fourth of July concerts involves obtaining funding; determining each concert's artistic program; arranging for performing artists, talent, and production staff; and arranging for the television broadcast of the concerts. CCI received both federal and nonfederal funding to support the production of the 2012 through 2014 Memorial Day and Fourth of July concerts. CCI received federal funding from NPS through a cooperative agreement. As shown in figure 1, funding provided through the cooperative agreement included funding that NPS received through an interagency agreement with the Department of the Army (Army) and funding from NPS's own budget through the National Capital Area Performing Arts Program. CCI obtained nonfederal funding through corporate sponsorship agreements, grant agreements with the Corporation for Public Broadcasting, and license agreements with the Public Broadcasting Service (PBS). CCI also received funding from other sources, such as interest revenue, but these sources represent a minor portion of total funding.

CCI received from $8.5 million to $9.2 million and disbursed from $8.3 million to $8.9 million each year to produce the 2012 through 2014 Memorial Day and Fourth of July concerts. For purposes of our report, we classified funding into three categories: federal, nonfederal, and other funding sources. These three categories are presented in table 1. As shown in figure 2, CCI's receipts totaled $8.5 million for the 2012 Memorial Day and Fourth of July concerts, $8.7 million for the 2013 concerts, and $9.2 million for the 2014 concerts. Over the 3-year period, the percentage of funding CCI received from federal sources decreased slightly, even though the dollar amount remained consistent. The funding CCI received from nonfederal sources increased slightly in both dollar value and percentage over the same period; the increases were mainly attributable to an increase in funding received through the license agreements with PBS. Other funding sources remained consistent over the 3-year period.

CCI disbursed from $8.3 million to $8.9 million each year to produce the 2012 through 2014 Memorial Day and Fourth of July concerts, approximately 1.1 to 3.7 percent less than the annual funding received. CCI retains the excess of receipts over disbursements for use toward future concerts, in accordance with the cooperative agreements to produce the concerts. CCI records transactions in its accounting system under three main account series (i.e., Memorial Day concert, Capitol Fourth concert, and general and administrative) and uses detailed subaccounts to classify transaction types. We used the account descriptions from the subaccounts to classify disbursement transactions into six categories to present the types of disbursements made in producing the concerts. These six categories are presented and defined in table 2. As shown in figure 3, CCI disbursed $8.4 million for the 2012 Memorial Day and Fourth of July concerts, $8.3 million for the 2013 concerts, and $8.9 million for the 2014 concerts. Over the 3-year period, there were few changes in the proportions of disbursements CCI made in the overall production of the Memorial Day and Fourth of July concerts.
Production staff was CCI's largest disbursement category, accounting for 31 percent of all disbursements across the 3-year period. The largest change in proportion occurred in 2013, when technical equipment rose 3 percentage points; according to CCI, this was due to upgrades made to the band shell.

During our review, we found that the receipt transactions tested for all 3 years were supported by adequate documentation, approved by authorized management, and recorded in the appropriate year. Similarly, we found that the disbursement sample items tested for all 3 years were supported by adequate documentation and recorded in the appropriate year. However, we found that CCI did not effectively follow its existing policy for documenting the approval of payments made by check in 2013.

We found that all receipt transactions tested (21 transactions for 2012, 27 transactions for 2013, and 25 transactions for 2014) were supported by adequate documentation, approved by authorized management, and recorded in the appropriate year. Specifically, the receipt transactions tested were supported by signed cooperative agreements or contracts and deposit records. We traced all federal and nonfederal receipts tested to their respective cooperative agreements or contracts and verified that the amounts agreed, without exception. We also found that all receipt transactions tested were classified appropriately in the accounting system and recorded in the appropriate year.

During our review, we found that all disbursement sample items tested (66 transactions per year) were supported by adequate documentation and recorded in the appropriate year, but management approval controls over certain payments were not implemented effectively for one of the years tested. All disbursement transactions tested were supported by contracts or vendor invoices and payment records, and management approval controls over payments were implemented effectively for the populations tested in 2012 and 2014. However, in 2013, we found 3 transactions that were not approved in accordance with CCI's policy.

CCI's Procurement and Accounts Payable Policy, section D, provides that invoices of less than $500 may be approved and paid with just one signature on the related check. All other payments made by check should be signed by two of the following authorized managers: President, Vice President, Chief Financial Officer, or Treasurer. There are, however, temporary exemptions to the two-signature rule. For each year under review, CCI provided us with the memos that establish these exemptions, titled "Accounts Payable Policy Temporary Exemptions." These memos state that for certain periods, specifically July through October, authorized managers may be out of the office and unavailable to satisfy the two-signature requirement on checks equal to or over $500. In such instances, a list of vendors with amounts due is to be e-mailed or faxed to one of the out-of-office managers for approval, and CCI is to retain the e-mail or fax containing the approval as evidence of a secondary approval for checks equal to or over $500 issued with only one signature during the July through October time frame.

In 2013, we found 3 transactions totaling over $12,000 that were paid with checks over $500 containing only one signature, and the dates on these 3 checks did not fall within the stated policy exemption period. For 1 of the 3 checks, CCI provided us with an e-mail approval.
However, this was not consistent with CCI's exemption policy, as the check was approved and issued in December 2013, outside the exemption period. For the other 2 checks, no explanation or evidence of secondary approval was provided. Furthermore, while our testing of the 2012 population demonstrated effective implementation of management approval controls over payments, we identified one error in that year that also involved a missing signature on a check issued outside the exemption period. Because of the large dollar value of the error ($43,550) and the related risks of improper payments, fraud, and abuse, we believe the error warrants management's attention.

These errors occurred, in part, because CCI's Procurement and Accounts Payable Policy has not been updated since November 2008 and does not include all internal control activities to be performed when authorized management may be out of the office and regular check authorization procedures cannot be followed. CCI issued separate memos documenting exemption periods for each of the years under review, but these memos were not incorporated into CCI's policy. Furthermore, the memos covered only a certain time frame each year, while exceptions were occurring outside these stated time frames. CCI added that where checks had only one signature, the associated invoices had been approved. While we received copies of the invoices showing approval, the signatures on the invoices were not dated. Therefore, we were not able to determine whether these transactions were approved prior to payment and, as a result, could not rely on the invoice approval as a compensating control. In addition, CCI's Procurement and Accounts Payable Policy does not provide for replacing check signatures with invoice approvals. Without incorporating all control activities over payment approvals into its policy and procedures, CCI increases the risk that the control activities will not be implemented consistently and that improper payments, fraud, and abuse will not be prevented or detected.

CCI, a private nonprofit organization, relies on federal and nonfederal funding to put together concerts that honor military service and celebrate America's independence. Over the 3-year period of our review, the amount of funding CCI received increased slightly, mainly attributable to a nonfederal source, and disbursements were slightly less than funds received. CCI's 2012 through 2014 concert receipts were adequately documented, properly approved by authorized management, and recorded in the appropriate year. In addition, CCI's 2012 through 2014 concert disbursements were adequately documented and recorded in the appropriate year. However, we identified certain disbursements in 2013 that were not properly authorized in accordance with CCI's policy. Without incorporating all control activities over payment approvals into its policy and procedures, CCI increases the risk that improper payments, fraud, and abuse will not be prevented or detected.

We recommend that CCI's Chief Financial Officer update the existing Procurement and Accounts Payable Policy to fully document CCI's management approval controls over payments made by check, including exemptions to regular procedures. This should include the approval procedures to be followed during periods when only one authorized manager is available to sign checks for payment.

We provided a draft of this report to CCI for comment.
In its written comments, reprinted in appendix II, CCI stated that it views our report as a favorable review of its finances. For the four disbursements we found in which there was only one signature on a check, CCI stated in its comments that it had provided us with records indicating that these disbursements were properly approved by management. However, as noted in this report, checks issued outside the exemption period should have two signatures under CCI's policy. While CCI provided invoices that it stated showed evidence of approval for these disbursements, as we also noted, the associated invoices did not have an approval date, and CCI's policy does not provide for replacing check signatures with invoice approvals. Nonetheless, CCI stated that its Board of Directors will take appropriate action to address our recommendation.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, Capital Concerts, Inc., and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3406 or malenichj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Our objectives were to examine (1) how much funding, federal and nonfederal, was received and used by Capital Concerts, Inc. (CCI) for the 2012 through 2014 Memorial Day and Fourth of July concerts and (2) to what extent CCI's recorded receipts and disbursements related to these concerts were supported by adequate documentation, approved by authorized management, and recorded in the appropriate year. The scope of our audit was limited to the receipts and disbursements of CCI, because the majority of the funding received and disbursed to produce the Memorial Day and Fourth of July concerts held on the U.S. Capitol Grounds rests with this organization. Other, minimal costs are incurred by the National Park Service and the National Symphony Orchestra.

To examine how much funding, federal and nonfederal, was received and used by CCI for the Memorial Day and Fourth of July concerts, we obtained the populations of receipt and disbursement transactions from CCI's accounting system for 2012 through 2014. To verify the completeness and accuracy of these populations, we performed data reliability procedures by reconciling the population totals to the trial balances and then comparing the trial balances to the audited financial statements. We then used the account descriptions in CCI's accounting system to classify the transactions into categories for reporting purposes. CCI classifies revenue on its financial statements as public support, license fee, earnings on deferred compensation/retirement plan, royalty fees and other income, and interest income. For the purposes of this report, and to provide the requesters more useful information about the sources of funding received, we categorized revenues as federal funding, nonfederal funding, and other funding sources. To classify transactions into these three categories, we used the revenue subaccount descriptions. We obtained CCI management's concurrence with our classifications.
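Mechanically, this kind of classification is a lookup from subaccount description to reporting category. The sketch below is purely illustrative: the subaccount names are hypothetical and do not reproduce CCI's actual chart of accounts.

```python
# Hypothetical subaccount descriptions mapped to the report's three
# funding categories; these are assumptions for the sketch only.
CATEGORY_BY_SUBACCOUNT = {
    "NPS cooperative agreement": "federal",
    "Corporate sponsorship": "nonfederal",
    "PBS license fee": "nonfederal",
    "Interest income": "other",
}

def categorize_receipts(transactions):
    """Total receipts by reporting category using subaccount descriptions.
    Unmapped subaccounts default to 'other' (a simplification)."""
    totals = {"federal": 0.0, "nonfederal": 0.0, "other": 0.0}
    for t in transactions:
        category = CATEGORY_BY_SUBACCOUNT.get(t["subaccount"], "other")
        totals[category] += t["amount"]
    return totals

receipts = [
    {"subaccount": "NPS cooperative agreement", "amount": 2_500_000.00},
    {"subaccount": "PBS license fee", "amount": 1_200_000.00},
    {"subaccount": "Interest income", "amount": 4_300.00},
]
print(categorize_receipts(receipts))
```

Obtaining management's concurrence, as described above, serves as a check that the mapping reflects the actual content of each subaccount.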
CCI classifies expenses on its financial statements into three categories: Memorial Day concert, Capitol Fourth concert, and general and administrative. These three categories do not capture the various types of disbursement activities CCI undertakes. CCI uses over 500 different subaccounts to classify its disbursements. To provide the requesters with more useful information, we categorized disbursements into six categories: production staff, technical crew and postproduction, technical equipment, talent, promotion, and other administrative and miscellaneous. To classify transactions into these six categories, we used the disbursement subaccount descriptions. We obtained CCI management’s concurrence with our classifications. To examine to what extent CCI’s recorded receipts and disbursements related to the Memorial Day and Fourth of July concerts were supported by adequate documentation, approved by authorized management, and recorded in the appropriate year, we met with officials from CCI to discuss the specific nature and characteristics of their organization’s receipts and use of concert-related funding. We also discussed the manner in which they record and track concert-related receipts and disbursements in their system of records, obtained their receipt and disbursement policies and procedures, and performed walk-throughs to gain an understanding of the flow of information throughout the organization. Based on the understanding we gained, we developed data collection instruments to test the receipts and disbursements. To identify our populations for testing, we first obtained from CCI all transactions that made up total recorded revenue and those that made up total recorded expenses for each year. We then removed transactions that canceled each other (offset transactions), and for receipts we also removed transactions with abnormal balances (debit transactions) and reviewed these transactions separately to determine the nature of the transactions and why they occurred. After removing these, the remaining transactions for each year made up the populations used for the sampling methodology described below. For receipts, we selected the largest receipt transactions that collectively represented at least 95 percent of the total dollar value of each year’s population. We chose this methodology because the receipt transactions that made up at least 95 percent of the total receipts population consisted mainly of a few high-dollar value transactions, and the remaining approximately 5 percent consisted of several low-dollar value transactions that were primarily interest receipts, which were not relevant to the scope of our audit. For the 3-year period we reviewed, a total of 73 individual receipt transactions were tested. Table 3 describes the total transaction count and dollar value of each population and the total number of transactions tested. To perform detailed testing on receipts, we verified that (1) the transaction amount agreed to supporting documentation, such as cooperative agreements, contracts, and deposit records; (2) the supporting documentation had evidence of approval; and (3) the transaction was accurately recorded in the appropriate year. For disbursements, using an attribute sampling method, we selected a random sample of disbursement transactions for each year under review. We chose this methodology because of the large volume of transactions in our populations and the nature of the testing we were to perform. 
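To make these two selection approaches concrete, the following minimal sketch, in Python, illustrates the receipts dollar-coverage rule and one common attribute (discovery) sampling size calculation; the confidence level and tolerable error rate planned for the disbursement sample are stated immediately below. The function names, the zero-expected-deviation formula, and the example output are illustrative assumptions rather than GAO's actual sampling tools.

```python
import math

def select_by_dollar_coverage(amounts, coverage=0.95):
    """Receipts rule: take the largest transactions until they cover at
    least `coverage` of the population's total dollar value."""
    total = sum(amounts)
    selected, running = [], 0.0
    for amt in sorted(amounts, reverse=True):
        selected.append(amt)
        running += amt
        if running >= coverage * total:
            break
    return selected

def attribute_sample_size(confidence=0.95, tolerable_rate=0.07):
    """One common attribute (discovery) sampling formulation, assuming zero
    expected deviations: the smallest n with (1 - p)^n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - tolerable_rate))

print(select_by_dollar_coverage([5_000_000, 2_000_000, 900_000, 100_000]))
print(attribute_sample_size())  # 42 under these illustrative assumptions
```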
We planned our testing to be 95 percent confident with a tolerable error rate of 7 percent. Table 4 describes the total transaction count and dollar value of each population and the total number of sampled transactions selected for testing. To perform detailed testing on disbursements, we verified that (1) the transaction amount agreed to supporting documentation, such as contracts, vendor invoices, and payment records; (2) the supporting documentation had evidence of approval; and (3) the transaction was accurately recorded in the appropriate year. For the 3 years under review, disbursements made by check represented approximately 59 to 77 percent of our sample items, and disbursements made by electronic fund transfer represented approximately 23 to 41 percent of our sample items. Therefore, for evidence of approval, we reviewed different documentation for each payment type. For payments made by check, we obtained check images from the bank to assess whether signatures by managers authorized to sign checks were evident and in accordance with CCI's policy. For payments made by electronic fund transfer, we examined other source documents, such as employment records for payroll transactions and expense reports for petty cash reimbursements and credit card transactions, to assess whether electronic fund transfers were approved by the appropriate manager prior to disbursement.

We conducted this performance audit from December 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from Capital Concerts, Inc.

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Nina M. Rostro (Assistant Director), Sharon Byrd, Natasha Guerra, Brian Harechmak, Jason Kelly, and Joseph Willert made key contributions to this report.
CCI, a private nonprofit organization in the District of Columbia, has produced the annual Memorial Day and Fourth of July concerts held on the U.S. Capitol Grounds for over 25 years and over 35 years, respectively. GAO was asked to audit all concerts hosted by CCI for the past 3 years. This report examines (1) how much funding, federal and nonfederal, was received and used by CCI for the 2012 through 2014 Memorial Day and Fourth of July concerts and (2) to what extent CCI's recorded receipts and disbursements related to these concerts were supported by adequate documentation, approved by authorized management, and recorded in the appropriate year. GAO performed data reliability procedures over CCI's 2012 through 2014 receipt and disbursement transactions, classified transactions by funding sources, obtained CCI management's concurrence with the classifications, and selected transactions for each year to test key controls.

Capital Concerts, Inc. (CCI) received both federal and nonfederal funding to support the production of the Memorial Day and Fourth of July concerts held on the U.S. Capitol Grounds from 2012 through 2014. CCI received federal funding through a cooperative agreement with the Department of the Interior's National Park Service (NPS) that included funding that NPS received through an interagency agreement with the Department of the Army and funding from NPS's own budget through the National Capital Area Performing Arts Program. CCI obtained nonfederal funding through corporate sponsorship agreements, grant agreements with the Corporation for Public Broadcasting, and license agreements with the Public Broadcasting Service. CCI also received funding from other sources, such as interest revenue, but these transactions represent a minor portion of total funding. CCI received funding for the 2012 through 2014 Memorial Day and Fourth of July concerts ranging from $8.5 million to $9.2 million each year. CCI disbursed from $8.3 million to $8.9 million each year to produce these concerts, approximately 1.1 to 3.7 percent less than the annual funding received.

However, CCI's Procurement and Accounts Payable Policy has not been updated since November 2008 and does not include all internal control activities over payment approvals. Without incorporating all control activities over payment approvals into its policy and procedures, CCI increases the risk that the control activities will not be implemented consistently and that improper payments, fraud, and abuse may not be prevented or detected. GAO recommends that CCI update its existing policy to fully document CCI's management approval controls over payments made by check, including exemptions to regular procedures. The update should include approval procedures to be followed during periods when only one authorized manager is available to sign checks for payment. In commenting on a draft of this report, CCI maintained that all disbursements were properly approved, but GAO disagreed, as noted in the report. CCI stated it will take appropriate action to address GAO's recommendation.
Congress authorized the five-year MPNDI pilot program in Section 866 of the NDAA for Fiscal Year 2011 with the intent to test whether streamlined acquisition procedures, similar to those available for commercial items, can serve as an effective incentive for "nontraditional defense contractors" to innovate in areas useful to DOD. Congress extended authority for the pilot program through December 31, 2019, in Section 814 of the National Defense Authorization Act for Fiscal Year 2014. Section 866 defined a number of terms for the purposes of the pilot program, such as MPNDI and nontraditional defense contractor, as shown in table 1. Section 866 also required that contracts awarded under the pilot program meet a number of contract requirements, as outlined in table 2.

To help encourage nontraditional defense contractors to offer items to DOD under streamlined procedures, Congress exempted contracts awarded under the pilot program from the requirement to submit certified cost or pricing data and from the federal Cost Accounting Standards, two requirements that have previously been identified as increasing contractor costs or discouraging such companies from competing for federal contracts. Certified cost or pricing data, by regulation, are to be provided to the government by contractors and subcontractors at certain contract threshold levels, unless an exception applies, to support their proposed prices and to certify that the data are accurate, complete, and current. Certified cost or pricing data documentation requirements can be extensive. Cost Accounting Standards are mandatory for use by executive agencies and by contractors and subcontractors in estimating, accumulating, and reporting costs in connection with the pricing and administration of, and settlement of disputes concerning, generally all negotiated prime contract and subcontract procurements with the government in excess of the thresholds for submission of certified cost or pricing data.

Congress required that DOD provide information on contracts awarded under the pilot program not later than 60 days after the end of each fiscal year in which the pilot program is in effect. Each report is to include the contractor, the item or items to be acquired, the military purpose to be served by the item(s), the amount of the contract, and the actions taken to ensure that the price paid is fair and reasonable.

Attracting contractors that do not traditionally pursue government contracts because of the cost and impact of complying with government procurement requirements has been a longstanding concern within the government. Congress and others have taken various steps, including creation of the MPNDI pilot program, to address these concerns. For example, in 1996 Congress established a commercial item test program to provide contracting officers with additional procedural discretion and flexibility to acquire commercial items. Commercial items and services are those generally available in the commercial marketplace, in contrast with items developed to meet specific federal government requirements. Commercial items are generally exempt from the requirement to provide certified cost or pricing data or to comply with cost accounting standards. Similarly, Congress provided DOD the authority to enter into "other transactions" to take advantage of technological advances made by the private sector. Other transactions are generally not subject to federal laws and regulations governing standard procurement contracts.
Further, in May 2013, the Deputy Secretary of Defense asked the Defense Business Board to begin studying ways to encourage broader private sector participation in DOD acquisitions in order to spur innovation.

DOD reported that it has not awarded any contracts using the authority provided by the pilot program since it was initiated in 2011. As a result, the pilot program has not resulted in DOD obtaining items that otherwise might not have been available to it, nor has it assisted DOD in the rapid acquisition and fielding of capabilities to meet urgent operational needs. Our review of input provided by the military departments and defense agencies to the office of Defense Procurement and Acquisition Policy (DPAP) and our interviews with DOD program and contracting officials identified a number of factors that may be contributing to the lack of use of the pilot program, including limited awareness of the program, challenges in meeting all the criteria needed to use the program, and the ability to use other flexibilities to obtain needed items. DOD has not taken steps to address these concerns, however, which may continue to limit the future use of the pilot program.

DOD initiated the pilot program in June 2011 through an interim rule to the DFARS. Under this interim rule, DOD created DFARS subpart 212.71, which generally reiterated the pilot program requirements as prescribed by Section 866. The subpart provided that a new clause, DFARS 252.212-7002, be used in all solicitations that would meet the criteria of the pilot program. The subpart also required that departments and agencies prepare a consolidated annual report to provide information about contracts awarded under the pilot program authority and submit it by October 31 of each year. The interim rule was finalized without change in January 2012. The military departments also provided varying levels of guidance that generally reiterated the pilot program rules as stated in the DFARS. For example, the Navy Marine Corps Acquisition Regulation Supplement requires that contracts awarded under the pilot program during the preceding fiscal year be reported annually to the Deputy Assistant Secretary of the Navy for Acquisition and Procurement. Air Force Materiel Command restated the requirements of the pilot program in its February 2012 and April 2014 Contracting Bulletins, which are distributed to contracting personnel across the command, and also issued corresponding training slides that restated the requirements of the pilot program. In addition, during the course of our audit, the Army distributed a policy alert on the proper use of the pilot program by restating the requirements.

Over the past three years, the Under Secretary of Defense for Acquisition, Technology and Logistics has reported to Congress that DOD has not awarded contracts under the authority provided by the pilot program during each of the prior fiscal years. To prepare its annual reports, DPAP requests data from each of the military departments, defense agencies, and other defense offices on all instances of use of the pilot program during the relevant fiscal year. Each DOD component is required to provide information for each contract awarded under the pilot program, including the contractor, item(s) acquired, price, military purpose served by the item(s) acquired, and steps taken to ensure fair and reasonable pricing. DPAP also requires the components to report if they have not used the pilot program during the course of the prior fiscal year.
DOD's annual reports found, and our discussions with military department and defense agency officials confirmed, that DOD has not used the authority from fiscal years 2011 to 2013. As a result, the pilot program has not resulted in DOD obtaining items that otherwise might not have been available to it, nor has it assisted in the rapid acquisition and fielding of capabilities to meet urgent operational needs. The absence of contracts awarded under the pilot program precludes us from determining how DOD protected the government's interests in paying fair and reasonable prices for the item(s) acquired. Our review of the input provided by the defense components, as well as information from our interviews with policy, program, and contracting officials at the 11 components we contacted, identified a number of issues that may be contributing to the lack of use of the pilot program, including limited awareness of the pilot program, challenges in meeting all the criteria required to use the pilot program, and the ability to use other flexibilities to obtain needed items. DOD is aware of a number of these issues but has no ongoing efforts to address them. The following examples illustrate these issues.

Limited awareness of the pilot program: In several instances, DOD officials from commands and contracting activities that we interviewed were generally unaware of the pilot program prior to our review, noting that the program had not been well publicized, or could cite only its inclusion in the DFARS. For example, program officials from the Army's Rapid Equipping Force told us that they were notified of the pilot program on October 1, 2014, as a result of our review. Similarly, program officials from the Joint Improvised Explosive Device Defeat Organization were unaware of the pilot program until we contacted them for information. Further, the Air Force noted in its response to the fiscal year 2014 DPAP data call on the pilot program that the program had not been well publicized within the department and identified this issue as one of several reasons why the program had not been used.

Challenges in meeting all the criteria required to use the pilot program: Program and contracting officials from commands and contracting activities we interviewed stated that it was difficult to identify proposed acquisitions that met all the requirements for using the pilot program. Officials from 5 of the 11 offices that we spoke with provided examples or told us that in their experience the items they acquire generally need to be modified for government use and therefore may not meet the requirement that the item be developed exclusively at private expense. For example, officials from the Army Rapid Equipping Force told us about a 2011 need to identify and field a sensor package that could measure, collect, and store data on improvised explosive device blast pressure experienced by soldiers inside and outside of vehicles. These officials noted that doing so would enable the Army to advance research and treatment on mild traumatic brain injuries. The Army determined that no existing nondevelopmental items suitably measured such forces, so it modified an existing commercial item to meet the need, which was then deployed to Afghanistan in June 2012. In another example, a contracting official from the Air Force Materiel Command identified a commercially available airplane landing system that was modified by the government for military use.
In its response to the fiscal year 2014 DPAP data call, the Air Force noted that the many requirements of the pilot program that must be met, such as delivery within nine months, use of nontraditional contractors, the required use of competitive procedures, and the $50 million limit on contract value, limited the applicability of the program. Additionally, several DOD officials cited the requirement to use competitive procedures as a limiting factor. DPAP officials noted that Section 866 requires the use of "competitive procedures" without further definition. These officials noted that 10 U.S.C. 2302 defines competitive procedures as acquisitions conducted under full and open competition—that is, under which all responsible bidders or offerors are eligible to compete. As such, DPAP officials did not believe that Section 866 allowed acquisitions to be conducted using one of the exceptions to competitive procedures, such as awarding a contract on a sole-source basis. However, some DOD officials stated that they thought the program might be more useful if exceptions to competition could be used. They noted that the ability to use exceptions to competition would make one of the key features of the pilot program—the exemption from the need to provide certified cost or pricing data—more applicable, because certified cost or pricing data requirements would generally apply to contracts that are awarded noncompetitively.

The ability to use other flexibilities to obtain needed items: Contracting officials from the military departments with whom we spoke identified other existing authorities—such as commercial item acquisition procedures—that they would use to acquire items that they identified as potentially covered by the pilot program. In several cases, officials provided examples of nondevelopmental items developed at private expense that they acquired through competitive commercial item acquisition procedures. As such, DOD would generally be precluded from obtaining certified cost or pricing data or from requiring the contractor to adhere to federal cost accounting standards, two benefits that the pilot program was to provide to attract commercial firms. For example, during our interview with the Naval Surface Warfare Center, contracting officials initially identified data recorders as potentially meeting the requirements of the pilot program but ultimately concluded that these recorders would most likely be acquired as a commercial item. Further, in another example, DPAP officials told us that military purpose aviation fuel tanks were acquired as a commercial item rather than under the pilot program because DOD determined the fuel tanks met the definition of a commercial item. As we found in our February 2014 report on DOD's commercial item test program, DOD contracting officers have many tools in their toolkit, and the decision regarding the appropriate contracting method for a commercial item is left to the contracting officer's discretion. We found that several factors influence the contracting officer's decision, such as the estimated value of the contract at award, the urgency of the requirement, the availability of existing contracts or contracting vehicles, and the nature of the item or service being acquired. GAO has issued several reports on DOD's urgent needs processes. See, for example, GAO, Warfighter Support: DOD's Urgent Needs Processes Need a More Comprehensive Approach and Evaluation for Potential Consolidation, GAO-11-273 (Washington, D.C.: Mar. 1, 2011); and GAO, Warfighter Support: Improvements to DOD's Urgent Needs Processes Would Enhance Oversight and Expedite Efforts to Meet Critical Warfighter Needs, GAO-10-460 (Washington, D.C.: Apr. 30, 2010).
Officials noted that the items in these examples required some modifications to the design; as a result, these officials were not certain whether the items could have been acquired using the pilot program.

DPAP officials noted that they are aware of many of these issues but have no ongoing efforts to specifically address them. GAO's prior work has identified several sound management practices for developing, implementing, and assessing pilot programs, including developing objectives that link to the goals of the pilot and ensuring that the results of the pilot are communicated to stakeholders. In the case of the MPNDI pilot program, DOD has not proactively identified opportunities to use the pilot program in areas useful to DOD—a goal of the pilot—such as by identifying how the authority might help DOD attract nontraditional contractors to fill needs in specific industries, technologies, or capabilities that are not met by existing authorities. The pilot program was also intended to test whether streamlined acquisition procedures, similar to those available for commercial items, can serve as an incentive for "nontraditional defense contractors" to innovate in areas useful to DOD. DOD has not determined whether the pilot program provides new flexibilities or the opportunity to use streamlined acquisition procedures that are not already available under other authorities. Lastly, DOD's prior annual reports to Congress have not identified whether there are specific requirements under the pilot program, such as the need to award contracts competitively, that might hinder its use. Taking action to identify how the pilot program authority may assist in (1) attracting nontraditional contractors, (2) testing the use of new flexibilities or streamlined procedures, and (3) identifying and reporting to Congress on specific requirements of the pilot program that may hinder its use could better position DOD to determine whether the pilot program provides meaningful value to the department.

DOD has long been concerned with involving commercial and small business companies more fully so that it can acquire innovative solutions to meet military requirements. Congress created and later extended the MPNDI pilot program to provide additional flexibilities to assist DOD in acquiring needed items and to spur innovation and participation from nontraditional defense contractors, such as by using streamlined acquisition procedures or eliminating certain requirements that had been identified as barriers to attracting firms that traditionally did not do business with DOD. However, DOD has not yet used the program in the 3 years since it was initiated. Determining whether the pilot program provides meaningful value to the department requires that DOD do more than make the authority available for use by its personnel. In that regard, DOD has not provided assistance to its program and contracting officials to help identify opportunities to use the pilot program as currently structured, nor has it reported to Congress on issues that hinder its use, such as the requirement to use competitive procedures. Further, DOD identified a number of existing authorities that enabled it to acquire needed goods and services quickly from the private sector.
Identifying whether there are untapped targets of opportunity in terms of industries, technologies, or capabilities; gaps in existing authorities or procedures that could be filled; or limitations in the pilot program's current structure that hinder its use can help shape the future of the pilot program. Unless DOD takes such action, the remaining 5 years of the authority may not produce results that differ from those reported over the past 3 years. If so, DOD will have missed an opportunity to make an informed decision as to whether the authority provided under the pilot program adds value to the department. On the other hand, if DOD concludes, on the basis of a robust pilot program, that the authority does not add value, then that conclusion should stand.

To maximize the potential value of the MPNDI pilot program, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics take the following three actions: identify how this authority, as currently structured, may assist DOD in attracting nontraditional contractors in specific industries, technologies, or capabilities; identify whether there are opportunities to test flexibilities or streamlined procedures that are not otherwise available under existing authorities; and, if DOD believes changes are needed to the current structure of the pilot program to increase its utility, identify such issues in its subsequent annual reports to Congress.

We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix II, DOD concurred with each of our recommendations. DOD stated that it found meeting all the criteria needed to use the authority, and in particular the need to use "competitive procedures," limited the department's ability to effectively use the pilot program authority and its ability to test flexibilities or streamlined procedures not otherwise available to the department. DOD stated it would identify such issues in future reports to Congress. DOD also stated it would continue to examine how the pilot program may assist in attracting nontraditional contractors but did not specify how it would do so. As we indicated in the report, identifying potential targets of opportunity, such as specific industries, technologies, or capability gaps where the program's use may provide an additional incentive for nontraditional contractors to do business with DOD, can help shape the future of the pilot program. DOD also provided technical comments, which we incorporated in the report as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology and Logistics, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Section 866 of the National Defense Authorization Act (NDAA) for Fiscal Year 2011 mandated that GAO assess DOD's use of the pilot program.
Specifically, Section 866 mandated that GAO assess whether the pilot program (1) enabled DOD to acquire items that otherwise might not have been available to DOD; (2) assisted the department in the rapid acquisition and fielding of capabilities needed to meet urgent operational needs; and (3) protected the interests of the United States in paying fair and reasonable prices for the item or items acquired. This report addresses the extent to which DOD awarded contracts that met these goals and issues potentially affecting use of the pilot program.

To determine the extent to which DOD awarded contracts under the pilot program that met these goals, we reviewed Section 866 of the NDAA and other applicable laws, the Federal Acquisition Regulation (FAR), the Defense Federal Acquisition Regulation Supplement (DFARS), DOD's annual reports to Congress on the pilot program from fiscal years 2011 to 2013 (the most recent fiscal year for which DOD had submitted a report at the time of our review), DOD's preliminary data gathered in preparation for its fiscal year 2014 report, and DOD's implementing guidance. To test whether DOD's annual reports accurately reflected the use of the pilot program, we requested data from the military departments (the Office of the Assistant Secretary of the Air Force (Acquisition); the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology; and the Office of the Deputy Assistant Secretary of the Navy (Acquisition and Procurement)) on contracts that included DFARS clause 252.212-7002, Pilot Program for Acquisition of Military-Purpose Nondevelopmental Items, which is to be included in contracts awarded under the pilot program. This effort identified 105 contracts awarded from fiscal years 2011 to 2013 that included the clause. The military departments, however, subsequently determined that none of the contracts identified had used the pilot program authority, and they provided us information on how they identified the contracts that included the clause, the steps they took to verify the information in their contracting systems with cognizant contracting officials, and the steps they were taking to correct these errors, including modifying the contracts to delete the clause and issuing additional guidance. Based on the actions taken by the military departments in response to our request for data, we determined that the data on DOD's reported use of the authority from fiscal years 2011 to 2013, as originally provided to the defense committees, were sufficiently reliable for the purposes of this report. We interviewed DOD and military department officials to determine how they implemented the pilot program, including the extent to which the pilot program enabled DOD to acquire items that otherwise might not have been available to DOD and assisted DOD in the rapid acquisition and fielding of capabilities to meet urgent operational needs.

To identify issues that potentially affected the use of the pilot program, we reviewed the input provided by the military departments and defense agencies to the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics-Defense Procurement and Acquisition Policy (DPAP) to help support the preparation of the fiscal year 2013 annual report to Congress and reviewed the input to the fiscal year 2014 report that had been submitted to DPAP as of December 5, 2014.
We also collected information and interviewed officials from DOD, the military departments, a command and contracting activity within each military department, and other defense organizations. Selected commands, activities, and defense organizations included the Air Force Materiel Command; Air Force Life Cycle Management Center; Army Program Executive Office for Command, Control and Communications-Tactical; Army Program Manager Tactical Radios; Naval Sea Systems Command; Naval Surface Warfare Center-Port Hueneme Division; Army Rapid Equipping Force; the Joint Improvised Explosive Device Defeat Organization; the Joint Rapid Acquisition Cell; Special Operations Command-Special Operations Research, Development, and Acquisition Center; and Central Command-Joint Theater Support Contracting Command. These 11 components were selected based on various factors, including potential use of the pilot program, knowledge of the pilot program, and fulfillment of urgent operational needs. Further, we collected information and met with officials from the Department of the Navy Office of Small Business Programs and the Program Executive Office for Simulation, Training and Instrumentation. We also met with representatives from an industry group to gather their views on the pilot program. Section 866 of the NDAA also required that we assess the extent to which the pilot program protected the interests of the U.S. in paying fair and reasonable prices for the item(s) acquired, but we determined that there was not sufficient information available to make such an assessment. To help determine whether DOD followed sound management practices when developing, implementing, and evaluating the pilot program, we used GAO's prior work on pilot programs as criteria. These practices include developing objectives that link to the goals of the pilot and ensuring that the results of the pilot are communicated to stakeholders.

We conducted this performance audit from September 2014 to January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Janet McKelvey (Assistant Director), James Kim, Dina Shorafa, Marie Ahearn, Virginia Chanley, Julia Kennon, Pete Anderson, and Cary Russell made key contributions to this report.
Section 866 of the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 established a pilot program authorizing DOD to award contracts for MPNDI to nontraditional defense contractors—companies that had not contracted with DOD for at least a year. The pilot program was designed to streamline acquisition procedures and to serve as an incentive for nontraditional defense contractors to innovate in areas useful to DOD. Section 866 mandated that GAO assess whether DOD's use of the pilot program enabled it to acquire items that otherwise might not have been available to DOD, assisted in meeting urgent operational needs, and protected the interests of the U.S. in paying fair and reasonable prices. This report addresses the extent to which DOD awarded contracts that met these goals and issues potentially affecting use of the pilot program. To conduct this work, GAO reviewed applicable laws, the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement, DOD's annual reports to Congress on the pilot program from fiscal years 2011 to 2013, and DOD's implementing guidance. GAO collected information from DOD, the military departments, and selected defense organizations.

Since the Department of Defense (DOD) implemented a pilot program in 2011 to award contracts for military purpose nondevelopmental items (MPNDI), it has not awarded any contracts using the authority. An MPNDI is generally an item that meets a validated military requirement and has been developed exclusively at private expense. GAO's analysis identified a number of issues that may be contributing to the lack of use of the pilot program, including the following:

Limited awareness of the pilot program: In several instances, DOD officials from commands and contracting activities that GAO interviewed were unaware of the pilot program prior to GAO's review. Further, the Air Force noted that the program had not been well publicized within the department.

Challenges in meeting all the criteria required to use the pilot program: DOD program and contracting officials that GAO contacted stated that it was difficult to identify proposed acquisitions that could meet all the criteria for using the pilot program, which include that the items be developed at private expense, the initial lot of items be delivered within nine months after contract award, contractors be nontraditional defense contractors, competitive procedures be used, and contracts be valued at $50 million or less.

The ability to use other flexibilities to obtain needed items: Contracting officials from the military departments with whom GAO spoke identified other existing authorities—such as commercial item acquisition procedures—that they would use to acquire items they identified as potentially covered by the pilot program.

DOD officials told GAO that they were aware of these issues but had no ongoing efforts to address them. GAO's prior work has identified several sound management practices to effectively implement or assess pilot programs, including developing objectives that link to the goals of the pilot and ensuring the results of the pilot are communicated to stakeholders. In the case of the MPNDI pilot program, DOD has not proactively identified opportunities to use the pilot program in areas useful to DOD—a goal of the pilot—such as by identifying specific industries, technologies, or capability gaps where its use may provide an additional incentive for nontraditional defense contractors to do business with DOD.
Additionally, DOD has not determined whether the pilot program provides new flexibilities or the opportunity to use streamlined acquisition procedures that are not already available under other authorities. Lastly, DOD's annual reports to Congress have not identified whether there are specific requirements under the pilot program, such as the need to award contracts competitively, that might hinder its use. Determining whether the pilot program provides meaningful value to the department requires that DOD do more than make the authority available for use by its personnel. Unless DOD takes action to identify opportunities to use the authority and report on issues hindering its use, DOD may miss an opportunity to make an informed decision as to whether the authority provided under the pilot program would provide value to the department. GAO recommends that DOD identify how the pilot program can help attract nontraditional contractors, identify opportunities to test flexibilities or streamlined procedures not otherwise available under existing authorities, and include issues hindering the program's use in its annual reports to Congress. DOD concurred with GAO's recommendations.
Generally, the term counterfeit refers to instances in which the identity or pedigree of a product is knowingly misrepresented by individuals or companies. Counterfeiters often try to take advantage of the established worth of the imitated product, and the counterfeit product may not work as well as the genuine article. The threat of counterfeit parts continues to grow as counterfeiters have developed more sophisticated capabilities to replicate parts and gain access to scrap materials that were thought to have been destroyed. Counterfeiters exist across industries and are able to respond to changes in market conditions, and counterfeit parts can be quickly distributed in online markets. Almost every industry can be affected by counterfeit parts. Counterfeiting can affect safety, operational readiness, and costs, and can jeopardize critical military missions. DOD procures millions of parts through its logistics support providers—DLA supply centers, military service depots, and defense contractors—who are responsible for ensuring the reliability of the DOD parts they procure. As these providers draw from a large network of suppliers in an increasingly global supply chain, they can have limited visibility into these sources and face greater risk of procuring counterfeit parts. Also, as DOD weapon systems age, products required to support them may no longer be available from the original manufacturers or through franchised or authorized suppliers but could be available from independent distributors, brokers, or aftermarket manufacturers. Parts and components bought by DOD can come from different types of suppliers, as shown in table 1.

DOD lacks a departmentwide definition of the term counterfeit. In our discussions with DOD logistics and program officials, several told us they are uncertain how to define counterfeit parts, and many officials also stated that a common definition would be useful. In the absence of a departmentwide definition of counterfeit parts, some DOD entities have developed their own. Although there are similarities among these definitions, the scope varies. For example, one DLA supply center defined a part as counterfeit only when it misrepresented the part's trademark. In contrast, a different DLA supply center defined counterfeit parts more broadly to include misrepresentations of a part's quality and performance. In August 2009, DOD endorsed an aerospace standard created by SAE International that includes a definition of the term counterfeit part. While this standard is available departmentwide, each DOD program decides whether to use it. Some DOD officials who support aviation programs, such as the F-15, told us they were using or considering use of the standard, while other DOD officials told us they were unaware of it. Others were uncertain how it would apply beyond avionics to components like fasteners, uniforms, tires, and brake pads. In some cases, officials stated the definition is too broad for their use.

The two primary databases DOD uses to report deficient parts—the Product Data Reporting and Evaluation Program (PDREP) and the Joint Deficiency Reporting System (JDRS)—have data fields that enable users primarily to track information on deficient parts, but neither is designed specifically to track counterfeit parts. DOD considers products that do not conform to quality or design specifications to be deficient.
Both of these systems allow users to enter a cause code for why a part is deficient, but neither database has a code to capture the deficiency as counterfeit. As a result, users are limited to reporting a suspected counterfeit part in narrative descriptions. However, identifying instances of counterfeit parts through searches of narrative descriptions is difficult because of a lack of common terminology. For example, an Air Force official told us that when he searched the JDRS system, he found 3 out of more than 94,000 entries that discussed counterfeit parts. We performed similar searches and found that the terms associated with counterfeit are rarely included in narrative fields. In consultation with database managers from both PDREP and JDRS, we developed a list of 11 terms associated with counterfeit parts and searched the systems' narrative fields for these terms over a 5-year period ranging from October 1, 2004, to September 30, 2009. We found that less than 1 percent of the reports in the databases included one of our search terms, and a manual review of these cases determined that only a few were relevant to counterfeit parts.
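As a rough illustration of this kind of narrative-field search, the following minimal sketch, in Python, filters deficiency reports by date and scans free-text narratives for a term list. The record layout and the terms shown are illustrative assumptions; they reflect neither the actual PDREP or JDRS schema nor the full 11-term list.

```python
from datetime import date

# Illustrative subset of search terms; the actual list contained 11 terms.
TERMS = ["counterfeit", "fake", "bogus"]

def search_narratives(reports, start=date(2004, 10, 1), end=date(2009, 9, 30)):
    """Return reports filed in the review period whose narrative
    mentions any of the search terms."""
    return [
        r for r in reports
        if start <= r["date"] <= end
        and any(term in r["narrative"].lower() for term in TERMS)
    ]

# Hypothetical example record
reports = [{"date": date(2007, 3, 14),
            "narrative": "Markings inconsistent; suspect counterfeit microcircuit."}]
print(len(search_narratives(reports)))  # 1
```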
DOD entities also have access to the Government Industry Data Exchange Program (GIDEP)—a Web-based database—that allows government and industry participants to share information on deficient parts, including counterfeits. Specifically, a GIDEP user can submit information on a suspected counterfeit part, and GIDEP policy allows up to 15 days for the supplier to respond before this information is posted to the database. A 1991 Office of Management and Budget policy letter instructs government agencies to use GIDEP to report deficient parts. However, the GIDEP Deputy Program Manager told us that GIDEP is not widely used to report suspect counterfeits. He stated that the policy letter was intended as a short-term requirement for government use of GIDEP until a Federal Acquisition Regulation change was made, which never occurred. He further stated that DOD had previously issued a military standard requiring use of GIDEP, which was canceled during acquisition reform in 1996. DOD logistical support providers and contractors that we spoke with cited concerns with using the GIDEP system, such as delayed reporting, liability issues, and the effect on criminal investigations. Delayed reporting: A 15-day delay in posting reports to the system allows suppliers to investigate and respond to reports concerning their products; however, during this time, a counterfeit part could continue to be used or purchased. Liability issues: Some officials expressed concerns about the legal implications of reporting a part as suspect counterfeit before it had been proven; fear of lawsuits was repeatedly cited as a reason cases are not reported to GIDEP. Effect on investigations: Another concern officials raised about reporting cases to GIDEP is the possibility of alerting suppliers to active investigations, as investigators may want to monitor a supplier's activities to gather further evidence of possible illegal activity.

In the absence of data collected on counterfeit parts, we visited the military services, MDA, DLA, selected defense contractors, and suppliers; many of these officials provided specific examples of counterfeit or suspect counterfeit parts. While definitions of counterfeit vary within DOD, they generally refer to instances in which individuals or companies knowingly misrepresent the identity or pedigree of a part. Specific examples of the types of counterfeits encountered by DOD include parts falsely claimed by the supplier to be from a particular manufacturer; parts that deliberately do not contain the proper internal components or construction consistent with the ordered part; authentic parts whose age or treatment has been knowingly misrepresented; and parts with fake packaging.

We met with DOD program officials and logistical support providers across 16 DOD programs and three DLA supply centers and discussed instances of suspect and confirmed counterfeit parts; examples are shown in appendix II. About two-thirds of these instances involved fasteners or electronic parts, while the remainder included materials ranging from titanium used in aircraft engine mounts to Kevlar used in body armor plates. The following examples of counterfeit parts and actions taken were provided by officials across DOD.

Seatbelt clasps: Seatbelt parts were made from a grade of aluminum that was inferior to that specified in DOD's requirements. The parts were found to be deficient when the seatbelts were accidentally dropped and they broke.

Routers: The Navy, as well as other DOD and government agencies, purchased counterfeit network components—including routers—that had high failure rates and the potential to shut down entire networks. A 2-year FBI criminal investigation led to 10 convictions and $1.7 million in restitution.

Microprocessors: The Air Force needed microprocessors that were no longer produced by the original manufacturer for its F-15 flight-control computer. These microprocessors were procured from a broker, and F-15 technicians noticed additional markings on the microprocessors and character spacing inconsistent with the original part. A total of four counterfeit microprocessors were found and, as a result, were not installed on the F-15's operational flight-control computers.

Global Positioning System: Oscillators used for navigation on over 4,000 Air Force and Navy systems experienced a high failure rate and failed a retest. These oscillators were provided by a supplier that Global Positioning System engineers had previously disapproved as a supply source. Air Force officials stated that while the failure would not cause a safety-of-flight issue, it could prevent some unmanned systems from returning from their missions.

Operational amplifiers: A counterfeit operational amplifier, which can be used on multiple MDA systems, was identified on MDA hardware during testing. The failed part was found on a circuit board supplied by a subcontractor. It was later determined that the subcontractor purchased these parts from a parts broker who was not authorized to distribute parts by the original component manufacturer. To date, all parts have been accounted for and secured from further use on any other products.

Microcircuits: A counterfeit microcircuit, which can be used on multiple MDA systems, was identified on MDA hardware. MDA's visual inspection showed that the part was resurfaced and remarked, which prompted authenticity testing. Tests revealed surface scratches, inconsistencies in the part marking, and evidence of tampering. These parts were purchased from a parts broker who was not authorized to distribute parts by the original component manufacturer.

Packaging and small parts: During a 2-year period, a supplier and three coconspirators packaged hundreds of commercial items from hardware and consumer electronics stores and labeled them as military-grade items.
For example, the supplier placed a rubber washer from a local hardware store in a package labeled as a brass washer for use on a submarine. The supplier also labeled the package containing a circuit from a personal computer as a $7,000 circuit for a missile guidance system. The conspirators avoided detection by labeling packages to appear authentic, even though they contained the wrong parts. The supplier received $3 million from contracts totaling $8 million before fleeing the country. He has been extradited to the United States and awaits trial; his coconspirators have been convicted.

The Department of Commerce also identified the existence of counterfeit parts in DOD's supply chain in a study released in January 2010. This study, sponsored by the Naval Air Systems Command, was designed to provide statistics on the extent of infiltration of counterfeit electronic components into U.S. industrial and supply chains, to understand how different segments of the supply chain currently address the issue, and to gather best practices from the supply chain on how to handle counterfeits. The department received completed surveys from 387 respondents representing five segments of the U.S. supply chain—original component manufacturers (OCMs), distributors and brokers, circuit-board assemblers, prime contractors and subcontractors, and DOD entities. The surveys included questions addressing past experiences with counterfeit parts and practices used in identifying them. While the study did not provide a number for the total counterfeit incidents at DOD, it noted that 14 DOD organizations had reported incidents of counterfeit parts. The study's survey respondents identified a growth in incidents of counterfeit parts across the electronics industry, from about 3,300 incidents in 2005 to over 8,000 in 2008. Survey respondents attributed this growth to a number of factors, such as a growth in the number of counterfeit parts, better detection methods, and improved tracking of counterfeit incidents.

In April 2009, DOD formed a departmentwide team—partially in response to media reports that highlighted the existence of counterfeit parts in the DOD supply chain—to collect information and recommend actions to mitigate the risk of counterfeit parts in its supply chain. Standing participants include representatives from DOD's Office of the Under Secretary of Defense for Acquisition, Technology & Logistics, DLA, the Defense Contract Management Agency, the Defense Standardization Program Office, MDA, and military law enforcement and investigative agencies. The team also incorporates liaisons from groups such as the defense industry, Defense Intelligence Agency, Federal Aviation Administration, National Aeronautics and Space Administration, Department of Energy, Department of Commerce, and state and federal law enforcement organizations. To gather preliminary information on the counterfeit problem in DOD, the team has visited three DOD facilities to observe operations and discuss occurrences of and problems with counterfeits in the supply chain. The team plans to complete a review of current DOD processes and procedures for the handling and storage, detection, disposal, and reporting of counterfeit parts by July 2010. The team then plans to assess the policies, procedures, and metrics needed to address the issue of counterfeit parts.
Additionally, the team is developing training materials, which it plans to make available through the Defense Acquisition University, to increase general awareness of counterfeit parts, and it plans to develop additional training on detection techniques.

DOD relies on existing procurement and quality control practices to ensure the quality of the parts in its supply chain. However, these practices are not designed to specifically address counterfeit parts. Limitations in the areas of obtaining supplier visibility, detecting part deficiencies, and reporting and disposal may reduce DOD's ability to mitigate the risks posed by counterfeit parts.

Obtaining supplier visibility: DOD and its prime contractors rely on suppliers across a global supply chain for parts and materials. Federal acquisition regulations require that agency contracting officers consider whether a supplier is responsible before awarding a contract and note that the award of a contract to a supplier based on the lowest price alone can result in additional costs if there is subsequent default, late deliveries, or other unsatisfactory performance. While cost or price is always a consideration when purchasing goods, an abnormally low price, especially from an unfamiliar source, can be an indication that there is a need to assess the supplier's ability to meet the requirements of the contract. For example, a DLA contracting official described an instance in which a supplier new to DLA was awarded a contract based on a low price and a performance score of 100 percent. However, the score was misleading, as the supplier had no past performance to measure; ultimately, the supplier was unable to meet the requirements of the contract. Further, DOD parts can be purchased through automated systems that provide limited visibility into suppliers and can increase the risk of purchasing counterfeit parts. To address the risks of using automated source selection, DLA has a pilot project to create a list of qualified distributors for the supply of two electronic items—semiconductors and microcircuits. Of the 53 distributors that applied, 13 were selected based on their qualifications. DLA plans to review other parts to determine if the pilot can be expanded. In addition, DOD has a number of weapon systems that have remained in service longer than expected—such as the B-52 bomber—and require parts that are no longer available from the original manufacturer or its authorized distributors. When parts are needed for these systems, they are often provided by brokers or independent distributors. As buying from these sources reduces DOD's visibility into a part's pedigree, additional steps are required to assure that the part is reliable and authentic.

Detecting part deficiencies: DOD can have a part's quality and authenticity tested through destructive and nondestructive methods prior to awarding a contract. However, several DOD officials told us that staff responsible for assembling and repairing systems and equipment may not have the expertise to identify suspect counterfeit parts beyond those that demonstrate performance failures, because they are not trained to identify counterfeit parts and have limited awareness of the issue. In addition, DOD contracting officials told us that the cost and time associated with testing may be prohibitive, especially for lower-cost parts such as a 50-cent fastener.
DOD officials at several testing centers cited other limitations, such as barriers to testing parts that are expensive or available only in limited quantities. For instance, the F-15 program needed two spare parts, but only two were available in the supply chain, so the preferred destructive testing could not be performed. Reporting and disposal: Generally, DOD has processes in place for reporting and disposal of deficient parts. Reporting a deficient part that is suspected to be counterfeit enables further investigation to confirm that the part is counterfeit. As described above, DOD uses JDRS and PDREP to report deficient parts but does not have a specific field in these databases to report counterfeit parts. Some DOD officials stated that they report suspect counterfeits to internal fraud teams, while others indicated that they would contact local law enforcement or the Federal Bureau of Investigation in similar cases. DOD officials told us that when they have found counterfeit parts, they have shared this information through informal methods such as e-mails or phone calls. Others, such as MDA, use formal methods to convey this information, such as bulletins that alert MDA staff to counterfeiting techniques and how to detect them, as well as advisories on confirmed counterfeit parts found in MDA programs. MDA officials stated that these methods are an effective way to immediately alert their staff to counterfeit parts. Further, depending on the condition of a noncounterfeit, deficient part and its related demilitarization code, it can be refurbished, resold, or destroyed. The disposal of counterfeit and scrapped parts is an area of vulnerability, as such parts could reenter the supply chain. According to officials from the Defense Reutilization and Marketing Service—the agency responsible for destroying and disposing of DOD's excess and surplus parts—it is critical that a part be identified as counterfeit, through its related demilitarization code, when it is sent for disposal to prevent it from reentering DOD's supply chain. However, DOD does not have a consistent method to identify parts as counterfeit when they are sent for disposal. Some parts designated for disposal have made their way back into the supply chain. For example, DOD program officials described a helicopter part that had the same serial number as a defective one that had been destroyed. An X-ray test revealed that the destroyed part had been welded back together and put back in DOD's inventory. In the absence of a departmentwide policy, some DOD components and their contractors have supplemented existing procurement and quality-control practices to help mitigate the risk of counterfeit parts in the DOD supply chain. For example, MDA has established a 12-person organization that leverages subject-matter expertise at two DOD laboratories to identify, evaluate, and track the effects of counterfeit parts on all MDA hardware. MDA policies to address counterfeits are part of its Parts, Materials, and Processes Mission Assurance Plan, which includes instructions on part selection, procurement, receipt, testing, and use of parts.
This plan specifically identifies three steps to offset the presence of counterfeit parts and materials in the market: (1) preventing counterfeit parts and materials by using only authorized distributors, with associated certifying paperwork; (2) detecting and containing counterfeit parts and materials through appropriate inspection and test methods; and (3) notifying the user community of potential counterfeit concerns and assisting in prosecution. The plan also instructs programs to impound suspect counterfeit parts and all items from the same lot and to not return suspected counterfeit parts to suppliers, preventing them from being sold to others. According to MDA officials, all new contracts include adherence to the plan's section on counterfeit parts and materials, and MDA has developed policies that can be applied to existing contracts. MDA has also applied DOD's item-unique identification technology, which provides for the marking of individual items—those whose unit acquisition cost is $5,000 or more—with a set of globally unique data elements. This technology is designed to help DOD value and track items throughout their life cycle by requiring equipment manufacturers to assign unique identification numbers to parts acquired under DOD contracts, thus enabling better traceability of a part to a specific manufacturer. MDA also has an ongoing effort to develop tools to identify, quantify, and manage the risk of counterfeit parts in the supply chain as counterfeits or suspect counterfeits are detected. DLA's Supply Center in Columbus, Ohio, has an established team that investigates suspect counterfeit parts under the broader scope of fraud. The team is composed of members from DLA's product verification, contracting, and legal offices, as well as the Defense Criminal Investigative Service, and handles cases ranging from part deficiencies to contractor misconduct. When the team encounters a counterfeit part, its analysis of engineering investigations, product testing, and criminal investigations can be used as evidence in criminal and civil cases. DOD's prime contractors are also independently taking steps to protect the supply chain from counterfeits. As DOD relies on its suppliers to provide weapons, equipment, and raw materials to meet U.S. national security objectives, these activities directly affect DOD's own efforts. Several prime contractors told us that they are using a recently adopted industry standard to develop counterfeit protection plans. The standard provides strategies to mitigate the risks of procuring counterfeit products and standardizes practices to maximize availability of authentic parts and procure parts from reliable sources. Additionally, it standardizes practices to assure the authenticity of parts, control parts that are identified as counterfeit, and report counterfeit parts to other potential users and government investigative authorities. Prime contractors using this standard are also focusing on ensuring traceability within their supply chains through flow-down requirements to subcontractors. For example, one contractor includes a clause in its contracts stating that suppliers shall not deliver counterfeit parts; if a counterfeit is delivered, the supplier must immediately notify the defense contractor and assume responsibility for the cost of replacing it. Several of the companies also provide training on detecting counterfeits within their product lines.
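The traceability benefit of item-unique identification comes from deriving one stable identifier for each physical item from its manufacturer and part data. The Python sketch below illustrates that idea only; the field values are hypothetical, and the actual DOD constructs, defined in marking standards such as MIL-STD-130, impose encoding rules that this simplified concatenation omits.

    # A minimal sketch of a unique item identifier, assuming a simplified
    # concatenated format; real DOD constructs carry additional encoding rules.
    def build_uii(enterprise_id: str, part_number: str, serial_number: str) -> str:
        """Derive a stable, globally unique item identifier from traceability fields."""
        fields = [f.strip().upper() for f in (enterprise_id, part_number, serial_number)]
        if not all(fields):
            raise ValueError("every identifier field must be nonempty")
        # Concatenation ties the identifier to one enterprise, one part
        # design, and one physical item for life-cycle tracking.
        return "".join(fields)

    # The same inputs always produce the same identifier, which is what
    # allows a marked part to be traced back to a specific manufacturer.
    print(build_uii("0CVA5", "PN-1234", "SN000042"))  # 0CVA5PN-1234SN000042

Because the identifier is derived deterministically from the part's own data, a mismatch between a part's marking and its accompanying paperwork becomes detectable, which is the property that supports counterfeit screening.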
As supply chains across industries are also vulnerable to the risk of counterfeit parts, we met with selected companies representing the commercial aerospace, electronics, and automotive sectors that have taken measures to address the counterfeiting challenges they face. Companies we met with cited procedures and practices that they have incorporated to help mitigate the risk of counterfeit parts in the areas of supplier visibility, detection, and reporting and disposal. Supplier Visibility: To ensure that parts and materials are reliable, commercial companies we met with described several practices to identify potential sources of counterfeiting activity. These practices include regular assessments of a supplier's internal controls, ranging from access to product designs to manufacturing facility security. Some companies also institute extra measures when purchasing from independent distributors, such as internal and external validation and testing requirements and part-authenticity documentation, such as certificates of conformance. Detection of Counterfeits: Companies we spoke with are using a number of practices to make their products and packaging more difficult to replicate and to increase the opportunities to identify counterfeits in their supply chains. Some companies incorporate rare, proprietary, or expensive materials on parts and packaging, which can deter counterfeiters. Some companies also include markings on products and packaging that, when absent or altered, could alert investigators or consumers to potential counterfeits. One company allows customers to report suspected counterfeits on its Web site and posts pictures of markings and security features for customers and investigators to use in distinguishing genuine from counterfeit products. Companies have also coordinated with the Department of Homeland Security's Customs and Border Protection inspectors to identify counterfeits. One company visited inspectors at two ports that receive a high volume of its imports to inform the inspectors of product packaging characteristics and how to easily identify counterfeit packaging. This effort resulted in an increased number of seizures of suspected counterfeit products at these two ports. Reporting and Disposal of Counterfeits: Several company officials identified the lack of oversight of the scrapping, recycling, and disposal of parts as an avoidable source of counterfeiting. Specific practices that companies use to confirm that scrapped, excess, and suspected counterfeit materials are not used to make more counterfeit parts include requiring suspect counterfeits to be quarantined upon detection, auditing suppliers to ensure proper tracking of the amount of scrapped material destroyed, requiring suppliers to use contract clauses that prevent the resale of scrap parts to third parties, and witnessing the destruction of seized or returned counterfeit parts. Several industry associations identify and share counterfeit-mitigation practices. Their activities include training, knowledge exchange, and standards development. These associations can provide a forum for a diverse set of participants to arrive at agreement on collaborative mitigation steps for the counterfeit issue. The recently issued Department of Commerce report on the existence of counterfeit electronics across the industry has also recommended mitigation strategies for counterfeit parts.
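Several of the reporting and disposal practices above reduce to two bookkeeping rules: quarantine suspect material the moment it is detected, and keep a record of every incident until it has been reported externally. The Python sketch below is a minimal illustration under those assumptions; the record fields and the lot-level quarantine rule are assumptions for illustration, not any company's actual system.

    # Illustrative incident log: quarantine suspect lots on detection and
    # track which incidents still need to be reported externally.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class SuspectPartReport:
        part_number: str
        lot: str
        status: str = "suspected"          # may later become "confirmed"
        detected_on: date = field(default_factory=date.today)
        reported_externally: bool = False  # e.g., to an industry database

    class IncidentLog:
        def __init__(self) -> None:
            self.reports = []              # every incident, suspected or confirmed
            self.quarantined_lots = set()  # (part_number, lot) pairs held back

        def record(self, report: SuspectPartReport) -> None:
            # Quarantining the whole lot mirrors the practice of pulling
            # suspect material out of regular inventory as soon as it is found.
            self.reports.append(report)
            self.quarantined_lots.add((report.part_number, report.lot))

        def pending_external_reports(self) -> list:
            return [r for r in self.reports if not r.reported_externally]

    log = IncidentLog()
    log.record(SuspectPartReport("74LS00", lot="L0907"))
    print(len(log.pending_external_reports()))  # 1 incident still to be shared

Keeping quarantine and external reporting in one record also supports the audit practices described above, since the log shows whether destroyed or returned material was ever reported.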
In April 2009, SAE International issued Aerospace Standard 5553, "Counterfeit Electronic Parts; Avoidance, Detection, Mitigation and Disposition." The standard was created to provide uniform requirements, practices, and methods to mitigate the risks of receiving and installing counterfeit electronic parts. It also provides guidance for establishing a counterfeit-control plan that covers parts availability, purchasing process, product verification, investigation, reporting, and disposal. SAE International is providing training on applying this standard, including a segment on detection and visual inspection of actual counterfeit parts. For example, in its visual inspection segment, the SAE training notes that characteristics of a part that may indicate it is counterfeit include inconsistencies in the part's texture, colors, material, or condition; quality of ink or laser markings; condition of part labels; and markings that include information such as production dates and manufacturing locations. As shown in figure 1, visual inspection of a part's texture can uncover counterfeits that have been resurfaced. In 2009, a number of conferences were held to facilitate a collaborative dialogue between industry representatives, law enforcement, and government agencies. Specifically, in September, DOD's Defense Standardization Program Office sponsored its annual Diminishing Manufacturing Sources and Material Shortages and Standardization Conference, where participants discussed the counterfeit part issue and how to increase awareness across industries. Additionally, in December, the Center for Advanced Life Cycle Engineering hosted its third annual symposium on avoiding, detecting, and preventing counterfeit electronic parts. Sessions at the symposium were aimed at generating awareness of the counterfeit parts issue and sharing the perspectives of law enforcement, supply chain managers, and government. The symposium also provided information on technical tools and methods to detect and prevent counterfeit parts. In late 2008, the Aerospace Industries Association established an integrated project team across aerospace, space, and defense products to address challenges in the supply chain for mitigating the risk of counterfeit parts. The team worked with government agencies, original manufacturers, industry associations, and independent distributors on three main objectives: (1) to discuss U.S. government acquisition and procurement policies to avoid introducing counterfeit parts and materials into products; (2) to create a set of recommendations for government and industry to ensure that the risk of introducing counterfeit parts and materials is minimized, is consistent with risks accepted by the customer, and is implementable without sacrificing the benefits of buying commercially available products; and (3) to engage the U.S. government in discussions concerning enforcement of policies to avoid the introduction of counterfeit products into the United States. The project team has provided its recommendations to its association members and expects final recommendations to be available in the fall of 2010. The Semiconductor Industry Association established an Anti-Counterfeiting Task Force in June 2006, which aims to stop counterfeit semiconductors from entering the marketplace. According to the task force Chairman, its work with U.S. Customs and Border Protection led to the seizure of 1.6 million counterfeit semiconductors over the past 2 years.
Other industry associations are also focusing their efforts on mitigating the risk of counterfeit parts. Business Action to Stop Counterfeiting and Piracy has developed a clearinghouse for information about counterfeiting and piracy to facilitate information exchange. The Electronic Industry Citizenship Coalition developed a risk-assessment tool for technology-industry companies to help determine the appropriate level of intensity of supplier audits; the tool also asks suppliers how they manage their subtier suppliers. The International Anti-Counterfeiting Coalition has helped the auto industry bring 10 global manufacturers together to discuss common global counterfeiting problems and also provides opportunities for its members to participate in training programs. The recent Department of Commerce report provided practices for managing electronic counterfeits industrywide, as well as recommendations for the U.S. government to mitigate the risk of electronic counterfeit parts. The practices for managing counterfeits included (1) providing clear, written guidance to employees on what steps to take if they suspect a part is counterfeit, (2) removing and quarantining suspected and confirmed counterfeit parts from regular inventory, (3) maintaining an internal database to track all suspected and confirmed counterfeit components, and (4) reporting suspected and confirmed counterfeit parts to industry associations and databases and to law enforcement. The department's report also stated that there is little information collected on malfunctioning and nonoperational electronic parts, which gives a false impression of supply-chain security. According to the report's findings, personnel who use parts need to file Product Quality Deficiency Reports in a timely manner to report nonworking electronic components; if this proves to be impractical for the field units, then another system of reporting needs to be developed to facilitate information sharing. Based on its survey responses, interviews, and field visits, the Department of Commerce made seven recommendations in the areas of reporting, contract award, legal guidance, enforcement activities, data collection, information sharing, and DOD acquisition planning. As DOD draws from a large network of suppliers in an increasingly global supply chain, there can be limited visibility into these sources and greater risk of procuring counterfeit parts, which have the potential to threaten the reliability of DOD's weapon systems and the success of its missions. DOD needs a departmentwide definition of counterfeit parts and consistently used means for detecting, reporting, and disposing of them. Collaboration with government agencies, industry associations, and commercial-sector companies that produce items similar to those used by DOD and have reported taking actions to mitigate the risks of counterfeit parts in their supply chains offers DOD the opportunity to leverage ongoing and planned initiatives in this area. Some of these initiatives, such as MDA practices and industry detection and disposal processes, can be considered for DOD's immediate use. However, as DOD collects data and acquires knowledge about the nature and extent of counterfeit parts in its supply chain, additional actions may be needed to help better focus its risk-mitigation strategies. We recommend that the Secretary of Defense take the following three actions as DOD develops its anticounterfeit program: 1.
leverage existing anticounterfeiting initiatives and practices currently used by DOD components and industry to establish guidance that includes a consistent and clear definition of counterfeit parts and consistent practices for preventing, detecting, reporting, and disposing of counterfeit parts; 2. disseminate this guidance to all DOD components and defense contractors; and 3. analyze the knowledge and data collected to best target and refine counterfeit-part risk-mitigation strategies. In written comments on a draft of this report, DOD concurred with the recommendations and identified a number of actions that it will take to address them. DOD noted that it has established teams that will leverage anticounterfeit initiatives and practices used by DOD components and industry to develop guidance by late 2010. DOD plans to include a consistent and clear definition of counterfeit parts and consistent practices for preventing, detecting, reporting, and disposing of counterfeit parts in its guidance, and plans to disseminate it to all of its components and defense contractors by early 2011. As it collects more knowledge and data on counterfeit parts, DOD plans to analyze this information to best target and refine risk-mitigation strategies—which it expects to do by October 2010. According to the official leading DOD's counterfeit parts efforts, DOD will continue to refine risk-mitigation strategies on an ongoing basis as it gains more knowledge of counterfeit parts. DOD also provided technical comments, which were incorporated as appropriate. DOD's comments are reprinted in appendix III. The Department of Commerce concurred with the findings in this report. The Department of Commerce's comments are reprinted in appendix IV. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Secretary of Commerce; the Administrator of the Office of Federal Procurement Policy; as well as other interested parties. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4906. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To examine the extent of the Department of Defense's (DOD) knowledge of counterfeit parts that have entered its supply chain, we reviewed regulations, guidelines, and databases to determine whether they addressed how DOD should define and collect data on counterfeit parts.
We met with officials from the DOD Acquisition and Technology, Logistics and Material Readiness, Supply Chain Integration office; the DOD Defense Logistics Agency and its Supply Centers located in Columbus, Ohio; Philadelphia, Pennsylvania; and Richmond, Virginia; the Army, Navy, Air Force, and Missile Defense Agency; and five defense prime contractors—BAE, Boeing, Lockheed Martin, Northrop Grumman, and Raytheon—to discuss (1) their definition of the term counterfeit, (2) their procedures and practices for obtaining knowledge of counterfeit parts, (3) databases available for documenting instances of counterfeit or suspect counterfeit parts, (4) their knowledge of the existence of counterfeit parts, and (5) instances of counterfeit parts within the DOD supply chain. We also met with database managers from the Joint Deficiency Reporting System (JDRS), the Product Data Reporting and Evaluation Program (PDREP), and the Government Industry Data Exchange Program (GIDEP) to discuss whether these databases are able to and have been used to document instances of counterfeit or suspected counterfeit parts. Additionally, we met with officials from the Department of Commerce, Bureau of Industry and Security's Office of Technology Evaluation, to discuss their study of counterfeit electronics, which the office performed for the Navy through its authority to conduct surveys and analyses and prepare reports on specific sectors of the U.S. defense supplier base. To further examine the processes that DOD has in place to detect and prevent counterfeit parts from entering its supply chain, we conducted a case study of DOD weapon programs and interviewed program officials as well as several logistics support providers. We selected a nongeneralizable sample of 16 DOD weapon programs based on criteria that included representation of the aerospace, ground vehicle, or missile defense sectors; representation of the production and deployment or operations and support phase of the acquisition life cycle; and cross-representation of DOD components—Army, Navy, Air Force, and the Missile Defense Agency. GAO also has ongoing work through its annual "Assessments of Selected Weapon Programs" for many of these programs, which allowed the team to build upon our prior work efforts and existing DOD contacts. Programs selected were: F-15 Eagle, F-16 Fighting Falcon, F/A-18E/F Super Hornet, F/A-22 Raptor, C-5 Galaxy, C-130 Hercules, AH-64D Apache, UH-60 Black Hawk, E-2 Hawkeye, AV-8B Harrier, SH-60 Sea Hawk, V-22 Osprey, Aegis Cruiser, Ground-Based Midcourse Defense, High Mobility Multi-purpose Wheeled Vehicles (HMMWV), and M1 Abrams. We identified initiatives and practices used by industry associations and commercial companies in selected commercial supply chains (electronics, automotive, aviation) to mitigate the risk of procuring counterfeit parts. We selected commercial supply chains and companies in those supply chains based on one or more of several criteria: industries in which instances of counterfeiting have taken place; companies that make products similar to DOD weapons systems in terms of complexity; and companies that make or buy products similar to those bought by DOD. We met with company officials from functions including Quality, Legal, Security, Brand Protection, and Sourcing and Supplier Management to discuss their experiences with counterfeits (both incoming parts and counterfeit versions of their products) and processes in place to protect against counterfeits.
Much of the information we obtained from these companies is anecdotal, due to the proprietary nature of data that could affect the companies' competitive standing or level of protection against counterfeits. We visited or spoke with company officials at companies and locations including Advanced Micro Devices, Sunnyvale, California; Boeing Commercial Airplanes, Everett, Washington; Cisco Systems, Inc., San Jose, California; Federal-Mogul Corporation, Southfield, Michigan; Ford Motor Company, Dearborn, Michigan; Hewlett-Packard Company, Houston, Texas; Intel Corporation, Santa Clara, California; Meggitt Aircraft Braking Systems, Akron, Ohio; Microsoft Corporation, Redmond, Washington; and Rolls-Royce Corporation, Indianapolis, Indiana. We also met with or obtained documents from several industry associations, including the Aerospace Industries Association, Semiconductor Industry Association, Business Action to Stop Counterfeiting and Piracy, Electronic Industry Citizenship Coalition, and International Anti-Counterfeiting Coalition. We attended two counterfeit-mitigation conferences—one sponsored by DOD's Defense Standardization Program Office and the other sponsored by the Center for Advanced Life Cycle Engineering—and attended an SAE International training workshop on Aerospace Standard AS5553. We conducted this performance audit from January 2009 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As shown in table 2, Department of Defense (DOD) officials we met with provided examples of counterfeit parts. As definitions of "counterfeit" vary within DOD, the examples are based on each individual's understanding of the term; however, the examples generally refer to instances in which individuals or companies knowingly misrepresent the identity or pedigree of a part. While many of the examples are confirmed cases of counterfeiting, some had not yet been confirmed because the case was still under investigation or the DOD official did not know the outcome. In addition to the individual named above, key contributors to this report were Anne-Marie Fennell, Director; John Neumann, Assistant Director; Lisa Gardner; Kevin Heinz; Robert Bullock; MacKenzie Cooper; Jonathan Mulcare; Josie Sigl; Sylvia Schatz; and Jean McSween.
Counterfeit parts--generally those whose sources knowingly misrepresent the parts' identity or pedigree--have the potential to seriously disrupt the Department of Defense (DOD) supply chain, delay missions, and affect the integrity of weapon systems. Almost anything is at risk of being counterfeited, from fasteners used on aircraft to electronics used on missile guidance systems. Further, there can be many sources of counterfeit parts as DOD draws from a large network of global suppliers. Based on a congressional request, GAO examined (1) DOD's knowledge of counterfeit parts in its supply chain, (2) DOD processes to detect and prevent counterfeit parts, and (3) commercial initiatives to mitigate the risk of counterfeit parts. GAO's findings are based on an examination of DOD regulations, guidance, and databases used to track deficient parts, as well as a Department of Commerce study on counterfeit parts; interviews with Commerce, DOD, and commercial-sector officials at selected locations; and a review of planned and existing efforts for counterfeit-part mitigation. DOD is limited in its ability to determine the extent to which counterfeit parts exist in its supply chain because it does not have a departmentwide definition of the term "counterfeit" or a consistent means to identify instances of suspected counterfeit parts. While some DOD entities have developed their own definitions, these can vary in scope. Further, two DOD databases that track deficient parts--those that do not conform to standards--are not designed to track counterfeit parts. A third governmentwide database can track suspected counterfeit parts, but according to officials, reporting is low due to the perceived legal implications of reporting prior to a full investigation. Nonetheless, officials we met with across DOD cited instances of counterfeit parts. A recent Department of Commerce study also identified the existence of counterfeit electronic parts within DOD and industry supply chains. DOD is in the early stages of developing a program to help mitigate the risks of counterfeit parts. DOD does not currently have a policy or specific processes for detecting and preventing counterfeit parts. Existing procurement and quality-control practices used to identify deficient parts are limited in their ability to prevent and detect counterfeit parts in DOD's supply chain. For example, several DOD weapon system program and logistics officials told us that staff responsible for assembling and repairing equipment are not trained to identify counterfeit parts. Some DOD components and prime defense contractors have taken initial steps to mitigate the risk of counterfeit parts, such as creating risk-assessment tools and implementing a new electronic parts standard. Also facing risks from counterfeit parts, individual commercial-sector companies have developed a number of anticounterfeiting measures in the areas of supplier visibility, detection, reporting, and disposal. Recent collaborative industry initiatives have focused on identifying and sharing methods to reduce the likelihood of counterfeit parts entering the supply chain. Because many of the commercial-sector companies produce items similar to those used by DOD, agency officials have an opportunity to leverage knowledge and ongoing and planned initiatives to help mitigate the risk of counterfeit parts as DOD develops its anticounterfeiting strategy.
Medicare is a health insurance program available to almost all people 65 years of age and older and to certain disabled people. The program provides protection under two parts. Part A, the hospital insurance program, covers inpatient hospital, skilled nursing facility, home health, and hospice services. Part B, the supplementary medical insurance program, primarily covers physician services but also covers home health care for beneficiaries not covered under part A. Although most of the 38 million Medicare beneficiaries receive their health care from fee-for-service providers, the nearly 5 million beneficiaries enrolled in HMOs participating in Medicare's risk-contract program receive home health care through their HMOs. To qualify for home health care, a Medicare beneficiary must be homebound, that is, confined to his or her residence; require intermittent skilled nursing, physical therapy, or speech therapy; and be under the care of a physician. In addition, the services must be furnished under a plan of care that is prescribed and reviewed at least every 62 days by a physician. If these conditions are met, Medicare will pay for skilled nursing; physical, occupational, and speech therapies; medical social services; home health aide visits; and durable medical equipment and medical supplies. As long as the care is reasonable and necessary and meets the above criteria, there are no limits on the number of home health visits or the length of coverage. The home health benefit is one of the fastest growing components of Medicare fee-for-service spending. From 1989 to 1996, part A fee-for-service expenditures for home health increased more than 600 percent—from $2.4 billion to $17.7 billion. The number of beneficiaries receiving home health care more than doubled, from 1.7 million in 1989 to about 3.9 million in 1996. While the Congress liberalized the Medicare home health benefit in 1980, the dramatic growth in these services is primarily the result of changes to HCFA's home health guidelines made in 1989. HCFA made these changes under a federal court order, in response to a decision that invalidated HCFA's interpretation of the coverage requirements. The 1980 statutory amendments removed the requirements that home health visits under part A be preceded by a hospital stay of at least 3 days and be for a condition related to the hospitalization. The amendments also abolished the 100-home-health-visit limitation under parts A and B. The new guidelines issued in 1989 allowed home health agencies to increase the frequency of visits by clarifying the definition of "part-time" or "intermittent" care, making it easier to qualify for skilled care, and increasing the standard of review before payment for services could be denied. These changes made the home health benefit available to more beneficiaries, for less acute conditions, and for longer periods of time. Under Medicare fee-for-service, providers are paid for each home health visit and, except for durable medical equipment, beneficiaries do not share in the cost. Therefore, neither providers nor beneficiaries have financial incentives to control the number of services used. While home health expenditures were growing rapidly, funding for program safeguards, such as claims review, decreased sharply. The recent enactment of the Health Insurance Portability and Accountability Act of 1996 (P.L. 104-191) has increased future funding for program safeguards.
After adjusting for inflation, however, per-claim expenditures for program safeguards will remain below the 1989 level. HCFA has recently taken several steps to address the growing problem of home health fraud, such as a temporary moratorium on the entry of new home health agencies into Medicare while the agency reviews its requirements for home health agencies to enter and remain in the program. Medicare risk-contract HMOs are paid a fixed amount per month per beneficiary under a payment method known as capitation. This method places HMOs at risk for health costs that exceed this capitated amount, giving them a financial incentive to provide fewer services, emphasize preventive care, and avoid unnecessary care. As of August 1, 1997, almost 4.9 million Medicare beneficiaries, or more than 12 percent, were enrolled in risk-contract HMOs. Medicare HMOs are required to provide the complete health benefit package available under the fee-for-service program, but they can choose to provide more services. For instance, while Medicare fee-for-service requires that patients be homebound to qualify for home health services, Medicare HMOs can waive this restriction. In addition, HCFA guidance states that the HMO is allowed to direct the delivery of care. In contrast, a patient in fee-for-service may, in consultation with a physician, seek home health services without obtaining authorization from a third party—a requirement most HMOs impose. Medicare patients may appeal an HMO refusal to provide health services they believe are covered or medically necessary. If a patient appeals such a denial, the HMO must reconsider its initial decision. If the HMO's reconsideration is not fully favorable to the patient, the HMO must forward the appeal for independent review by a HCFA contractor—the Center for Health Dispute Resolution, formerly the Network Design Group—which makes the final reconsideration decision. If they are dissatisfied with this decision and the amount in dispute is $100 or more, HMO patients can take their appeals to an administrative law judge, as can fee-for-service patients. Contrasting financial incentives and different interpretations of the Medicare home health benefit have led to some divergence in the way home health services are used by HMO and fee-for-service providers. Staff at the six HMOs and the eight home health agencies we reviewed described different approaches to home health services among HMO and fee-for-service providers. The reports from the two groups suggest that these HMOs emphasize shorter-term rehabilitation goals, while fee-for-service providers may give more emphasis to social and environmental factors affecting service needs, especially in their use of home health aides. The coverage criteria for Medicare's home health benefit allow providers enough latitude to interpret the criteria in a manner that favors their financial interests. While HMOs control services more closely than fee-for-service providers, home health agencies that serve both HMO and fee-for-service patients told us they were generally able to obtain approval to provide services they considered sufficient for HMO enrollees. Some home health agency staff did express concerns about the HMOs' approaches to home health care; however, home health agency staff also acknowledged that fee-for-service patients sometimes receive unnecessary services. Home health agency staff described HMOs as having a somewhat different approach to home health than fee-for-service providers.
They told us that HMOs tend to focus more on shorter-term goals that allow the HMO to discontinue services as soon as possible. Staff at several HMOs we visited reported that their goal for home health services is to help patients function independently and not rely on home health care. To do so, they establish specific rehabilitation goals focused on a patient's needs. For instance, if a patient needs to climb six stairs to reach the bathroom at home, then the home health therapist will focus on this goal. Once the patient attains the specific goal, HMOs may terminate home health services if the patient does not require any other skilled nursing or skilled therapy care. Home health agencies also seek to achieve independence for their fee-for-service patients. However, in contrast to HMOs, some home health agencies reported taking a broader approach to patient functioning, providing additional services—especially supportive or aide services—that take into consideration the patient's overall condition and environment. With fee-for-service patients, home health agency staff said they tend to provide services over a longer period to ensure patients are fully healed and knowledgeable about the medical condition involved. In contrast, they said an HMO may authorize only a limited number of visits for the home health agency to teach a patient about his or her medical condition, even if environmental factors, such as family stress, suggest that the patient may have difficulty absorbing the information within the HMO's time frame. A nurse manager in one home health agency explained that under managed care, home health agencies are learning to focus on the problem at hand rather than trying to give patients services for unrelated or other chronic conditions. She explained that in fee-for-service, the home health agency's goal has been to resolve every condition that a patient had. For instance, if home health services were initiated because a diabetic patient had a wound that required skilled nursing care, a home health agency might review educational materials about diabetes with the patient, even if the patient had had diabetes for a number of years. In contrast, HMOs tend to focus on the specific condition that initiated the home health episode. Because HMOs are at risk for service costs that exceed the capitated payment, they generally seek to provide enough services to maintain or restore patient health and prevent the need for more expensive care, while not providing more care than necessary. While there are financial incentives to limit services, discontinuing services too soon could become more costly if patient conditions worsen. Balancing these financial and health interests can influence the use of home health services. For example, an HMO may not believe it necessary for a home health nurse to continue to visit a wound patient until the wound is completely healed, while a fee-for-service provider may. Applying the definitions of skilled services is not always straightforward and is based on clinical judgment in many cases. For example, the management of a care plan is considered a skilled service if it requires the skills of a nurse or therapist to ensure the patient's medical safety and recovery—even if all other services in the care plan are unskilled. Since such criteria are based on judgment and are open to interpretation, providers faced with borderline cases may make decisions that favor their financial interests.
The executive director of one home health agency noted that the definitions for certain types of skilled nursing and therapy services are vague and inconsistently interpreted in fee-for-service. The director for admissions at another home health agency said that there are always gray areas in the coverage guidelines and that fee-for-service providers tend to provide more services, while HMOs tend to provide fewer. HMOs report using their flexibility to provide additional benefits or waive Medicare requirements for their enrollees when doing so allows more cost-effective care. In general, the Medicare HMOs we visited reported that they occasionally covered more benefits than patients are entitled to under the Medicare fee-for-service program. For example, one HMO did not require that patients be homebound to receive home health services. Four other HMOs reported that while they formally required patients to be homebound, they would make exceptions if it would be cost-effective for the HMO and beneficial for the patient. In addition, two HMOs reported that if a patient had no skilled need but could not be at home without assistance, they would, in rare cases, provide aide services for a short period until other arrangements could be made. HMO and fee-for-service providers also differ in their use of home health aides. While custodial care—personal care that does not require the continued attention of trained professional staff—is generally excluded from Medicare coverage, Medicare can cover a home health aide to provide ongoing personal care services if the home care patient also requires intermittent skilled nursing or therapy services. Prior to the 1980 statutory changes and the 1989 court-ordered coverage guideline changes, the part A home health benefit had been used primarily for acute conditions following a hospitalization and not for chronic care. Many Medicare fee-for-service patients still receive home health services following hospitalization, but a growing number are receiving home care and aide services for long-term, chronic conditions not related to an acute episode. In a recent briefing, we reported that in the fee-for-service program, aide visits accounted for almost half of all home health visits in 1994 and that the percentage of patients receiving more than 90 visits tripled between 1989 and 1993, from 6 to 18 percent. In contrast, HMO staff told us they believe that Medicare home health services should not be expected to serve as long-term care for patients. Staff at many of the HMOs we visited expressed the belief that patients can become dependent on the assistance provided by aides and expect such services indefinitely. In their view, the fee-for-service system sometimes blurs the line between skilled and custodial care, creating unrealistic patient expectations about eligibility for Medicare home health services. In addition, some HMO and HCFA staff expressed the belief that home health aides are sometimes provided in the fee-for-service program as much for social reasons as for health reasons. A study of Medicare home health claims from 1993 also suggested that many fee-for-service aide visits may be for social and custodial care and only tangentially related to medical care. While the HMOs we visited generally do not provide home health aides for custodial purposes, most have a social service department or designated staff who try to arrange for community services.
Several HMOs also had special programs that provided supportive social services not directly related to health. For example, one HMO provided a respite benefit to full-time caregivers in the home to prevent caregiver burnout. Another HMO received a grant from a health care foundation to create a service credit bank, where enrollees who provide assistance, such as meal preparation and transportation, to frail enrollees are given credits that can be used to purchase similar assistance when needed (a simple sketch of such a credit ledger appears at the end of this discussion). The same HMO also helps enrollees access a friendly visitor program and a telephone reassurance program to provide social interaction and support. While these alternative services do offer some assistance to patients, they are unlikely to completely replace all of the personal care services that a home health aide can provide, such as assistance with bathing and dressing. Staff from several home health agencies noted that they have changed the way they treat fee-for-service patients by adopting an approach more compatible with that used by HMOs. They explained that they do not want to treat patients differently based solely on health insurance status and acknowledged that under fee-for-service, some patients may receive unnecessary care. One home health agency noted that it now puts more emphasis on patient education, while another reported that it no longer seeks to attain maximum functional levels for patients before they are discharged from home health care. The latter also noted that it now provides services for shorter periods and looks for community resources to provide assistance if a patient needs long-term help with some tasks, such as preparing insulin shots. Home health agency staff also told us that although they were usually able to negotiate acceptable levels of service with HMOs, HMOs occasionally "push the envelope" in terms of providing the fewest possible services. Some were concerned that HMOs occasionally have unrealistic expectations about how quickly certain patients can function independently and may lead patients to attempt more than they are able to do. For example, one home health agency reported that a local Medicare HMO, which was not part of our sample, may expect too much from the elderly population. The HMO has recommended clinical guidelines for coronary artery bypass surgery that call for patients to be discharged 4 days after surgery and authorizes only one home health agency visit following discharge. Because these patients generally are overwhelmed by the surgery and recovery, few can absorb all the necessary self-care information provided in this one visit. As a result, home health agency staff said that they have begun making follow-up calls to these patients on their own initiative. Other home health agencies noted that some HMOs may require certain wound care patients to provide their own wound care before they are able to do so. At the same time, some home health agencies noted beneficial changes in patient management that they believe arose from the influence of managed care. The director of one home health agency said that working with HMOs has taught her staff to develop reasonable, measurable goals and to focus their care on those goals. She believes that as a result, the quality of care provided has improved. The patient care coordinator at another home health agency noted that the agency is now more focused on functional outcomes and patient education.
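To make the service credit bank mechanism concrete, the short Python sketch below implements the exchange described above. The one-credit-per-hour rate and the balance rules are assumptions for illustration; the HMO's actual accounting was not described to us.

    # Minimal service credit bank: enrollees earn credits for hours of
    # assistance they provide and spend credits on assistance they receive.
    # The one-credit-per-hour rate is an assumed convention.
    from collections import defaultdict

    class ServiceCreditBank:
        def __init__(self) -> None:
            self.balances = defaultdict(int)

        def earn(self, enrollee: str, hours: int) -> None:
            self.balances[enrollee] += hours  # one credit per hour of help given

        def spend(self, enrollee: str, hours: int) -> bool:
            if self.balances[enrollee] < hours:
                return False                  # not enough credits banked
            self.balances[enrollee] -= hours
            return True

    bank = ServiceCreditBank()
    bank.earn("enrollee_a", 3)          # three hours of meal preparation
    print(bank.spend("enrollee_a", 2))  # True: two hours of transportation later
    print(bank.balances["enrollee_a"])  # 1 credit remaining

The design point is simply that credits are earned and spent in the same unit, so frail enrollees can draw on assistance they banked while healthier.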
The six Medicare HMOs we visited frequently review each home health patient's condition and progress; four also require preauthorization for home health services. This close management is intended to monitor both the cost and quality of care provided. In contrast, only a small percentage of claims in the fee-for-service program are actually reviewed by Medicare to assess whether they are reasonable and necessary. Moreover, these reviews are primarily paper reviews, which yield insufficient information to determine if the services provided are appropriate and meet Medicare criteria. Many fee-for-service home health agencies seek to manage patient care appropriately and cost effectively, but others may provide unnecessary services. As we reported in March 1996, inadequate controls make it nearly impossible to know whether a patient receiving home health care qualifies for the benefit, needs the care being delivered, or even receives the services being billed to Medicare. To more actively manage home health services, the HMOs we visited use case management and preauthorization strategies, utilization reviews, and selective contracting. Each of the six HMOs that we visited uses nurse case managers to follow each patient's progress and to determine when services can be discontinued. At two of the HMOs, the case managers operate out of a central office separate from the physician offices. The managers receive patient information, collaborate with physicians by phone as needed, and approve or disapprove requested services. At two other HMOs, the case managers work within the physician offices and make decisions about services to be provided in collaboration with the primary care physician. The case managers at the two remaining HMOs coordinate services but are not responsible for approving service levels because these HMOs do not have a preauthorization requirement. Staff from the home health agencies report that the HMO case managers review patient care plans much more frequently than the home health agencies review plans for their fee-for-service patients. At each HMO we visited, case managers generally review patient cases every few days to 2 weeks, depending on the patient's condition, to determine how much more care is needed. In the Medicare fee-for-service program, home health care plans must be reviewed by a physician at least every 62 days. While some home health agencies may develop shorter care plans, others routinely develop 62-day care plans for their fee-for-service patients. Moreover, when the initial 62-day period ends and a new care plan is written, the Medicare contractors who process fee-for-service home health claims do not routinely review the updated plans. HMO staff reported that their closer scrutiny of each patient is intended both to prevent the unnecessary utilization of services and to improve the quality of care. However, contracted home health agencies also noted that the scrutiny can sometimes be excessive and believe that it would save providers time and effort if they did not have to seek approval for care after two or three visits when it is obvious that certain patients, such as stroke patients, need additional visits. At one home health agency, a staff member noted that there is a difference between managing utilization and actually managing care. She noted that some HMOs focus more on managing utilization and have no direct contact with patients, which precludes them from assessing the individual needs of patients.
Medicare HMOs vary in terms of their organization, payment mechanisms for physicians and home health agencies, and authorization processes. These factors also influence the utilization levels and management of home health services. For example, some HMOs employ their own physicians and nurses and have no preauthorization requirements for home health services; however, many HMOs contract with large numbers of independent physicians and have more restrictive preapproval processes to control the use of services. Similarly, an HMO that pays for home health services on a capitated basis may have fewer controls on the use of services than an HMO that pays for each home health visit provided. In addition to using case managers to review and approve care, HMOs sometimes review aggregate data—such as utilization statistics, patient satisfaction survey data, or rehospitalization data—to monitor quality and identify possible aberrant utilization patterns. For example, one HMO monitors its contracted physician groups for underutilization and overutilization of services, using established benchmarks or HMO averages. The HMO identified one medical group with low utilization of home health services compared to the HMO average and asked the group to explain the disparity and provide any available information on patient satisfaction or patient outcomes. Another HMO has established screens, such as dehydration or readmission to a hospital, to identify instances of poor patient outcomes. If a provider has five or more instances during a 3-month period (for instance, five patients suffering from dehydration), the HMO will review the provider to determine if there are quality-of-care problems. However, if immediate action appears warranted, a physician may review cases sooner. HMOs also manage home health care more closely by restricting the number of home health agencies they use or by having common corporate ownership of the agencies used. Two of the HMOs we visited share common corporate ownership with one or more home health agencies that provide services almost exclusively to the HMOs' enrollees. This arrangement allows HMO and home health agency staff to work closely with each other to provide active oversight of the care provided. Two other HMOs are in the process of shrinking their home health agency networks to allow their staff to spend more time on site at these facilities, provide closer oversight of the care provided, and work with the contractors to manage enrollee care. One HMO reduced the number of home health agencies it contracted with from over 80 to only 2. Most of the HMOs are also establishing formal processes for credentialing home health contractors. Three recently published studies on home health use and our review of selected home health agencies provide evidence that Medicare HMO patients receive fewer home health visits than Medicare fee-for-service patients. These differences in utilization likely stem from HMOs' more active management of home health services and greater emphasis on rehabilitation and acute care, along with a lack of controls in the fee-for-service program and reported problems with overutilization. Underlying differences in the health status of the two populations may also contribute. Several studies suggest that, on average, Medicare beneficiaries who enroll in HMOs may be healthier than patients who remain in the Medicare fee-for-service program and, consequently, use fewer services.
One study, which compared the use of home health services by frail elderly Medicare patients in HMOs and fee-for-service, found that—after adjusting for differences in demographic, physical, mental, and functional status—HMO patients were just as likely to have home health episodes as fee-for-service patients but received 71 percent fewer visits. A second study, conducted by the Department of Health and Human Services' (HHS) Office of the Inspector General, found substantially fewer home health visits provided to Medicare HMO enrollees in 1994; however, the study did not adjust for differences in patient health and demographic status. A third study, funded by HCFA, found that Medicare HMO and fee-for-service patients received home health services for similar lengths of time; however, HMO patients averaged 13 visits per episode of care, while fee-for-service patients averaged 20 visits. Further analysis indicated that HMO patients received fewer home health services than similar fee-for-service patients, even after adjusting for differences in functional status, medical condition, and demographic factors. Home health agency staff generally agreed with these findings. Virtually all said that their HMO patients overall receive fewer services than fee-for-service patients. In particular, they described sizable differences in the use of home health aides. Some home health agency staff also said HMO patients may receive fewer skilled care services, such as therapy services. In some cases, they attributed lower utilization of aides to earlier termination of home health services by HMOs. One large urban home health agency compared its 1996 Medicare fee-for-service and Medicare HMO patients and found statistically significant differences in use. When fee-for-service patients were matched with HMO patients for age and gender, the HMO group had fewer total visits and fewer visits for most service types—including physical therapy and skilled nursing—as well as shorter episodes of care, fewer comorbidities, and somewhat different diagnostic groupings. (See table 1.) Because the number of visits per week by service type was generally similar for the two groups, these overall utilization differences likely stem from the fact that HMO patients generally received services over a shorter period than fee-for-service patients. When the analysis was restricted to patients with a primary diagnosis involving the circulatory system, the home health agency found that differences in the total number of visits increased with the length of the care episode. (See tables 2 and 3 for a summary of this comparison.) HMO patients were almost twice as likely to have a shorter episode of care. For the shortest episodes of care (under 31 days), there were relatively small, and not statistically significant, differences in the number of home health services between the fee-for-service and HMO patients. Greater differences, especially in the use of aides, were found for patients with longer episodes of home health care. A recent analysis by the Kaiser Family Foundation indicated that many Medicare fee-for-service home health patients are sick and functionally impaired and increasingly rely on home health services to fulfill long-term care or complex medical needs. The analysis found that only about one-third of fee-for-service home health users were receiving home health services after hospital discharge to meet a short-term, post-acute need. The remaining two-thirds received more visits over a longer period.
Half of this group were seriously ill, had complex medical problems, and used more hospital care than other fee-for-service home health users. The other half were medically stable but functionally impaired and used home health care, especially aide services, to meet long-term care needs. Information is not available on either the prevalence of chronically ill beneficiaries who enroll in HMOs or their receipt of services. Therefore, the effect of HMOs’ emphasis on short-term rehabilitation and functional improvement on service utilization by chronically ill beneficiaries is unknown. Currently, HCFA has little data on home health services provided by HMOs to Medicare enrollees. Without information on the care provided, HCFA cannot target plans or patient groups for further review. Home health agency and HCFA staff told us that it is difficult to evaluate the significance of home health care utilization differences between managed care and fee-for-service settings without comparative data on patient outcomes—information that links the care provided to the patient’s health status. HCFA has initiatives under way to collect some information on patient outcomes from home health services, but those data will not be available for some time. In their absence, we reviewed a sample of appeals cases to see if they reveal any systemwide issues regarding access to care. However, because of the low number of appeals and their focus on administrative rather than clinical issues, the cases offered little insight regarding HMOs’ provision of home health care. HCFA has little information about how much or what types of home health care HMO enrollees are receiving. Therefore, HCFA cannot use indicators, such as low utilization levels, to target patient groups or plans for more detailed review. Because HMOs are paid on a capitated basis to provide all Medicare-covered services to enrollees, HCFA does not receive claims for the services provided. In addition, HMOs are not required to provide data on utilization levels for home health services. While HCFA reviews Medicare HMO performance at least every 2 years, these reviews do not specifically target home health care. As we noted in 1995, HCFA’s routine reviews focus on whether the HMO has capable staff and appropriate procedures for quality assurance and utilization management, rather than whether those systems actually operate effectively and ensure that HMOs make appropriate care decisions. At the same time, there are currently few, if any, generally accepted standards for home health care that could be useful in evaluating utilization data or other information about care provided to Medicare enrollees. Although HCFA and home health agency staff told us that it would be impossible to evaluate the significance of utilization differences without data on patient outcomes, comparative information on utilization levels could be a useful monitoring tool. Utilization data can be used to identify home health agencies, HMOs, or patient groups whose atypical utilization may indicate quality of care problems and thus enable HCFA to target potential problem providers for further review and analysis. For example, at least two state Medicaid programs use encounter data as an indicator of potential under- or overutilization of services. In the Medicare fee-for-service program, this technique has been used successfully to identify providers with fraudulent or abusive billing practices.
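The monitoring techniques described above, count-based outcome screens (for example, five dehydration cases in a 3-month period) and comparison of a provider's average utilization against a plan-wide benchmark, lend themselves to a simple illustration. The following minimal sketch, in Python, is ours rather than any plan's actual system: the record layout, thresholds, and provider identifiers are hypothetical, and real screens would be risk-adjusted and clinically reviewed.

    from collections import defaultdict
    from statistics import mean

    # Hypothetical quarterly records: (provider_id, patient_id, visits, adverse_events).
    # Thresholds mirror the kinds of screens described above; actual plan
    # parameters were not disclosed to us and would differ by HMO.
    ADVERSE_EVENT_THRESHOLD = 5   # e.g., five dehydration cases in a 3-month period
    UTILIZATION_BAND = 0.50       # flag averages 50 percent above or below the plan mean

    def flag_providers(encounters):
        """Return provider IDs whose quarterly data warrant further review."""
        visits_by_provider = defaultdict(list)
        adverse_by_provider = defaultdict(int)
        for provider_id, _patient_id, visits, adverse_events in encounters:
            visits_by_provider[provider_id].append(visits)
            adverse_by_provider[provider_id] += adverse_events

        # Plan-wide benchmark: mean visits per patient across all providers.
        plan_mean = mean(v for visit_list in visits_by_provider.values() for v in visit_list)

        flagged = {}
        for provider_id, visit_list in visits_by_provider.items():
            reasons = []
            if adverse_by_provider[provider_id] >= ADVERSE_EVENT_THRESHOLD:
                reasons.append("adverse-event screen")    # possible poor outcomes
            if abs(mean(visit_list) - plan_mean) > UTILIZATION_BAND * plan_mean:
                reasons.append("utilization outlier")     # possible under- or overutilization
            if reasons:
                flagged[provider_id] = reasons
        return flagged

    # Example: provider "B" shows both low utilization and repeated poor outcomes.
    quarter = [
        ("A", 1, 18, 0), ("A", 2, 22, 1),
        ("B", 3, 4, 3),  ("B", 4, 5, 2),
        ("C", 5, 19, 0), ("C", 6, 21, 0),
    ]
    print(flag_providers(quarter))  # {'B': ['adverse-event screen', 'utilization outlier']}

As described above, a flag is only a trigger for further review; a case manager or physician would examine the flagged cases before drawing conclusions about quality of care.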
HCFA is currently collecting encounter data in one state as a pilot project but has no definitive plan to collect these data on a nationwide basis. To date, research comparing the health outcomes of HMO and fee-for-service patients has been limited, partly because of the difficulty in defining and measuring an array of health outcomes that consider both skilled and unskilled services. The 1995 HCFA-funded study comparing home health utilization of Medicare HMO and fee-for-service patients was the only study we identified that attempted to measure patient outcomes. The results suggest that HMO patients may experience slightly worse outcomes than fee-for-service patients. However, because the study included only patients who were beginning a home health episode and followed them for only 12 weeks, it may not include many patients receiving home health services for chronic conditions. HCFA recently announced that within the next few years it plans to collect some outcomes data from all home health agencies that provide care to Medicare HMO or fee-for-service patients through a standardized patient assessment data set, known as OASIS (Outcome and Assessment Information Set). The OASIS data set will collect information on a number of health status measures, such as ability to walk after hip replacement surgery; mental status; and ability to perform activities of daily living, like bathing or eating. HCFA may use OASIS data to monitor HMOs and the effectiveness of the home care they provide. Patients with chronic illnesses and conditions, however, may not experience the types of substantial improvements or restoration of function that can be measured easily through such outcomes data. The needs of the chronically ill for ongoing assistance to maintain health status and functional ability may also conflict with medical necessity standards used by some managed care plans that focus on rehabilitation. Some state Medicaid programs have recognized similar concerns in contracting with managed care plans for disabled recipients. They have included an explicit definition of medical necessity in HMO contracts that includes services necessary to maintain a patient’s existing level of functioning. Data on the number and results of appeals filed by Medicare patients who are dissatisfied with HMO care decisions are among the few currently available indicators that might be useful in evaluating HMO home health care. We reviewed 48 home health appeals filed by Medicare HMO patients during a 2-1/2-year period and found that HCFA’s appeals contractor upheld most of the HMOs’ denials. However, the usefulness of such data as an indicator of patient satisfaction may be limited by several factors. First, the small number of home health appeals limits their reliability as an indicator. In 1996, HCFA’s appeals contractor received only 165 appeals involving home health services from the approximately 4 million Medicare beneficiaries enrolled in risk-contract HMOs. Second, in 60 percent of the cases we reviewed, the appeals contractor decided the case based on whether the HMO and the patient followed correct administrative procedures, rather than the appropriateness of the HMO’s clinical decision or the sufficiency of the services provided.
Finally, because of weaknesses in the appeals system—including incomplete HMO compliance with the appeals process, limited enrollee awareness of appeal rights, and beneficiaries’ ability to disenroll rather than appeal a denial—not all enrollee concerns about access to home health care reach the appeals contractor. HMOs’ more active management of home health services and their focus on shorter-term rehabilitation likely contribute to their Medicare enrollees receiving fewer services than their fee-for-service counterparts. Currently, however, HCFA has little data available to evaluate whether differences in home health care utilization are appropriate. Given the growth in Medicare HMO enrollment, ensuring that HMOs meet the home health needs of all enrollees, particularly those with chronic conditions, will become increasingly important. HCFA plans to collect outcomes data for home health services; however, this information will not be available for several years and may provide only a partial picture of the care provided by HMOs. Still, without such data, it is difficult to determine to what extent utilization differences are appropriate or represent unnecessary services provided in fee-for-service or insufficient services provided by HMOs. In the meantime, HCFA cannot determine whether the needs of particularly vulnerable beneficiaries—such as those with medically complex needs and chronic conditions—are being met in HMOs. While there are no generally accepted standards regarding the appropriate level of services for home health patients, identifying and reviewing HMOs and patient groups with aberrant utilization patterns could help focus oversight on potential problems—a technique that has been used successfully in the Medicare fee-for-service program. In addition, recognizing the unique needs of chronically ill enrollees and defining expectations for their care may assist beneficiaries with chronic conditions in deciding whether to enroll in an HMO, as well as facilitate HCFA’s oversight of the care provided to these enrollees. We provided a draft of this report to HCFA officials, who suggested that we clarify that HCFA’s 1989 changes to its home health coverage regulations were made in response to statutory changes and a court order. We have clarified those sections of the report and made other technical changes recommended by HCFA officials. In addition, we provided a draft of this report to each of the HMOs we visited, the Center for Health Dispute Resolution, the National Association for Home Care, the American Association of Health Plans, and two of the home health agencies we interviewed. Most provided technical or clarifying comments, which we incorporated as appropriate. The National Association for Home Care expressed concern that some HMOs use restrictive policies that conflict with what Medicare beneficiaries are entitled to receive under the Medicare home health benefit. The limited scope of our study precluded us from addressing this issue. While we did note some differences in the provision of home health services by HMO and fee-for-service providers, we did not collect information that would allow us to comment on the appropriateness of care offered to the two groups of patients. As agreed with your office, unless you release its contents earlier, we plan no further distribution of this letter for 30 days. At that time, we will send copies to other interested parties and make copies available to others on request. This report was prepared by Sara Galantowicz and Michelle St.
Pierre, under the direction of William Reis, Assistant Director. Please call me at (202) 512-7114 or Mr. Reis at (617) 565-7488 if you or your staff have any questions about the information in this report. To collect information on how Medicare HMOs manage home health services, we visited six Medicare HMOs, conducted phone interviews with home health agencies that contracted with these HMOs to provide home health services, and reviewed appeals from Medicare HMO enrollees who were denied home health services. We interviewed staff from HCFA’s central office and several of its regional offices. We also reviewed pertinent laws, regulations, HCFA policies, and research comparing utilization and outcomes between Medicare HMO and fee-for-service patients. We conducted our study from March 1996 to July 1997 in accordance with generally accepted government auditing standards; however, we did not independently verify the utilization data obtained from one home health agency. The 6 HMOs we visited accounted for about 10 percent of all Medicare enrollees in the 292 risk-contract Medicare HMOs as of August 1, 1997. We chose the specific HMOs to include a variety of HMO models and a variety of contracting relationships with home health agencies, but they should not be considered representative of all Medicare risk-contract HMOs. Three of the six HMOs were nonprofit and three were for-profit. Two were group/staff model HMOs, two were independent practice association (IPA) models, and two represented mixed IPA/group models. Two HMOs shared common corporate ownership with the home health agencies that provided essentially all home health services for the HMOs’ Medicare enrollees. The remaining HMOs contracted with a variety of independent home health agencies. In selecting HMOs, we also sought some geographic diversity—three of the HMOs are on the East Coast and three are on the West Coast. Given the number and diversity of HMOs and home health agencies that participate in the Medicare program, we cannot generalize from the small number that we visited. At each HMO we interviewed case managers, utilization review staff, quality assurance staff, and other knowledgeable staff about how the HMO manages home health services. At one HMO, which capitates payments to its physician groups and delegates the utilization management function to the physicians, we also interviewed case managers at two of the contracted physician groups. We also interviewed staff at 10 home health agencies that provide services to the HMOs we visited to discuss the management of Medicare HMO home health patients compared to Medicare fee-for-service patients; 8 of the 10 provided services to both. In most cases, we interviewed at least two home health agencies that contracted with the HMOs we visited—some of which contracted with more than one of the HMOs. Finally, we reviewed a sample of appeals filed by Medicare HMO patients and decided by HCFA’s HMO appeals contractor, the Center for Health Dispute Resolution (CHDR). The Medicare HMO appeals process is a two-step process, in which the HMO itself first reconsiders its original denial. If the HMO’s reconsideration is not fully favorable to the beneficiary, the HMO is required to forward the appeal to CHDR to make the final reconsideration decision. We did not review HMO-level appeals because HCFA does not maintain data on appeals at that level, making it impossible to identify the universe of appeals and to draw a sample. 
However, the six plans we visited reported that nearly all appeals in the past year involving home health services were forwarded to CHDR. From a universe of 254 home health appeals decided by CHDR between January 1, 1994, and August 23, 1996, we selected a random sample of 48 cases, or 18.9 percent. The appeals came from all Medicare HMOs, not just the six we visited. While this sample is representative of all CHDR-level appeals cases decided during the sample time frame, it should be noted that the appeals that reach CHDR represent only a fraction of all disputes because not all initial HMO denials are appealed or even recognized, and others may be overturned at the plan level. As noted in the body of this report, HMO patients may choose not to appeal an HMO denial, either because they are not aware of their appeal rights or because they choose to disenroll from the HMO. Also, Medicare HMOs do not always forward appropriate appeals to HCFA’s contractor, as reported in a recent HHS Office of the Inspector General study. Medicare Home Health Agencies: Certification Process Is Ineffective in Excluding Problem Agencies (GAO/T-HEHS-97-180, July 28, 1997). Medicare: Need to Hold Home Health Agencies More Accountable for Inappropriate Billings (GAO/HEHS-97-108, June 13, 1997). Medicare HMOs: HCFA Can Promptly Eliminate Hundreds of Millions in Excess Payments (GAO/HEHS-97-16, Apr. 25, 1997). Medicare: Home Health Cost Growth and Administration’s Proposal for Prospective Payment (GAO/T-HEHS-97-92, Mar. 5, 1997). Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996). Medicare: Increased HMO Oversight Could Improve Quality and Access to Care (GAO/HEHS-95-155, Aug. 3, 1995).
GAO provided information on home health services provided by Medicare health maintenance organizations (HMO), focusing on: (1) how Medicare HMOs provide and manage home health services, as compared to fee-for-service providers; and (2) what is known about the appropriateness of home health services provided to HMO enrollees, especially to vulnerable populations. GAO noted that: (1) since the late 1980s, when the Congress and the courts liberalized Medicare coverage of home health services, the contrasting financial incentives of HMO and fee-for-service providers have led to some divergence in the use of these services; (2) fee-for-service providers generally have responded to the increased latitude in the home health benefit by providing more patients with more services for longer periods, in some cases providing excessive services; (3) in contrast, home health agencies and HMOs tend to emphasize shorter-term recuperation and rehabilitation goals--much as fee-for-service providers did prior to the changes in coverage guidelines; (4) differences between HMO and fee-for-service providers are most apparent in the use of home health aides; (5) in the fee-for-service program, the use of home health aides to provide long-term care for patients with chronic conditions is growing, whereas the six HMOs GAO visited report that they do not provide aide services on a long-term basis; (6) typically, Medicare HMOs manage home health care much more actively than the fee-for-service program; (7) in contrast, the fee-for-service program has less effective controls for preventing unnecessary and noncovered services; (8) home health utilization differs between HMO and fee-for-service patients; (9) the greater emphasis on short-term goals and the more active management of care by HMOs likely contribute to shorter episodes of care and the use of fewer home health visits, especially by home health aides; (10) in addition, data from one managed care market suggest utilization differences are more pronounced for longer-term home health patients; (11) given the approach to home health care by some Medicare HMOs, including a greater focus on post-acute needs, Medicare beneficiaries with long-term care needs and chronic illnesses enrolled in HMOs may not receive the same services as they would in fee-for-service Medicare; (12) despite these differences in utilization, the Health Care Financing Administration (HCFA) does not have the information it needs to evaluate the home health care patients receive in either the HMO or fee-for-service program; (13) HCFA does not review home health care during monitoring visits to HMOs; and (14) HCFA plans to collect some outcomes information, but it will not be available for some time.
Under the 1958 Geneva Convention on the High Seas and the United Nations Convention on the Law of the Sea (UNCLOS), piracy consists of any of several acts, including any illegal act of violence or detention, or any act of depredation, committed for private ends by the crew or the passengers of a private ship and directed against another ship or aircraft, or against persons or property onboard another ship or aircraft, on the high seas, or against a ship, aircraft, persons, or property in a place outside the jurisdiction of any state. According to both conventions, all states have the duty to cooperate to the fullest extent possible in the repression of piracy on the high seas or in any other place outside the jurisdiction of any state and are authorized to seize pirate ships or a ship under the control of pirates, arrest the persons onboard, and seize the property onboard on the high seas or in any other place outside the jurisdiction of any state. When crimes that would constitute piracy are committed in the territorial waters of a coastal state, they are generally referred to as maritime crime. For the purposes of this report, we describe the criminal conduct in the Gulf of Guinea as piracy and maritime crime in order to include piracy on the high seas (i.e., outside the jurisdiction of any one sovereign state), as well as hijacking, armed robbery, kidnapping, and attempts at these crimes within the territorial waters of a state. Piracy and maritime crime off the Horn of Africa and in the Gulf of Guinea affect countries around the globe. In 2013, over 42,000 vessels transited the waters off the Horn of Africa, which include some of the world’s busiest shipping lanes. Within these waters, pirates target merchant vessels, fishing ships, and dhows. Since 2008, the UN Security Council has adopted a number of resolutions related to countering piracy off the Horn of Africa. Similarly, in 2011 and 2012, recognizing the Gulf of Guinea’s critical shipping and global energy resources, the Security Council adopted resolutions that expressed deep concern about the threat that piracy and armed robbery at sea in the Gulf of Guinea pose to international navigation, security, and the economic development of states in the region. The types of crime, vessel traffic, and coastal states’ jurisdictional responses to address the piracy problem off the Horn of Africa and in the Gulf of Guinea generally differ, as does the U.S. response. DOD and State officials described the following as key differences: Types of crime: Piracy off the Horn of Africa is generally characterized by ransom-seeking, in which pirates attack ships for their crew, cargo, or the ship itself, which are often held hostage for months or years to obtain millions of dollars in ransom. In the Gulf of Guinea, piracy is generally characterized either as armed robbery—such as petroleum tanker hijackings to steal a ship’s oil—or targeted kidnappings for ransom near or within the Niger Delta, according to DOD officials. Additionally, unlike the hostage-taking off the Horn of Africa, in which a vessel and its crew may be held for months or years for millions of dollars in ransom, the kidnappings off the Niger Delta last days or weeks, involve thousands of dollars in ransom, and do not necessarily involve the hijacking of a vessel. In general, pirates hijack tankers and their crews only for the time it takes to offload the oil. Vessel traffic: The nature of how vessels travel through the regions also differs.
Sea traffic off the Horn of Africa is characterized by large, high-speed cargo vessels transiting through the Gulf of Aden and Indian Ocean. Piracy in this region generally involves pirates pursuing and boarding moving vessels. In contrast, in the Gulf of Guinea, commercial vessels generally are smaller and operate closer to shore, slowing down to make port calls and stopping at offshore facilities in territorial waters or in the exclusive economic zones of coastal states. The slow speeds and stationary positions make these vessels vulnerable to piracy and maritime crime. Jurisdiction and response: U.S. efforts to combat piracy off the Horn of Africa and maritime crime in the Gulf of Guinea evolved in response to the particular characteristics of piracy and maritime crime in each region and the extent to which the United States has jurisdiction and coastal states have the capability to respond. For example, the UN authorized international militaries and organizations to enter Somali territorial waters and economic zones to conduct counterpiracy operations and patrols as though they were international waters. The transitional and new Somali governments have relied on the assistance of international militaries as they build their maritime security capacities. Conversely, in the Gulf of Guinea, maritime security in territorial waters is under the authority of the respective recognized national governments in the region. Figures 1 and 2 show the number of attempted and successful pirate attacks off the Horn of Africa and in the Gulf of Guinea, respectively, from 2010 through 2013; a noninteractive version appears in appendix II. In addition to the types of crimes, vessel traffic, and jurisdiction, other characteristics, such as the reporting of incidents by vessel owners and operators and the ability of pirates to use land-based safe havens for operations, create differences between piracy off the Horn of Africa and piracy and maritime crime in the Gulf of Guinea. These differences are summarized in table 1. Since 2008, the international community has taken steps to respond to piracy off the Horn of Africa, including patrols by the United States, NATO, the European Union (EU), and others in waters near Somalia; the establishment of international naval task forces with specific mandates to conduct counterpiracy operations; and the formation of a voluntary multilateral Contact Group to coordinate international counterpiracy efforts, such as the development of industry practices and the coordination of international law enforcement. Recognizing that vibrant maritime commerce underpins global economic security and is a vital national security issue, the United States has also developed policies and plans to collaborate with its international partners and to mobilize a U.S. interagency response. In December 2008, the NSC published the Action Plan, which discusses countering piracy emanating from Somalia. The Action Plan directed the Secretary of State and Secretary of Defense to establish a high-level interagency task force—the Counter-Piracy Steering Group—to coordinate, implement, and monitor the actions contained in the plan. In addition, the NSC directed that DOD, DHS, DOJ, State, DOT, the Treasury, and the Office of the Director of National Intelligence undertake coordinated initiatives in accordance with the plan, subject to available resources.
Piracy activity off the Horn of Africa has declined, as indicated by the number of incidents reported, the number of hostages taken, and the amount of money paid in ransoms in 2013 as compared with recent years. In September 2010, we reported that successful and attempted piracy attacks off the Horn of Africa had risen from 30 in 2007 to 218 in 2009. Our analysis of data provided by the IMB, which collects reported incidents from ship owners and operators, shows that the number of piracy incidents continued to rise to 235 in 2011, but declined thereafter to 15 total incidents in 2013, as shown in figure 3. At the same time, the number of hostages taken during pirate attacks rose from 815 in 2008 to 1,016 in 2010, but declined to 34 in 2013, as shown in figure 4. As the number of hostages taken during piracy incidents rose, the amount of ransom money collected by pirates also increased. According to the UN Office on Drugs and Crime and the World Bank, low estimates of the total dollar amount of ransoms paid to free hostages rose from $2.4 million in 2007 to $151.1 million in 2011 but declined to $36.4 million in 2012. While ransoms paid averaged an estimated $1.2 million in 2007, the estimated average rose to $4 million in 2012, as shown in figure 5. According to State Department officials, as of the end of 2013, there were at least 49 hostages from 11 countries held by Somali pirates. The Action Plan establishes the U.S. role in countering piracy as a collaborative one, seeking to involve all countries and shipping-industry partners with an interest in maritime security. DOD, DHS, DOJ, State, DOT, and the Treasury, in collaboration with their international and industry partners, have implemented steps in the Horn of Africa across the three lines of action established in the Action Plan, which are to: (1) prevent piracy attacks by reducing the vulnerability of the maritime domain, (2) disrupt acts of piracy in ways consistent with international law and the rights and responsibilities of coastal and flag states, and (3) ensure that those who commit acts of piracy are held accountable for their actions by facilitating the prosecution of suspected pirates. U.S. agencies, in collaboration with their international and industry partners, have taken several steps to deter pirates and reduce the vulnerability of ships transiting off the Horn of Africa. DOD and State officials and representatives from each of eight shipping industry associations we met with emphasized that these prevention efforts work together and described the following as examples of key prevention efforts. Working with industry: U.S. agencies have worked with industry partners to develop guidance and requirements for implementing counterpiracy efforts. For example, the Coast Guard issued Maritime Security (MARSEC) Directives that provide guidance to owners and operators of U.S. vessels on how to respond to emerging security threats. These directives include practices that help to prevent pirate attacks and require that vessels operating in high-risk waters update their vessel security plans to include security protocols for terrorism, piracy, and armed robbery against ships. Among other things, these plans cover the need for enhanced deterrence, surveillance, and detection equipment; crew responses if a potential attack is detected or is underway; and coordination with counterpiracy organizations that could be of assistance. The practices are mandated for U.S.
flag vessels operating in high-risk waters and are also recommended for foreign flag vessels in the Coast Guard’s Port Security Advisories and in the International Maritime Organization’s (IMO) Maritime Safety Committee circulars. Additionally, the Coast Guard and DOT’s Maritime Administration (MARAD) co-chaired Working Group 3 of the Contact Group on Piracy off the Coast of Somalia, which focused on industry awareness. Through this working group, industry practices were developed and enhanced through the Best Management Practices for Protection against Somalia Based Piracy (BMP), which the working group’s maritime industry representatives developed to deter, prevent, and deny incidents of piracy off the Horn of Africa. The BMP was introduced in 2008 as a joint industry strategy and has been updated based on lessons learned from investigated piracy incidents throughout the region. Version 4 of the BMP was issued in August 2011 and recommends 14 specific actions shipping companies can take to mitigate pirate activity while transiting high-risk waters off the Horn of Africa. Examples of these ship protection measures include providing additional lookouts during watch periods, enhancing the ship’s physical barriers, and establishing a safe point or secure citadel on the ship to ensure the safety of the crew and vessel during a pirate boarding. Use of the BMP is not mandatory; rather, officials from each of the eight shipping industry associations we interviewed described the BMP as a tool kit of practices the ship’s master can tailor to the situation and risks that the ship faces. Officials from an insurance industry association we met with stated that its members encourage implementation of the practices and consider the steps that vessel owners have taken to mitigate risks when pricing products. Of the various implemented practices, officials from the six U.S. agencies engaged in counterpiracy activities and the eight shipping industry associations we interviewed described the use of privately contracted armed security personnel on ships as a key factor in reducing the number of piracy incidents off the Horn of Africa. However, each of the eight shipping industry associations we interviewed stated that they do not want armed security teams to become a standard long-term practice, primarily because of the hazards involved with the use of force and weapons aboard ships as well as the expense, with an average cost of about $5,000 per day for a four-person security team. These officials added that, in comparison, requiring crew to continuously look out for suspicious activity is a relatively low-cost measure compared to deploying armed security personnel with a vessel, a burden that could be too costly for smaller shipping companies. As security costs become a concern and the threat of piracy declines, DOD, EU, and NATO officials expressed concern that some in the shipping industry may seek to reduce the size and qualifications of the security teams as well as the hours they are deployed to protect the ship. Strategic communication: According to officials from DOD’s AFRICOM, strategic communication from Somali radio stations is also an effective method of preventing piracy. These officials stated that the United States and its international partners have supported a partnership with Somali radio stations to bring awareness to the Somali public about the dangers of piracy and acts of abuse that hostages may endure. U.S.
efforts to disrupt acts of piracy involve working with international partners to position resources to interdict pirates at sea and prevent the financing of pirates on land. DOD, State, Treasury, EU, and NATO officials described the following as examples of U.S. efforts intended to disrupt acts of piracy. Maritime coalition operations: DOD, State, industry, EU, and NATO officials cited the presence of international navies in the region as a key factor in interdicting and disrupting pirate activity. Three multinational maritime coalition operations—the Combined Task Force (CTF) 151, EU Naval Forces (EU NAVFOR) Operation Atalanta, and NATO’s Operation Ocean Shield—along with independent deployments from countries outside of NATO and the EU, such as China, India, Japan, and South Korea, have worked to protect the waters off the Horn of Africa and the Internationally Recommended Transit Corridor (IRTC). U.S. involvement in these activities is primarily through participation in CTF 151 and NATO’s Operation Ocean Shield. DOD and State officials stated that these operations are effective in establishing a protective force in a region that is growing its own capabilities and have allowed the United States to build new partnerships with navies from around the world. U.S. presence: According to Navy officials, while as of 2013 the United States no longer regularly dedicates naval vessels to CTF 151, the U.S. presence plays an important role in fostering the participation of other countries in the task force. Additionally, the Navy may task ships from other missions, such as counternarcotics or counterterrorism, into the task force on a given day or for short periods to respond in an emergency if they are the closest or most appropriate—consistent with the overarching goal of preserving safety of life at sea. The United States has regularly provided, from 2010 through 2014, at least one ship in support of NATO’s Operation Ocean Shield counterpiracy mission. As incidents of piracy have declined off the Horn of Africa, the number of steaming days has also declined, as shown in figure 6. Disrupting pirate financing: To help disrupt pirate revenue, the U.S. Treasury is authorized, through the application of Executive Order 13536, to block financial transactions of known pirate actors when there is a nexus to U.S. interests. The Treasury can impose sanctions on individuals providing funds to known pirate actors and can block a transaction if it involves a U.S. financial institution. Officials from an insurance industry association we met with stated that ship owners can carry insurance policies that reimburse companies for ransom paid as a result of pirate attacks. According to Treasury officials, members of the U.S. and international shipping industry initially expressed concerns that the ransoms paid and reimbursed by their policies could be prohibited by the executive order. Treasury officials also stated that the order has specific application, is applied on a case-by-case basis, and, as of March 2014, had not been formally applied in response to a potential ransom payment. The Action Plan aims to ensure that those who commit acts of piracy are held accountable for their actions by facilitating the prosecution of suspected pirates and, in appropriate cases, prosecuting pirates in the United States. Officials from DOD, State, and DOJ described several examples of how the United States plays a role in making sure pirates are brought to justice.
Building law enforcement capabilities: The United States helps expand law enforcement capabilities within the region through two key efforts. First, the Naval Criminal Investigative Service conducts investigations and has developed a manual that provides recommendations to law enforcement agencies investigating acts of piracy at sea. Second, the United States has contributed to a piracy database administered by INTERPOL that allows law enforcement agencies to access evidence connected to piracy incidents, although U.S. investigations are primarily focused on piracy incidents with a nexus to U.S. interests. Judicial capacity building: U.S. agencies have also provided piracy-related judicial capacity-building assistance to countries in the region, such as Kenya and the Seychelles, for law enforcement and prosecutions. These activities have included establishing regional courts and building prisons in Somalia. Additionally, DOD, DOJ, and State have worked with international partners to ensure that pirates are tried and held accountable for their crimes by facilitating prosecution agreements. As of November 2013, 1,130 Somali pirates in 22 nations had been detained for trial, were on trial, or had been convicted. U.S. prosecutions: The United States has jurisdiction to prosecute anyone who commits the crime of piracy as defined by the law of nations on the high seas and is later brought to or found in the United States. U.S. government prosecutions have resulted in the conviction of at least 28 Somali pirates since 2010. In 2010, five men from Somalia were convicted of piracy and related offenses by a federal jury in what, according to DOJ officials, is believed to be the first piracy trial conviction in the United States since 1820 and is seen as the first in a series of government prosecutions aimed at slowing the spread of piracy off Africa. In February 2013, a federal jury found five Somalis guilty of engaging in piracy and other offenses in connection with the attack on the Navy ship USS Ashland. Additionally, in November 2013, a Somali pirate involved in the shooting of four Americans aboard a yacht off the coast of Somalia during a failed kidnapping attempt was sentenced to 21 life terms for his role in their deaths. Also, DOD, State, and DOJ officials stated that these prosecutions send a message that piracy carries serious consequences and may serve as a deterrent to others involved in piracy. However, DOJ and State officials told us that, especially in cases where the hijacked vessel or crew has little or no connection to the United States, a more appropriate role for the United States would be to provide technical assistance to other countries in prosecuting pirates. Appendix III provides a summary of the three lines of action and specific activities in the Action Plan. DOD, State, the U.S. Coast Guard, DOJ, DOT, and the Treasury attribute the decline in piracy attacks to the collective implementation of these actions. Officials from these agencies noted that the efforts of governments and the industry practices work together to reduce vulnerabilities and prevent attacks. DOD, State, EU, and NATO naval officials, as well as officials from the eight shipping industry associations we interviewed, cautioned that discontinuing counterpiracy efforts could provide opportunities for piracy to resurge off the Horn of Africa. They stated that piracy off the Horn of Africa is a crime of opportunity driven by economic conditions in Somalia that have not been addressed.
They noted that the practices in place have reduced the likelihood of a successful pirate attack by increasing the risk to pirates, but the capability and motivation of pirates have not changed. The Action Plan was published in December 2008, when piracy off the Horn of Africa was on the rise, but has not been updated, as we recommended in 2010, to reflect changing dynamics in piracy, such as industry’s use of armed security teams or the sharp decline in piracy incidents, or to implement recommendations we previously made to include elements of a strategic approach. The Action Plan was developed to identify and implement measures to suppress pirate activity off the Horn of Africa. Its intent was to respond to the growing threat and be mutually supportive of longer-term initiatives aimed at establishing governance, rule of law, security, and economic development in Somalia. In September 2010, we reviewed the Action Plan, which implements the National Strategy for Maritime Security and the Policy for the Repression of Piracy and other Criminal Acts of Violence at Sea as applied to piracy off the Horn of Africa. At that time, we found that the Action Plan had not been revised to reflect adapted piracy tactics and did not designate which agencies should lead or carry out most activities. Additionally, we found that the National Security Council Staff (NSCS) did not fully include characteristics of a strategic approach in the Action Plan, such as measures to evaluate the effectiveness of U.S. resources applied to counterpiracy, the identification of roles and responsibilities, or the cost of U.S. activities relative to the benefits they achieved. As a result, in September 2010 we recommended that the NSCS, in collaboration with the Secretaries of Defense, State, Homeland Security, Transportation, and the Treasury, as well as the Attorney General: (1) reassess and revise the Action Plan to better address evolving conditions off the Horn of Africa and their effect on priorities and plans; (2) identify measures of effectiveness to use in evaluating U.S. counterpiracy efforts; (3) direct the Counter-Piracy Steering Group to identify the costs of U.S. counterpiracy efforts, including operational, support, and personnel costs, and assess the benefits and effectiveness of U.S. counterpiracy activities; and (4) clarify agency roles and responsibilities and develop joint guidance, information-sharing mechanisms, and other means to operate across agency boundaries for implementing key efforts such as strategic communication, disrupting pirate revenue, and facilitating prosecution. Since we issued our report in 2010, conditions off the Horn of Africa have continued to change in many ways from when the Action Plan was developed in 2008. However, as of June 2014 the NSCS had not fully implemented the four recommendations from our September 2010 report, as summarized in table 2. Our recommendations were made to the National Security Staff, which changed its name to the National Security Council Staff pursuant to Executive Order 13657, dated February 10, 2014. Action Plan not updated: In September 2010, we recommended that the NSCS update the Action Plan because piracy was increasing and pirate tactics were changing. Since that time, conditions have continued to evolve off the Horn of Africa. Industry has made frequent use of embarked armed security teams. An internationally recognized Somali federal government was established in August 2012, and responsibility for strategic communication was transferred to it.
Piracy declined sharply in 2012 and 2013. EU NAVFOR and NATO counterpiracy operations off the Horn of Africa are set to expire by the end of 2016. State officials recognize that an updated Action Plan is needed and have provided input to the NSCS, but as of March 2014 they had not received guidance from the NSCS regarding any changes to counterpiracy plans or efforts. In commenting on a draft of this report, an NSCS official stated that a global action plan is being developed, with a separate annex focusing on the Horn of Africa, and was expected to be issued in the summer of 2014. Measures not established to assess counterpiracy efforts: In September 2010, we recommended that the NSCS include measures of effectiveness in the Action Plan to provide direction for counterpiracy activities and information that could be used in strategic and resource-based decisions. During the course of this review, State officials told us the key measures are the number of hostages and ships hijacked, but they have not established formal measures, and their decisions are generally guided by discussions rather than formal assessments. However, this information does not provide insight into which efforts are having the greatest effect in suppressing piracy. U.S. counterpiracy costs and benefits not fully tracked: In September 2010, we reported that the United States was not collecting information to determine the most cost-effective mix of counterpiracy activities. During the course of this review, we obtained information from agencies identifying some costs related to their counterpiracy efforts. For example, the costs of counterpiracy efforts incurred by DOD peaked in 2011 at approximately $275 million but declined to approximately $70 million in 2013. State tracks funds used to operate its counterpiracy and maritime security functions, as well as foreign assistance provided to African countries. However, most agencies do not systematically track the costs of counterpiracy efforts or activities because these efforts and activities typically fall under a broader maritime security category. Further, the Counter-Piracy Steering Group has not identified the benefits of the various counterpiracy activities relative to their costs and resources. Agency roles and responsibilities defined for some tasks: In September 2010, we reported that agencies had made less progress in implementing action items in the Action Plan that involved multiple agencies than those that were the responsibility of one specific agency. Since that time, U.S. agencies have defined roles and responsibilities for applying the Maritime Operational Threat Response (MOTR) process to piracy incidents involving U.S. interests; the MOTR Plan contains operational coordination requirements to ensure quick and decisive action to counter maritime threats. DOJ officials stated that the NSCS has also identified roles and responsibilities for transporting pirate suspects for prosecution. However, the NSCS has not established roles and responsibilities across all activities outlined in the Action Plan. In commenting on a draft of this report, an NSCS official stated that the Action Plan is being updated through a global action plan, with a separate annex focusing on the Horn of Africa, but did not indicate whether the plan would include all of the elements in our recommendations. We continue to believe our recommendations have merit and should be implemented.
While conditions affecting piracy have continued to evolve in the Horn of Africa since 2010, the 2008 Action Plan continues to guide U.S. efforts. Officials from each of the six agencies engaged in counterpiracy activities noted that current efforts are suppressing piracy off the Horn of Africa, but the results are tenuous and piracy could resurge without addressing its root causes. The Action Plan was developed at a time when U.S. policy focused on addressing problems in the absence of a functioning government in Somalia and without involving a U.S. presence in the country. With U.S. agencies and industry both having limited resources available for counterpiracy activities, we continue to believe that implementing our recommendations would be of value in understanding the costs and benefits and measuring the effectiveness of U.S. counterpiracy efforts. DOD, Coast Guard, DOJ, and State officials, as well as shipping industry officials, noted that the suppression of piracy has been based on a combination of government and industry counterpiracy activities, particularly the use of armed security teams on private vessels and the presence of naval patrols. However, U.S. agencies do not assess how industry practices and government resources could potentially offset each other’s roles and associated costs. As we concluded in September 2010, in an environment where government resource decisions directly affect costs incurred by the shipping industry and international partners, balancing risk reduction and benefits with costs should be emphasized. Piracy and maritime crime, primarily armed robbery at sea, oil theft, and kidnapping, remain a persistent problem that continues to contribute to instability in the Gulf of Guinea. According to ONI data, incidents of piracy and maritime crime in the Gulf of Guinea rose from nearly 60 in 2010 to over 100 in 2011, and totaled more than 110 in 2013, as shown in figure 7. According to these data, incidents in 2013 included 11 vessel hijackings and 32 kidnappings. According to officials from AFRICOM, ONI, State, and the IMO, this recent rise in piracy and maritime crime in the Gulf of Guinea is part of a long-standing, persistent problem in the region. For example, according to DOD officials, the Gulf of Guinea was the most active region in the world for piracy in 2007, prior to the rise in pirate activity off the Horn of Africa. According to the IMB, the number of vessel-reported incidents in the Gulf of Guinea from 2007 through 2009 is similar to that of 2011 through 2013. IMO officials added that, while the reported incidents indicate an ongoing, persistent problem, the number and frequency of incidents do not yet rise to the epidemic proportions that were seen off the Horn of Africa. According to the U.S. Strategy to Combat Transnational Organized Crime and information from the U.S. Energy Information Administration, as well as the UN Security Council, piracy and maritime crime pose a threat to regional commerce and stability in the Gulf of Guinea. For example, according to the U.S. Energy Information Administration, while Nigeria has the second-largest proven crude oil reserves in Africa, as of December 2013, exploration activity there was at its lowest level in a decade as a result of rising security problems related to oil theft, onshore pipeline sabotage, and piracy and maritime crime in the Gulf of Guinea, as well as other investment and government uncertainties.
Moreover, incident data since 2010 show that piracy is moving farther offshore, prompting concerns that these trends may continue. According to officials from AFRICOM, ONI, and State, and according to IMB data as shown in figure 2 of this report, Gulf of Guinea piracy and maritime crime prior to 2011 generally occurred in the coastal areas near Lagos or off the Niger Delta. However, recent attacks have taken place farther away from the waters off Nigeria, demonstrating a broader reach of pirates, as well as increasing the number of coastal states involved. For example, since 2011, several tanker hijackings were reported farther west than previously observed, off Togo and Cote d’Ivoire, according to ONI officials. Further, a July 2013 tanker hijacking off the coast of Gabon and a similar incident off Angola in January 2014 represent, as of March 2014, the southernmost occurrences in which vessels were hijacked and sailed to Nigeria to offload the stolen oil cargo. According to AFRICOM officials, the ability to conduct such hijackings, which involve difficult maneuvering of large vessels across swaths of open water while conducting oil bunkering operations, illustrates that these maritime criminals may be increasingly capable of complex and long-range operations. In the context of this report and data reported by ONI and IMB, kidnappings refer to those that have occurred or were reported to have occurred. According to ONI and AFRICOM officials, such incidents would include scenarios in which oil industry personnel or others were kidnapped from offshore supply vessels or platforms and held for ransom, such as the case of the two U.S. oil industry personnel taken from the C-Retriever in October 2013 off the coast of Nigeria. However, according to AFRICOM and Naval Forces–Africa officials, kidnappings conducted against the oil industry, including those perpetrated by Nigerian militants over the last decade, also include onshore kidnappings, or kidnappings within the inland waters and riverways of the Niger Delta. Onshore or inland kidnappings are generally not included in these data, and ONI officials said they take steps to validate the data they report. However, ONI officials told us that some self-reported or other data may unintentionally include such incidents. Officials from MARAD and State and officials from all eight of the shipping industry associations we interviewed expressed that the increasing prevalence of kidnappings is a cause for concern. According to AFRICOM officials, the objective of building partner capacity in the Gulf of Guinea, including strengthening maritime security, has long been part of U.S. military and diplomatic efforts in the region, even though the United States and international partners do not generally conduct naval patrols such as those conducted off the Horn of Africa. For example, AFRICOM has conducted training and other efforts to strengthen regional security, including combating piracy and maritime crime, since its creation in 2008. According to AFRICOM, State, and U.S. Coast Guard officials, while U.S. efforts in the Gulf of Guinea are informed by the region’s specific geopolitical context, they also include efforts aimed at improving the prevention, disruption, and prosecution of piracy and maritime crime. According to State and DOD officials, providing a permanent U.S.
or international interdiction presence in the region is impractical because foreign nations do not have the authority to conduct military operations in another sovereign nation’s territory and because limited naval resources are needed to address other strategic priorities. However, as in the Horn of Africa, a variety of U.S. efforts are under way to help prevent acts of piracy and maritime crime in the Gulf of Guinea, including in the following areas: Coordination of international activities and assistance: According to DOD and State officials, facilitating collaboration and avoiding duplication is important to U.S. and international partners. To help achieve this, and in recognition of increasing concern in the region, an ad hoc Group of Eight (G8) body, the G8++ Friends of the Gulf of Guinea, was established to conduct high-level coordination and discussion of international assistance efforts. Further, State and AFRICOM officials said that, as part of their planning process, AFRICOM holds planning conferences to solicit input from international partners, coordinate activities, and leverage resources. All U.S. officials we spoke with agreed that while the establishment of the Contact Group for the Horn of Africa was helpful in the absence of a functioning Somali government, in the case of the Gulf of Guinea, solutions must emerge from the region itself, and the role of the international community is to support and promote African-led initiatives. For example, the United States, through DOD and State, has supported and facilitated the efforts of the two relevant African economic communities—the Economic Community of West African States (ECOWAS) and the Economic Community of Central African States (ECCAS)—to develop and lead efforts to prevent and suppress piracy. In particular, according to AFRICOM and State officials, AFRICOM and the respective U.S. embassies supported the recent development of a code of conduct concerning the prevention of piracy, armed robbery, and other maritime crime, which was signed in June 2013 by leaders of the Gulf of Guinea coastal states. Security advisories for U.S. vessels and ship protection measures: MARAD provides security advisories to alert U.S. vessel operators worldwide, and in August 2008, MARAD issued a maritime advisory warning of piracy and criminal activity against oil industry and other vessels by Niger Delta militants in Nigerian territorial waters. Additionally, in August 2010, MARAD warned that vessels operating near oil platforms in Nigerian waters were at high risk of armed attacks and hostage taking, and advised vessels to act in accordance with Coast Guard directives on security plans and risk assessments. Further, in March 2012, shipping industry organizations, in coordination with NATO, issued interim guidelines for protection against piracy in the Gulf of Guinea as a companion to their August 2011 BMP version 4 for the Horn of Africa region. Most recently, in July 2013, the U.S. Coast Guard directed U.S. vessels to revise their ship security plans and protective measures in response to continued attacks and lessons learned from investigations of recent incidents, including hijacking tankers for oil theft, acts of robbery, and kidnapping for ransom of vessel masters and officers from offshore oil exploration support vessels.
Unlike off the coast of Somalia, where agreements authorize international forces, including the United States, to disrupt pirate attacks in territorial waters and dismantle pirate bases ashore, every Gulf of Guinea country possesses the sovereign right to control its maritime and land borders. Accordingly, the U.S. role and the majority of its efforts pertain to training, security assistance, and coordination, including the following activities: Bilateral equipment and training assistance to navies and coast guards: According to IMO, DOD, and State officials, the development of regional countries' naval capabilities is critical to successfully fighting piracy and maritime crime in the Gulf of Guinea. Further, DOD officials told us that regional navies have either nascent or insufficient national maritime forces to independently combat the crime that occurs off their coasts, let alone that which may occur farther out to sea. To increase the capabilities of regional maritime forces, State, in coordination with DOD, provides bilateral assistance and training to countries in the region. This includes approximately $8.5 million since 2010 in equipment and related training (e.g., vessels, engines, and maintenance training and parts) provided to countries in the greater Gulf of Guinea region to help build their maritime forces, according to State officials. Additionally, according to State budget documents, since 2010, State has used its Africa Maritime Security Initiative to provide regional maritime security training and support through DOD's Africa Partnership Station and requested $2 million for this effort in fiscal years 2013 and 2014. Training exercises to strengthen regional response capabilities: In addition to equipment and training to build countries' maritime forces, AFRICOM and its naval component, U.S. Naval Forces–Africa, provide multilateral training to improve regional maritime security operations capability, such as navy-to-navy exercises focused on maritime interdiction operations and response. For example, the annual Obangame Express exercise is a multi-country, multi-fleet exercise that implements various scenarios over several days. The exercise began as a proof of concept in 2010 with a limited number of countries and vessels; according to AFRICOM, the February 2013 iteration focused on information sharing and interoperability among 10 Gulf of Guinea countries, the ECCAS Combined Maritime Center, and ECOWAS, as well as the United States and four international partners. The exercises involve combating and responding to various scenarios, including oil bunkering, trafficking of illegal cargo, illegal fishing, and piracy, and AFRICOM officials stated that future exercises already have commitments of expanded international and regional participation. According to the Action Plan, facilitating the prosecution and detention of pirates off the Horn of Africa is a central element of U.S. efforts to combat piracy in the region. However, as previously noted, the majority of Gulf of Guinea maritime crimes occur within the territorial waters of one or more countries and, as a result, fall under those countries' legal jurisdiction. As such, the U.S. role in prosecuting suspected criminals, like its role in the prevention and disruption of attacks, is one of support and capacity building, such as the following efforts: Maritime law enforcement training and prosecution: According to DOD, State, and U.S.
Coast Guard officials, much of the training the United States provides to maritime law enforcement in the Gulf of Guinea is similar to that provided in the Horn of Africa and is used to combat a variety of crimes, such as narcotics trafficking, arms smuggling, human trafficking, and illegal fishing, as well as piracy. For instance, in West Africa, AFRICOM and the Coast Guard provide training, including visit, board, search, and seizure skills and mentorship, through the African Maritime Law Enforcement Partnership (AMLEP) program, which aims to strengthen countries' abilities to enforce their maritime laws. AMLEP targets illicit trafficking in drugs, arms, and humans, as well as counterpiracy issues and illegal fishing, and the program has resulted in the successful seizure and prosecution of illegal fishermen by African law enforcement officers in African waters, according to AFRICOM officials. Judicial capacity building: State's Bureau of International Narcotics and Law Enforcement Affairs (INL) has conducted a series of regional maritime criminal justice seminars. Specifically, INL and the Africa Center for Strategic Studies have hosted a series of Trans-Atlantic Maritime Criminal Justice Workshops, which provide an opportunity for regional law enforcement agencies to learn about maritime crime and related gaps in their judicial systems. This series included a June 2013 session for ECOWAS countries in Ghana, with other sessions held in February 2013 in Cape Verde and February 2014 in Benin. Additionally, according to State officials, in 2013 the agency began discussions with the G8++ Friends of the Gulf of Guinea to develop possible future U.S. programs to strengthen regional countries' capacity to investigate and prosecute cases of armed robbery at sea and piracy. According to DOD and State officials, U.S. efforts to combat piracy and maritime crime in the Gulf of Guinea are guided by the same overarching U.S. policies and security goals as the efforts to combat piracy off the Horn of Africa. These policies include the 2007 Policy for the Repression of Piracy and other Criminal Acts of Violence at Sea, the 2011 Strategy to Combat Transnational Organized Crime, the 2012 Strategy toward Sub-Saharan Africa, the 2005 National Strategy for Maritime Security, and the 2013 National Maritime Domain Awareness Plan. For example, the Strategy to Combat Transnational Organized Crime identifies East and West African maritime security as regional priorities, specifically noting incidents of Somali piracy and the oil theft and kidnapping of oil workers in the Gulf of Guinea. DOD and State officials emphasized that U.S. efforts are then developed in consideration of the particular contexts of each region. In the case of Somalia, the surge and intensity of the rising piracy problem, the specific nature of the crime, and the absence of a functioning government presented a crisis that warranted collective international action, as well as a U.S. plan to guide its contribution to this response. In contrast, State, DOD, and Coast Guard officials we spoke with explained that because the context of maritime crime in the Gulf of Guinea, and therefore the U.S. effort there, encompasses a broader set of geopolitical issues and maritime crimes, creating a piracy-focused plan similar to the Action Plan in the Horn of Africa may not be appropriate.
While there is no whole-of-government plan to guide maritime security efforts in the Gulf of Guinea, DOD, State, the Coast Guard, and others continue to expand and coordinate their maritime security activities there, which range from training for individual boarding teams to broad judicial sector reform. DOD and State officials told us that as the United States and international partners look to expand efforts in the Gulf of Guinea, coordinating activities to achieve the most effective mix and efficient use of resources is increasingly important. For example, officials from U.S. Naval Forces–Africa stated that occasional duplication of training activities can occur, particularly as international partners increase their attention to the region. However, according to officials from the U.S. government agencies working in the region, the NSCS has not directed them to conduct a collective assessment of efforts to combat piracy and maritime crime that weighs U.S. security interests, goals, and resources in the region against the various types of agency and international activities underway. Moreover, while individual agencies have analyzed incidents of piracy and maritime crime, such as armed robbery and kidnapping, in the region, there has been no coordinated interagency appraisal of how the variety of existing and planned activities addresses U.S. policy objectives in the context of such a broad set of maritime crimes, from illegal fishing and oil theft to arms trafficking and the kidnapping of U.S. citizens from offshore supply vessels. The National Maritime Domain Awareness Plan cites the importance of understanding new and emerging challenges in the maritime domain, developing solutions to address those challenges, and continuously reassessing them using risk management principles. Further, the Strategy to Combat Transnational Organized Crime outlines a specific set of U.S. priority actions to combat transnational criminal threats such as piracy and maritime crime, one of which is to increase research, data collection, and analysis to assess the scope and impact of such crime and the most effective means to combat it. Individual agencies may incorporate some assessment information into their planning and evaluation processes, but this information is specific to agencies and programs rather than the overall U.S. effort. For example, according to AFRICOM officials, the command uses available information, such as demographic surveys, to assess the operating environment and develop indicators to help measure program effectiveness, or it may assess a partner country's naval capabilities to inform program design. However, the chief of the AFRICOM assessments directorate said this assessment process is relatively new for the command, and there are no known interagency efforts to leverage this information into a broader assessment of U.S. maritime security or counterpiracy efforts. Additionally, according to State officials from the Bureau of Political-Military Affairs, while individual programs such as State's foreign military financing or other security assistance activities may conduct evaluations of their programs, these evaluations are not part of a broader assessment of State's regional maritime security activities.
Program guidance for other multi-agency international collaborative efforts—such as providing counternarcotics assistance to countries to disrupt drug production and trafficking—has also shown that assessing agencies' progress in meeting established goals can provide better information for decision making. Guidance for these efforts demonstrates how incorporating elements of a strategic approach, such as evaluating performance measures and setting performance targets, can provide oversight and guide management decisions about the allotment of program resources. If a multi-agency collaborative plan, such as the Action Plan, were developed for the efforts that address piracy and maritime crime in the Gulf of Guinea, including elements of a strategic approach could help determine the best use of resources to meet its objectives. An assessment that identifies the various U.S. and international efforts underway to strengthen maritime security, examines the relationship of these efforts to the nature and scope of the problem in the region, and considers the geopolitical environment and other regional factors could help strengthen ongoing efforts to combat maritime crime, as well as inform the appropriate mix of activities in order to use resources most effectively. Further, such an assessment could help determine whether additional actions, such as developing an action plan or other guidance, are needed to align U.S. interagency efforts to better achieve national security goals. In commenting on a draft of this report, an NSCS official stated that a global action plan is being developed, with a separate annex focusing on the Gulf of Guinea, but did not indicate the extent to which the plan was based on an assessment of ongoing activities or would include elements of a strategic approach. Since our September 2010 report on piracy off the Horn of Africa, the U.S. government—as part of an international partnership—has continued to take steps outlined in the Action Plan to counter piracy. In 2013, piracy declined steeply off the Horn of Africa, but the gains are tenuous, and piracy could easily resurge if the international coalition becomes complacent. Whether piracy incidents are rising or declining, it is important for the Action Plan to be updated to account for current circumstances. In addition, our current work indicates that the U.S. government has not implemented additional steps we recommended to identify measures of effectiveness, identify costs and benefits, and clarify agency roles and responsibilities. We are not making any new recommendations regarding the Action Plan for the Horn of Africa, but we continue to believe that our 2010 recommendations remain relevant to the changing conditions, and acting on them would assist the NSCS—and DOD and State as the co-chairs of the Counter-Piracy Steering Group—in better assessing, planning, and implementing actions to counter piracy as it continues to evolve, and would help ensure that recent progress is sustained. Meanwhile, piracy and maritime crime in the Gulf of Guinea have escalated and in 2013 surpassed the Horn of Africa in terms of incidents. The variety of U.S. efforts by multiple government agencies to combat piracy in the region highlights the importance of coordinating activities to achieve the most effective mix of resources.
Without a collective assessment of the scope and nature of the problem of piracy and maritime crime, particularly in the Gulf of Guinea where no such assessment has occurred, the United States may not be coordinating its efforts in the most effective or cost-efficient manner. An assessment of the various U.S. and international efforts, as well as of the geopolitical environment and other regional factors, could help determine what additional actions are needed to align all of the efforts underway. Furthermore, an assessment of whether, and to what extent, such actions—such as developing an action plan that includes elements of a strategic approach—are needed can guide decision making to address the evolving threat, coordinate resources and efforts, and prioritize maritime security activities in the Gulf of Guinea. To help ensure that efforts to counter piracy and maritime crime are coordinated and prioritized to effectively address the evolving threat, we recommend that the Assistant to the President for National Security Affairs, in collaboration with the Secretaries of Defense and State, work through the Counter-Piracy Steering Group or otherwise collaborate with the Secretaries of Homeland Security, Transportation, and the Treasury and the Attorney General to conduct an assessment of U.S. efforts to address piracy and maritime crime in the Gulf of Guinea to inform these efforts and determine whether additional actions to address counterpiracy and maritime security, such as developing an action plan that includes elements of a strategic approach, are needed to guide and coordinate activities. We provided a draft of this report to DOD, DHS, DOJ, State, DOT, Treasury, and the NSCS for review and comment. DHS, DOJ, DOT, and Treasury did not provide official comments on our draft report, and DOD and State deferred to the NSCS for comments on the recommendations. In an email dated June 12, 2014, the NSCS did not concur or nonconcur with our recommendations but provided information related to its current counterpiracy efforts. Specifically, the NSCS stated that it is coordinating with departments and agencies through the interagency process to develop a global action plan for countering piracy, with separate annexes focusing on the Horn of Africa and the Gulf of Guinea. The updated plan will provide guidance to the federal government in three core areas: preventing attacks, responding to acts of maritime crime, and enhancing maritime security and governance. According to the NSCS, the plan will be forthcoming in the summer of 2014, and the executive branch will continue to evaluate maritime crime around the world and develop or refine guidance to account for evolving conditions in specific regions. We are encouraged by the steps the NSCS is taking to provide the federal agencies responsible for counterpiracy activities with an updated plan, but it is not clear to what extent the plan will include previously recommended elements of a strategic approach. The description of the plan appears to provide a needed update to the Action Plan given the changes in conditions off the Horn of Africa. The updated plan also appears to be responsive to part of our recommendation to consider additional actions, such as developing a similar plan for the Gulf of Guinea.
However, the description of the plan does not address the extent to which it will include elements such as an assessment of costs and benefits, measures of effectiveness to evaluate counterpiracy efforts, and defined roles and responsibilities for the agencies involved in carrying out counterpiracy activities. Further, the description does not address the extent to which the updated plan is based on an assessment of ongoing counterpiracy activities in the Gulf of Guinea. We will review the updated plan once it is released and will continue to monitor the NSCS's progress in planning and providing guidance for counterpiracy activities, as well as DOD's and State's progress in implementing the plan as co-chairs of the Counter-Piracy Steering Group. DOD, DHS, and DOJ provided technical comments on a draft of this report, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Assistant to the President for National Security Affairs; the Attorney General; the Secretaries of Defense, Homeland Security, State, Transportation, and the Treasury; and other interested parties. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact either Stephen L. Caldwell at (202) 512-9610 or CaldwellS@gao.gov or Chris P. Currie at (404) 679-1875 or CurrieC@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report (1) assesses how piracy off the Horn of Africa has changed since 2010 and describes U.S. efforts to assess its counterpiracy actions, given any changing conditions, and (2) identifies trends in piracy and maritime crime in the Gulf of Guinea and U.S. efforts to address them and evaluates the extent to which the United States has assessed its counterpiracy efforts in the Gulf of Guinea. To assess how piracy off the Horn of Africa has changed since 2010, we analyzed data from the International Chamber of Commerce's International Maritime Bureau (IMB) and the U.S. Office of Naval Intelligence (ONI) on reported piracy incidents, hostages taken, and ransoms paid off the Horn of Africa from 2008 through 2013. We discussed data-collection methods, processes for data entry, and the steps taken to ensure the reliability of the data with both IMB and ONI officials. We collected information from both IMB and ONI on their processes for quality control and data verification and on how potential errors are identified and corrected. We also discussed variation between IMB and ONI data with officials from ONI and other Department of Defense (DOD) organizations, the Department of State (State), and IMB, who attributed the variation to differences in the categorization of incidents, validation of sources, and geographic scope. Officials stated that while values in the ONI and IMB data may differ, IMB is a generally accepted data source for tracking global piracy incidents and suitably reflects general historical trends. We determined the data to be sufficiently reliable for the purposes of describing the context, trends, and scope of pirate attacks off the Horn of Africa in this report.
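To illustrate the kind of cross-source consistency check described above, the sketch below compares two sets of yearly incident counts to see whether they move in the same direction year over year. This is a minimal, hypothetical Python example: the function names are ours, and all counts are placeholders except the IMB figures of 219 incidents in 2010 and 15 in 2013 cited elsewhere in this report; it is not the analysis GAO actually performed.

```python
# Minimal sketch: cross-checking reported piracy incident counts from two
# sources (e.g., IMB and ONI) to see whether they reflect the same general
# historical trend. Counts are illustrative placeholders except where noted.

def year_over_year_changes(series: dict[int, int]) -> dict[int, int]:
    """Return the change in incident count from each year to the next."""
    years = sorted(series)
    return {y2: series[y2] - series[y1] for y1, y2 in zip(years, years[1:])}

def sign(x: int) -> int:
    """-1 for a decline, 0 for no change, +1 for an increase."""
    return (x > 0) - (x < 0)

def trend_agreement(a: dict[int, int], b: dict[int, int]) -> float:
    """Share of common year-to-year intervals in which both sources move in
    the same direction (both rising, both falling, or both flat)."""
    da, db = year_over_year_changes(a), year_over_year_changes(b)
    common = sorted(set(da) & set(db))
    matches = sum(1 for y in common if sign(da[y]) == sign(db[y]))
    return matches / len(common) if common else float("nan")

# Source A: the 2010 and 2013 values match IMB figures cited in this report
# (219 and 15); the intermediate years are invented for illustration.
imb = {2010: 219, 2011: 160, 2012: 49, 2013: 15}
# Source B: hypothetical ONI-style counts that differ in level but not trend.
oni = {2010: 200, 2011: 170, 2012: 60, 2013: 20}

print(f"Direction agreement: {trend_agreement(imb, oni):.0%}")  # 100%
```

A high direction agreement, despite differing absolute counts, is consistent with the officials' view that the two sources suitably reflect the same general trends.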
In addition, we met with U.S. agency officials, international partners, and representatives from insurance, shipping, and private security industry associations to discuss their involvement in counterpiracy activities, such as developing best practices for protecting ships from pirate attack, working with the international Contact Group on Piracy off the Coast of Somalia, and participating in naval patrols off the Horn of Africa. We met with officials from shipping industry associations that represent owners and operators of over 80 percent of the world's merchant fleet and provide a unified industry voice in the creation of industry policy and strategy; insurance industry associations whose members cover approximately 90 percent of the world's ocean-going tonnage; and a private security industry association that has over 180 members across 35 countries. While the statements of these industry officials cannot be generalized to the entire industries they represent, their perspectives provide valuable insight since each is actively involved in international collaborative efforts to combat piracy. To determine the extent to which the United States has assessed its counterpiracy actions as outlined in the 2008 Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan), we reviewed the Action Plan, the 2005 National Strategy for Maritime Security, the 2007 Policy for the Repression of Piracy and other Criminal Acts of Violence at Sea, relevant U.S. policies and laws, and United Nations Security Council resolutions. We also reviewed program documents, including briefings and meeting summaries, and interviewed officials from DOD, State, and the Departments of Homeland Security (DHS), Justice (DOJ), Transportation (DOT), and the Treasury, including components such as U.S. Naval Forces–Central Command, the U.S. Coast Guard, and the Federal Bureau of Investigation (FBI), to discuss implementation of the Action Plan and the status of our 2010 recommendations to improve the plan. We selected these departments and agencies because the Action Plan states they shall contribute to, coordinate, and undertake initiatives in accordance with the plan. To identify trends in piracy and maritime crime in the Gulf of Guinea, we analyzed IMB data on actual and attempted piracy incidents from 2007 through 2013 and ONI data from 2010 through 2013. As with the IMB and ONI data pertaining to the Horn of Africa, we collected information on quality control, verification, and safeguards against error; discussed the reliability of the data with officials from IMB and ONI and with State officials involved in maritime security initiatives in the Gulf of Guinea; and determined the data to be sufficiently reliable for the purposes of this report. Because ONI data on the Gulf of Guinea are unavailable prior to 2010, we present the IMB data from 2007 through 2013 alongside the ONI data to show trends over a broader period. In addition to these data, we reviewed publicly available reports and documents regarding maritime security and piracy in the Gulf of Guinea from the United Nations Office on Drugs and Crime, the EU, and other multilateral and nongovernmental organizations. To evaluate U.S. efforts to address piracy and maritime crime in the Gulf of Guinea, as well as the extent to which the United States has assessed the need for a strategic approach for the region, we reviewed relevant U.S.
and international policies and laws, such as the 2005 National Strategy for Maritime Security, the 2007 Policy for the Repression of Piracy and other Criminal Acts of Violence at Sea, the 2008 Action Plan, the 2012 Strategy toward Sub-Saharan Africa, and United Nations Security Council resolutions pertaining to the Gulf of Guinea. We also compared agency efforts with U.S. policy priorities and requirements for conducting assessments outlined in the 2011 Strategy to Combat Transnational Organized Crime and the 2013 National Maritime Domain Awareness Plan, documents that guide U.S. maritime security efforts, including in the Gulf of Guinea. In the course of our work, we interviewed officials from the following U.S., international, and industry organizations:
Department of Defense: Office of the General Counsel; Office of the Under Secretary of Defense for Policy; Office of the Deputy Assistant Secretary of Defense for Counter-Narcotics and Global Threats; and the Joint Staff J5 Strategic Plans and Policy Directorate
Department of the Navy, including the Naval Criminal Investigative Service and the Office of Naval Intelligence
U.S. Africa Command (Germany) and its components U.S. Naval Forces–Africa and Combined Joint Task Force–Horn of Africa
U.S. Central Command and its component U.S. Naval Forces–Central Command (Bahrain)
U.S. Coast Guard offices, including the National Defense Strategy Division; Maritime Security (Counterterrorism) Division; Office of International Affairs and Foreign Policy; Office of Commercial Vessel Compliance; Office of Budget and Programs; and the Intelligence Coordination Center
U.S. Coast Guard representatives at other agencies, including U.S. Africa Command (Germany) and Patrol Forces Southwest Asia (Bahrain)
European Union Naval Forces (United Kingdom)
Combined Maritime Forces (Bahrain)
Shared Awareness and Deconfliction Meeting (observed in Bahrain)
North Atlantic Treaty Organization (NATO) and the NATO Shipping Centre (United Kingdom)
International Maritime Organization (United Kingdom)
United Kingdom Foreign & Commonwealth Office (United Kingdom)
Baltic and International Maritime Council (BIMCO)
Center for Strategic and International Studies
Chamber of Shipping of America
International Association of Dry Cargo Shipowners (INTERCARGO)
International Association of Independent Tanker Owners (INTERTANKO)
International Chamber of Shipping
International Group of P&I Clubs
International Maritime Bureau
International Transport Workers' Federation (ITF)
Japanese Shipowners' Association
Lloyd's Market Association
Maersk Line, Limited
Oceans Beyond Piracy
Oil Companies International Marine Forum (OCIMF)
Royal Institute of International Affairs (Chatham House)
Security Association for the Maritime Industry (SAMI)
Society of International Gas Tanker and Terminal Operators Limited (SIGTTO)
We conducted this performance audit from June 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In September 2010, we assessed the counterpiracy efforts of the U.S. government against the lines of action identified in the Countering Piracy off the Horn of Africa: Partnership and Action Plan (Action Plan). These lines of action continue to guide U.S. efforts off the Horn of Africa.
Table 3 summarizes our assessments from our September 2010 report and also provides updated information for each action since that time. During the course of our review, the Department of Defense provided information on the costs of its counterpiracy efforts, as shown in table 4. In addition to the contacts above, Dawn Hoff, Assistant Director; Suzanne Wren, Assistant Director; Jason Bair; Charles Bausell; Jennifer Cheung; Toni Gillich; Eric Hauswirth; Kevin Heinz; Stan Kostyla; Landis Lindsey; Tom Lombardi; John Mingus; Jessica Orr; Matt Spiers; and Sally Williamson made key contributions to this report.
Defense Headquarters: DOD Needs to Periodically Review and Improve Visibility of Combatant Commands' Resources. GAO-13-293. Washington, D.C.: May 15, 2013.
Building Partner Capacity: Key Practices to Effectively Manage Department of Defense Efforts to Promote Security Cooperation. GAO-13-335T. Washington, D.C.: February 14, 2013.
Maritime Security: Progress Made, but Further Actions Needed to Secure the Maritime Energy Supply. GAO-11-883T. Washington, D.C.: August 24, 2011.
Intelligence, Surveillance, and Reconnaissance: DOD Needs a Strategic, Risk-Based Approach to Enhance Its Maritime Domain Awareness. GAO-11-621. Washington, D.C.: June 20, 2011.
Maritime Security: Updating U.S. Counterpiracy Action Plan Gains Urgency as Piracy Escalates off the Horn of Africa. GAO-11-449T. Washington, D.C.: March 15, 2011.
Maritime Security: Actions Needed to Assess and Update Plan and Enhance Collaboration among Partners Involved in Countering Piracy off the Horn of Africa. GAO-10-856. Washington, D.C.: September 24, 2010.
Defense Management: Improved Planning, Training, and Interagency Collaboration Could Strengthen DOD's Efforts in Africa. GAO-10-794. Washington, D.C.: July 28, 2010.
Defense Management: DOD Needs to Determine the Future of Its Horn of Africa Task Force. GAO-10-504. Washington, D.C.: April 15, 2010.
Maritime Security: National Strategy and Supporting Plans Were Generally Well-Developed and Are Being Implemented. GAO-08-672. Washington, D.C.: June 20, 2008.
Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: December 10, 2007.
Piracy and maritime crime continue to threaten ships off the Horn of Africa on Africa's east coast and in the Gulf of Guinea off Africa's west coast, putting seafarers in harm's way and costing governments and industry billions of dollars in ransom, insurance, and protective measures. The types and causes of piracy and maritime crime, as well as the African states' ability to address the problem, differ between the two regions. To help U.S. agencies coordinate efforts, the NSCS developed an interagency plan in 2008 to prevent, disrupt, and prosecute piracy off the Horn of Africa in collaboration with industry and international partners. GAO was asked to evaluate U.S. counterpiracy activities. This report (1) assesses how piracy off the Horn of Africa has changed since our 2010 review and describes U.S. efforts to assess its counterpiracy actions, given any changing conditions; and (2) identifies trends in piracy and maritime crime in the Gulf of Guinea and U.S. efforts to address them, and evaluates the extent to which the United States has assessed its counterpiracy efforts in the Gulf of Guinea. GAO reviewed plans, activities, and data from 2007 through 2013 and interviewed officials from U.S. agencies, international partners, and industry, selected as a nongeneralizable sample for their involvement in counterpiracy activities. Piracy incidents off the Horn of Africa near Somalia have declined sharply since 2010, but U.S. agencies have not assessed their counterpiracy efforts as GAO recommended in 2010. According to the International Maritime Bureau (IMB), reported piracy incidents declined from 219 in 2010 to 15 in 2013. Similarly, from 2010 to 2013, the number of hostages taken by pirates declined from 1,016 to 34. Also, a World Bank report stated that total ransoms paid had declined by 2012. Officials participating in counterpiracy activities from the Departments of Defense and State, among others, as well as shipping industry officials and international partners, attribute the decline to a combination of prevention, disruption, and prosecution activities. However, officials cautioned that this progress is tenuous, and discontinuing these efforts could allow piracy to resurge. Despite changing conditions, U.S. agencies have not systematically assessed the costs and benefits of their counterpiracy efforts. Agency officials stated that their decisions and actions are guided by discussions rather than formal assessments. GAO has previously noted that assessments of risk and effectiveness in an interagency environment can strengthen strategies and resource usage. As such, GAO's prior recommendations remain valid and could help U.S. agencies identify the most cost-effective mix of efforts and prioritize activities as they respond to changing conditions and fiscal pressures while avoiding a resurgence in piracy. Off the west coast of Africa, piracy and maritime crime have been a persistent problem in the Gulf of Guinea, as shown in the figure below. Although the United States has interagency and international efforts underway with African states to strengthen maritime security, it has not assessed its efforts or the need for a collective plan to address the evolving problem in the region. The U.S. role in addressing piracy in the Gulf of Guinea has focused on prevention, disruption, and prosecution, through training and assistance to African coastal states. However, according to U.S.
agencies working in the region, the National Security Council Staff (NSCS) has not directed them to collectively assess their efforts to address piracy and maritime crime. An assessment of agencies' Gulf of Guinea efforts could strengthen their approach by informing the appropriate mix of activities to achieve the most effective use of limited resources, as well as help determine whether additional actions are needed.
[Figure: Reported Incidents of Piracy and Maritime Crime, 2008 through 2013]
GAO recommends that the NSCS, with the Secretaries of Defense and State, collaborate with the involved agencies to assess their efforts and to determine whether additional actions are needed to guide efforts in the Gulf of Guinea. The NSCS did not concur or nonconcur with GAO's recommendations but provided an update on its planning activities.
The Results Act requires that strategic plans include six broad elements—mission statements, general goals and objectives, strategies for achieving goals, a description of the relationship between general goals and annual performance goals, key external factors, and a description of the actual use and planned use of program evaluations. When we reviewed the draft plans that 27 agencies provided to Congress for consultation, we found that all but six of the plans were missing at least one required element and that about a third were missing two of the six required elements. In addition, just over a fourth of the plans failed to cover at least three of the required elements. Moreover, we found that many of the elements that the plans included contained weaknesses—some that were more significant than others. We noted in our September report that complete strategic plans were crucial if they were to serve as a basis for guiding agencies’ operations and help congressional and other policymakers make decisions about activities and programs. On the basis of our preliminary reviews of major agencies’ September plans, it appears that, on the whole, the agencies made a concerted effort during August and September to improve their plans. For example, all of the September plans we reviewed contained at least some discussion of each element required by the Act. And, in many cases, those elements that had been included in the draft plans for consultation were substantially improved. This improvement is in large part a reflection of the dialogue that occurred between the agencies and Congress and is therefore also a reflection of the value of the Results Act requirement for such consultations. These plans appear to provide a workable foundation for the next phase of the Results Act’s implementation—annual performance planning and measurement. As Congress and agencies build on the strategic planning and other Results Act efforts undertaken thus far, our work suggests that several critical issues will have to be addressed if the Results Act is to succeed in improving the management of federal agencies. Among these critical issues are the need to (1) clearly establish a strategic direction for agencies by improving goal-setting and performance measurement; (2) improve the management of crosscutting program efforts by ensuring that those programs are appropriately coordinated; and (3) ensure that agencies have the data systems and analytic capacity in place to better assess program results and costs, improve management and performance, and establish accountability. The forthcoming annual performance planning and measurement and performance-reporting phases of the Results Act provide important opportunities to address these long-standing management issues. It appears that agencies generally have taken the first steps toward establishing a strategic direction in their September plans, which should be useful to agencies as they move to the next phase of performance-based management—that is, performance planning and measurement. However, the strategic plans are still very much works in progress, and agencies will likely need to revisit their strategic planning efforts as they develop the forthcoming annual performance plans. As agencies develop those plans, they will need to ensure that goals and strategies are appropriate given the current fiscal environment and that goal-setting and performance measurement efforts form the basis for managing program products, services, and daily activities. 
We found that agencies need to continue to make progress in refining goals and objectives to better specify the results that they intend to achieve. For example, the Department of Health and Human Services has made progress over the last few months in developing objectives that, for the most part, are results-oriented and measurable. However, ensuring that goals are as results-oriented as they can be and are expressed in a manner that enables a subsequent assessment of whether the goals were achieved is a continuing challenge for agencies and Congress. As an agency develops its performance plan, which is to contain the annual goals it will use to track progress toward its longer term strategic goals, it likely will identify opportunities to revise and clarify those strategic goals in order to provide a better grounding for the direction of the agency. In addition, as an agency seeks to further refine its goals, it also will need to ensure that it can articulate linkages between strategies, programs, and initiatives to achieve those goals. We noted some improvements in the September plans; however, we found that those plans did not always establish clear linkages between goals, objectives, and strategies. The annual performance plans represent the next chance for agencies to establish such linkages so that agency managers and Congress will be better able to judge whether an agency is making annual progress toward achieving strategic goals. Thus, as agencies and Congress begin to implement annual performance planning, it will be particularly important to reinforce linkages among goals and activities. Specifically, our work has shown that the successful implementation of performance-based management as envisioned by the Results Act will require agencies to link the goals and performance measures of each organizational level to successive levels and ultimately to the strategic plan’s long-term goals so that the strategic goals and objectives drive the agencies’ day-to-day activities. Therefore, agencies’ annual performance plans will be most useful if the annual goals contained in those plans show clear and direct relationships in two directions—to the goals in the strategic plans and to operations and activities within the agency. Concerning the plans’ discussions of agencies’ operations and activities, in some cases, the September strategic plans improved on the draft plans and now provide a better basis for understanding how the agency plans to accomplish many of its goals. For example, the plan for the Department of Energy (DOE) contains a section on resource requirements that provides a helpful discussion of the money, staff, workforce skills, and facilities that the agency plans to employ to meet its goals. The plan explains that DOE’s strategies for its goal of supporting national security are to include changes in the skills of its workforce and in the construction of new experimental test facilities. On the whole, however, agencies’ consideration of the resources necessary to achieve goals is one particular area where continuing improvement efforts are needed. The annual performance planning process offers an opportunity for substantial progress in this area. While some of the plans we reviewed contain separate sections on resources, including financial and human resources, the sections sometimes lack a discussion of information, capital, and other resources that are critical to achieving goals. 
For example, few plans discuss physical capital resources, such as facilities and equipment. Although many agencies may not rely heavily on physical capital resources, some of those that do, such as the General Services Administration (GSA) and the National Park Service, a component of the Department of the Interior, appear to provide relatively little focused discussion on their capital needs and usage. Another area that is critical to agencies striving to improve operations is information technology. The government's track record in employing information technology to improve operations and address mission-critical problems is poor, and the strategic plans we reviewed often contain only limited discussions of technology issues. For example, GSA's plan does not explicitly discuss major management problems or identify which problems could have an adverse impact on the agency's meeting its goals and objectives. The plan does not address, for instance, how GSA plans to ensure that its information systems meet computer security requirements. The lack of such a discussion in the GSA and other plans is of particular concern because without it agencies cannot be certain that they are (1) addressing the federal government's information technology problems and (2) better ensuring that technology acquisition and use are targeted squarely on program results. Linking performance goals to the federal government's budget and appropriations processes is another area where establishing clear linkages will be especially important as agencies and Congress move to implementation of the annual performance planning and measurement phase of the Results Act. Unlike previous federal initiatives, the Results Act requires agencies to plan and measure performance using the same structures that form the basis for their budget requests. This critical design element is meant to ensure a simple, straightforward link among plans, budgets, and performance information and the related congressional oversight and resource allocation processes. However, the extent to which existing budget structures are suitable for Results Act purposes will likely vary widely and therefore will require coordinated and recurring attention by Congress and the agencies. A focus on results, as envisioned by the Results Act, implies that federal programs that contribute to the same or similar results should be closely coordinated to ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. This suggests that federal agencies are to look beyond their organizational boundaries and coordinate with other agencies to ensure that their efforts are aligned. We have found that uncoordinated program efforts can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. During the summer of 1996, in reviewing early strategic planning efforts, OMB alerted agencies that augmented interagency coordination was needed at that time to ensure consistency among goals in crosscutting program areas. It appears that the agencies did not consistently follow OMB's advice because the draft strategic plans we reviewed this summer often lacked evidence that agencies in crosscutting program areas had worked with other agencies to ensure goals were consistent, strategies coordinated, and, as appropriate, performance measures similar. Since then, however, the agencies appear to have begun the necessary coordination.
Some September plans, for example, contained references to other agencies that shared responsibilities in a crosscutting program area or discussed the need to coordinate their programs with other agencies. For example, the September plan of the Environmental Protection Agency (EPA) contains an appendix that lists the federal agencies with which EPA coordinated. This appendix describes the major steps in the coordination process and lists, by strategic goal, the agencies with which greater integration and review of efforts will be needed. Similarly, the Department of Transportation's plan contains a table that shows the contributions of other federal agencies to each of its major mission areas. Agencies will need to build on such references with continued coordination to ensure that shared responsibilities are being effectively managed. The next phases of the Results Act implementation continue to offer a structured framework to address crosscutting issues. For example, the Act's emphasis on results-based performance measures as part of the annual performance planning process should lead to more explicit discussions concerning the contributions and accomplishments of crosscutting programs and encourage related programs to develop common performance measures. As agencies work with OMB to develop their annual performance plans, they can consider the extent to which agency goals are complementary and the need for common performance measures to allow for cross-agency evaluations. The Results Act's requirement that OMB prepare a governmentwide performance plan that is based on the agencies' annual performance plans also can be used to facilitate the identification of program overlap, duplication, and fragmentation. If agencies and OMB use the annual planning process to highlight crosscutting program issues, the individual agency performance plans and the governmentwide performance plan should provide Congress with the information needed to identify agencies and programs addressing similar missions. Once these programs are identified, Congress can consider the associated policy, management, and performance implications of crosscutting program issues. This information should also help identify the performance and cost consequences of program fragmentation and the implications of alternative policy and service delivery options. These options, in turn, can lead to decisions concerning department and agency missions and the allocation of resources among those missions. Program evaluations can provide important information about why a program did or did not succeed as planned and suggest ways to improve it. We have cited a 1994 survey that reported on the widespread absence of a program evaluation capacity within the federal government. Therefore, it is not surprising that agencies did not consistently discuss in their September plans how they intended to use program evaluations to help develop those plans. However, of greater concern, many agencies also did not discuss how they planned to use evaluations in the future to assess progress and did not offer a schedule for future evaluations as envisioned by the Results Act. The National Science Foundation's September plan contains a noteworthy exception to this trend. The plan discusses how the agency used evaluations to develop key investment strategies, action plans, and its annual performance plan. It also discusses future evaluations and provides a general schedule for their implementation.
The absence of sound program performance and cost data and the capacity to use those data to improve performance are among the major barriers to the effective implementation of the Results Act. Efforts under the CFO Act have shown that most agencies are still years away from generating reliable, useful, relevant, and timely financial information, which is urgently needed to make our government fiscally responsible. The widespread lack of available program performance information is equally troubling. For example, we surveyed managers in the largest federal agencies and found that fewer than one-third of those managers reported that results-oriented performance measures existed to a great or very great extent for their programs. We also have reported on the difficulties that agencies were experiencing as a result of their reliance on outside parties for performance information. Agencies are required under the Results Act to describe in their annual performance plans how they will verify and validate the performance information that will be collected. This section of the performance plan can provide important contextual information for Congress and agencies. For example, this section can provide an agency with the opportunity to alert Congress to the problems the agency has had or anticipates having in collecting needed results-oriented performance information and the cost and data quality trade-offs associated with various collection strategies. The discussion in this section can also provide Congress with a mechanism for examining whether the agency currently has the data to confidently set performance improvement targets and will later have the ability to report on its performance. More broadly, continuing efforts to implement the CFO Act also are central for ensuring that agencies resolve their long-standing problems in generating vital information for decisionmakers. In that regard, the Federal Accounting Standards Advisory Board (FASAB) has developed a new set of accounting concepts and standards that underpin OMB's guidance to agencies on the form and content of their agencywide financial statements. As part of that effort, FASAB developed managerial cost accounting standards that were to be effective for fiscal year 1997. However, because of serious agency shortfalls in cost accounting systems, the CFO Council—an interagency council of the CFOs of the major agencies—requested an additional 2 years before the standard would become effective. FASAB recommended extending the date by 1 year, to fiscal year 1998, with a clear expectation that there would be no further delays. The FASAB cost accounting standards promise to improve decisionmaking if successfully implemented. These standards are to provide decisionmakers with information on the costs of all resources used and the costs of services provided by others to support activities or programs. Such information would allow for comparisons of costs to various levels of program performance. Over the longer term, the program performance information that agencies are to generate under the Results Act should be a valuable new resource for Congress to use in its program authorization, oversight, budget, and appropriation responsibilities. To be most useful in these various contexts, that information needs to be consolidated with budget data and critical financial and program cost data, which agencies are to produce and have audited under the CFO Act.
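To make the cost-to-performance comparison concrete, here is a minimal, hypothetical sketch of the kind of unit-cost analysis that consolidated cost and performance data would support; the program names, costs, and result counts are invented for illustration.

```python
# Minimal sketch: comparing the full cost of resources used with a results
# measure to derive unit costs across programs. All figures are hypothetical.

programs = {
    # program name: (full cost of resources used, results achieved)
    "Program A": (4_500_000, 18_000),
    "Program B": (2_100_000, 6_000),
}

for name, (cost, results) in programs.items():
    print(f"{name}: ${cost / results:,.2f} per result")
# Program A: $250.00 per result
# Program B: $350.00 per result
```

Even this simple comparison depends on reliable cost accounting and verified performance counts, which is why the FASAB standards and the Results Act's verification requirements are complementary.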
Accountability reports—building on CFO Act requirements—are to bring together program performance information with audited financial information to provide congressional and other decisionmakers with a more complete picture of the results, operational effectiveness, and costs of agencies' operations. For the first time, decisionmakers are to be provided with annual "report cards" on the costs, management, and effectiveness of federal agencies. In summary, Mr. Chairman, because of the progress that agencies have made in developing their strategic plans over the last several months, these plans generally should provide a workable foundation for the agencies' continuing efforts to move to a more performance-based form of management. Much of this progress appears to have been the result of the active participation of Members and congressional staff in consulting on those plans. While difficult implementation challenges remain, by taking advantage of the consultation process, Congress and the agencies established the basis for continued progress in implementing the Results Act and ensuring that efforts under the Act provide the information that agency and congressional decisionmakers need to improve the management of the federal government. The Results Act establishes an iterative process for performance-based management, with the foundation being the agency's strategic plan. The next step—the annual performance plans—offers the opportunity for Congress and the agencies to continue to clarify goals and ensure that proper strategies are in place to achieve those goals. Agencies' annual plans and the governmentwide performance plan prepared by the President can form the basis for agency and congressional decisionmaking about the best way to manage crosscutting program efforts. Finally, the annual plans, and later accountability reports, provide mechanisms for highlighting and addressing issues centering on the collection and analysis of program performance and cost information. We look forward to continuing to support Congress' efforts to improve the management of the federal government. Over the last few years, we have issued a number of products on the key steps and practices needed to improve the management of the federal government. These key steps and practices are based on best practices in private and public sector organizations. For example, last May we issued a guide for congressional staff to use as they assessed the strategic plans that agencies provided as part of the consultations required by the Results Act. In the coming months, we will issue a companion guide for reviewing annual performance plans. We also will continue to examine the effectiveness of agencies' efforts under the Results Act and will plan work on other issues associated with the implementation of the Act. This concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Committee may have.
GAO discussed (1) the strategic plans that agencies submitted to Congress and the Office of Management and Budget in September; and (2) how Congress and the agencies can build on those plans to more effectively implement the Government Performance and Results Act. GAO noted that: (1) because of the progress that agencies have made in developing their strategic plans over the last several months, these plans generally should provide a workable foundation for the agencies' continuing efforts to move to a more performance-based form of management; (2) much of this progress appears to have been the result of the active participation of members and congressional staff in consulting on those plans; (3) while difficult implementation challenges remain, by taking advantage of the consultation process, Congress and the agencies established the basis for continued progress in implementing the Results Act and ensuring that efforts under the act provide the information that agency and congressional decisionmakers need to improve the management of the federal government; (4) the Results Act establishes an iterative process for performance-based management, with the foundation being the agency's strategic plan; (5) the next step--annual performance plans--offers the opportunity for Congress and the agencies to continue to clarify goals and ensure that proper strategies are in place to achieve those goals; (6) agencies' annual plans and the governmentwide performance plan prepared by the President can form the basis for agency and congressional decisionmaking about the best way to manage crosscutting program efforts; and (7) the annual plans, and later accountability reports, provide mechanisms for highlighting and addressing issues centering on the collection and analysis of program performance and cost information.
As computer technology has advanced, federal agencies have become dependent on computerized information systems to carry out their operations and to process, maintain, and report essential information. Virtually all federal operations are supported by computer systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions, deliver services to the public, and account for their resources without these cyber assets. Information security is thus especially important for federal agencies to ensure the confidentiality, integrity, and availability of their systems and data. Conversely, ineffective information security controls can result in significant risk to a broad array of government operations and assets, as the following examples illustrate:
Computer resources could be used for unauthorized purposes or to launch attacks on other computer systems.
Sensitive information, such as personally identifiable information, intellectual property, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of identity theft, espionage, or other types of crime.
Critical operations, such as those supporting critical infrastructure, national defense, and emergency services, could be disrupted.
Data could be added, modified, or deleted for purposes of fraud, subterfuge, or disruption.
Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations. For example, in February 2009, the Director of National Intelligence testified that foreign nations and criminals have targeted government and private sector networks to gain a competitive advantage and potentially disrupt or destroy them, and that terrorist groups have expressed a desire to use cyber attacks as a means to target the United States. The growing connectivity between information systems, the Internet, and other infrastructures creates opportunities for attackers to disrupt telecommunications, electrical power, and other critical infrastructures. As government, private sector, and personal activities continue to move to networked operations, digital systems add ever more capabilities, wireless systems become more ubiquitous, and the design, manufacture, and service of information technology move overseas, the threat will continue to grow. Federal law and policy establish DHS as the focal point for efforts to protect our nation's computer-reliant critical infrastructures—a practice known as cyber critical infrastructure protection, or cyber CIP. In this capacity, the department has multiple cybersecurity-related roles and responsibilities. In 2005, we identified, and reported on, 13 key cybersecurity responsibilities. They include, among others, (1) developing a comprehensive national plan for CIP, including cybersecurity; (2) developing partnerships and coordinating with other federal agencies, state and local governments, and the private sector; (3) developing and enhancing national cyber analysis and warning capabilities; (4) providing and coordinating incident response and recovery planning, including conducting incident response exercises; and (5) identifying, assessing, and supporting efforts to reduce cyber threats and vulnerabilities, including those associated with infrastructure control systems.
Within DHS, the National Protection and Programs Directorate has primary responsibility for assuring the security, resiliency, and reliability of the nation's cyber and communications infrastructure. DHS is also responsible for securing its own computer networks, systems, and information. FISMA requires the department to develop and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency. Within DHS, the Chief Information Officer is responsible for ensuring departmental compliance with federal information security requirements. FISMA tasks NIST, a component within the Department of Commerce, with responsibility for developing standards and guidelines, including minimum requirements, for (1) information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of the agency and (2) providing adequate information security for all agency operations and assets, except for national security systems. The act specifically required NIST to develop, for systems other than national security systems, (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security, according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST also is required to develop a definition of and guidelines for the detection and handling of information security incidents, as well as guidelines, developed in conjunction with the Department of Defense and the National Security Agency, for identifying an information system as a national security system. Within NIST, the Computer Security Division of the Information Technology Laboratory is responsible for developing information security-related standards and guidelines. FISMA also requires NIST to take other actions, including:

- conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security;
- developing and periodically revising performance indicators and measures for agency information security policies and practices;
- evaluating private sector information security policies and practices and commercially available information technologies, to assess their potential application by agencies to strengthen information security; and
- assisting the private sector in using and applying the results of its activities required by FISMA.

In addition, the Cyber Security Research and Development Act required NIST to develop checklists to minimize the security risks for each hardware or software system that is, or is likely to become, widely used within the federal government. FISMA also requires the Office of Management and Budget (OMB) to develop policies, principles, standards, and guidelines on information security and to report annually to Congress on agency compliance with the requirements of the act. OMB has provided instructions to federal agencies and their inspectors general for preparing annual FISMA reports.
These instructions focus on metrics related to the performance of key control activities, such as developing a complete inventory of major information systems, providing security training to personnel, testing and evaluating security controls, testing contingency plans, and certifying and accrediting systems. FISMA reporting provides valuable information on the status and progress of agency efforts to implement effective security management programs. Because the threats to federal information systems and critical infrastructure have persisted and grown, in January 2008 President Bush began to implement a series of initiatives, commonly referred to as the Comprehensive National Cybersecurity Initiative, aimed primarily at improving DHS's and other federal agencies' efforts to protect against intrusion attempts and anticipate future threats. Subsequently, in February 2009, President Obama directed the National Security Council and Homeland Security Council to conduct a comprehensive review to assess the United States' cybersecurity-related policies and structures. The resulting report, "Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure," recommended, among other things, appointing an official in the White House to coordinate the nation's cybersecurity policies and activities, creating a new national cybersecurity strategy, and developing a framework for cyber research and development. In addition, we testified in March 2009 that a panel of experts identified 12 key areas of the national cybersecurity strategy requiring improvement, such as developing a national strategy that clearly articulates strategic objectives, goals, and priorities; bolstering the public/private partnership; and placing a greater emphasis on cybersecurity research and development. We have reported since 2005 that DHS has yet to comprehensively satisfy its key responsibilities for protecting computer-reliant critical infrastructures. Our reports included about 90 recommendations that we summarized into key areas, including those listed in table 1, that are essential for DHS to address in order to fully implement its responsibilities. DHS has since developed and implemented certain capabilities to satisfy aspects of its responsibilities, but the department still has not fully implemented our recommendations, and thus further action needs to be taken to address these areas. In July 2008, we identified that cyber analysis and warning capabilities included (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to the threat. These four capabilities comprise 15 key attributes, including establishing a baseline understanding of the nation's critical network assets and integrating analysis work into predictive analyses of broader implications or potential future attacks. We concluded that while DHS's United States Computer Emergency Readiness Team (US-CERT) demonstrated aspects of each of the key attributes, it did not fully incorporate all of them. For example, as part of its monitoring, US-CERT obtained information from numerous external information sources; however, it had not established a baseline of the nation's critical network assets and operations.
In addition, while it investigated whether identified anomalies constituted actual cyber threats or attacks as part of its analysis, it did not integrate its work into predictive analyses of broader implications or potential future attacks, nor did it have the analytical or technical resources to analyze multiple, simultaneous cyber incidents. The organization also provided warnings by developing and distributing a wide array of attack and other notifications; however, these notifications were not consistently actionable or timely, that is, they did not always provide the right information to the right persons or groups as early as possible to give them time to take appropriate action. Further, while the team responded to a limited number of affected entities in its efforts to contain and mitigate an attack, recover from damages, and remediate vulnerabilities, it did not possess the resources to handle multiple events across the nation. We also concluded that without fully implementing the key attributes, US-CERT did not have the full complement of cyber analysis and warning capabilities essential to effectively perform its national mission. As a result, we made 10 recommendations to the department to address shortfalls associated with the 15 attributes in order to fully establish a national cyber analysis and warning capability. DHS concurred and agreed to implement 9 of our 10 recommendations. In a September 2007 report and October 2007 testimony, we reported that DHS was sponsoring multiple control systems security initiatives, including an effort to improve control systems cybersecurity using vulnerability evaluation and response tools. However, DHS had not established a strategy to coordinate the various control systems activities across federal agencies and the private sector, and it did not effectively share information on control system vulnerabilities with the public and private sectors. Accordingly, we recommended that DHS develop a strategy to guide efforts for securing control systems and establish a rapid and secure process for sharing sensitive control system vulnerability information. In response, DHS recently began developing a strategy and a process to share sensitive information. We reported, and later testified, in 2006 that the department had begun a variety of initiatives to fulfill its responsibility for developing an integrated public/private plan for Internet recovery in case of a major disruption. However, we determined that these efforts were not comprehensive or complete. As such, we recommended that DHS implement nine actions to improve the department's ability to facilitate public/private efforts to recover the Internet. In October 2007, we testified that the department had made progress in implementing our recommendations; however, seven of the nine had not been completed. For example, it had revised key plans in coordination with private industry infrastructure stakeholders, coordinated various Internet recovery-related activities, and addressed key challenges to Internet recovery planning. However, it had not, among other things, finalized recovery plans or defined the interdependencies among DHS's various working groups and initiatives. In other words, it had not completed an integrated public/private plan for Internet recovery. As a result, we concluded that the nation lacked direction from the department on how to respond in such a contingency.
We also noted that these incomplete efforts indicated that DHS and the nation were not fully prepared to respond to a major Internet disruption. To date, an integrated public/private plan for Internet recovery does not exist. In June 2008, we reported on the status of DHS's efforts to establish an integrated operations center that it agreed to adopt per recommendations from a DHS-commissioned expert task force. We determined that while DHS had taken the first step toward integrating two operations centers (the National Coordination Center Watch and US-CERT), it had yet to implement the remaining steps, complete a strategic plan, or develop specific tasks and milestones for completing the integration. We concluded that until the two centers were fully integrated, DHS was at risk of being unable to efficiently plan for and respond to disruptions to communications infrastructure and the data and applications that travel on this infrastructure, increasing the probability that communications will be unavailable or limited in times of need. As a result, we recommended that the department complete its strategic plan and define tasks and milestones for completing the remaining integration steps so that the department is better prepared to provide an integrated response to disruptions to the communications infrastructure. DHS concurred with our first recommendation and stated that it would address the second recommendation as part of finalizing its strategic plan. In September 2008, we reported on a major DHS-coordinated cyber attack exercise called Cyber Storm, which occurred in 2006 and included large-scale simulations of multiple concurrent attacks involving the federal government, states, foreign governments, and private industry. We determined that DHS had identified eight lessons learned from this exercise, such as the need to improve interagency coordination groups and the exercise program. We also concluded that while DHS had demonstrated progress in addressing the lessons learned, more needed to be done. Specifically, while the department completed 42 of the 66 activities identified to address the lessons learned, it identified 16 activities as ongoing and 7 as planned for the future. In addition, DHS provided no timetable for the completion dates of the ongoing activities. We noted that until DHS scheduled and completed its remaining activities, it was at risk of conducting subsequent exercises that repeated the lessons learned during the first exercise. Consequently, we recommended that DHS schedule and complete the identified corrective activities so that its cyber exercises can help both public and private sector participants coordinate their responses to significant cyber incidents. DHS agreed with the recommendation. To date, DHS has continued to make progress in completing some identified activities but has yet to do so for others. In 2007, we reported and testified on the cybersecurity aspects of CIP plans for 17 critical infrastructure sectors, referred to as sector-specific plans. Lead federal agencies, referred to as sector-specific agencies, are responsible for coordinating critical infrastructure protection efforts with the public and private stakeholders in their respective sectors. DHS guidance requires each of the sector-specific agencies to develop plans that address how the sectors' stakeholders would implement the national plan and how they would improve the security of their assets, systems, networks, and functions.
We determined that none of the plans fully addressed the 30 key cybersecurity-related criteria described in DHS guidance. Further, while several sectors' plans fully addressed many of the criteria, others were less comprehensive. The plans also varied both in the extent to which they covered aspects of cybersecurity and in the extent to which certain criteria were addressed. Consequently, we recommended that DHS request that the sector-specific agencies fully address all cyber-related criteria by September 2008 so that stakeholders within the infrastructure sectors will effectively identify, prioritize, and protect the cyber aspects of their CIP efforts. We are currently reviewing the progress made in the sector-specific plans. We testified in March 2009 regarding the need to bolster public/private partnerships associated with cyber CIP. According to panel members, there are not adequate economic and other incentives (i.e., a value proposition) for greater investment in and partnering with owners and operators of critical cyber assets and functions. Accordingly, panelists stated that the federal government should provide valued services (such as offering useful threat analysis and warning information) or incentives (such as grants or tax reductions) to encourage action by and effective partnerships with the private sector. They also suggested that public and private sector entities use means such as cost-benefit analyses to ensure the efficient use of limited cybersecurity-related resources. We are also currently initiating a review of the status of the public/private partnerships in cyber CIP. Besides weaknesses relating to external cybersecurity responsibilities, DHS had not secured its own information systems. In July 2007, we reported that DHS systems supporting the US-VISIT program were riddled with significant information security control weaknesses that placed sensitive information, including personally identifiable information, at increased risk of unauthorized and possibly undetected disclosure and modification, misuse, and destruction, and placed program operations at increased risk of disruption. Weaknesses existed in all control areas and computing device types reviewed. For example, DHS had not implemented controls to effectively prevent, limit, and detect access to computer networks, systems, and information. To illustrate, it had not (1) adequately identified and authenticated users in systems supporting US-VISIT, (2) sufficiently limited access to US-VISIT information and information systems, and (3) ensured that controls adequately protected external and internal network boundaries. In addition, it had not always ensured that responsibilities for systems development and system production were sufficiently segregated, and it had not consistently maintained secure configurations on the application servers and workstations at a key data center and ports of entry. As a result, intruders, as well as government and contractor employees, could potentially bypass or disable computer access controls and undertake a wide variety of inappropriate or malicious acts. These acts could include tampering with data; browsing sensitive information; using computer resources for inappropriate purposes, such as launching attacks on other organizations; and disrupting or disabling computer-supported operations. According to the department, it has started remediation activities to strengthen security over these systems and implement our recommendations.
In January 2009, we briefed congressional staff on security weaknesses associated with the development of systems supporting the Transportation Security Administration's (TSA) Secure Flight program. Specifically, TSA had not taken sufficient steps to ensure that operational safeguards and substantial security measures were fully implemented to minimize the risk that the systems would be vulnerable to abuse and unauthorized access from hackers and other intruders. For example, TSA had not completed testing and evaluating key security controls, performed disaster recovery tests, or corrected high- and moderate-risk vulnerabilities. Accordingly, we recommended that TSA take steps to complete security testing, mitigate known vulnerabilities, and update key security documentation prior to initial operations. TSA subsequently undertook a number of actions to complete these activities. In May 2009, we concluded that TSA had generally met its requirements related to systems information security and satisfied our recommendations. NIST has taken steps to address its FISMA-mandated responsibilities by developing a suite of required security standards and guidelines as well as other publications that are intended to assist agencies in developing and implementing information security programs and effectively managing risks to agency operations and assets. In addition to developing specific standards and guidelines, NIST developed a set of activities to help agencies manage a risk-based approach for an effective information security program. These activities are known as the NIST Risk Management Framework. Several special publications support this framework and collectively provide guidance that agencies can apply to their information security programs for selecting the appropriate security controls for information systems, including the minimum controls necessary to protect individuals and the operations and assets of the organization. NIST has developed and issued the following documents to meet its FISMA-mandated responsibilities:

Federal Information Processing Standards Publication 199, Standards for Security Categorization of Federal Information and Information Systems, February 2004. This standard addresses NIST's requirement for developing standards for categorizing information and information systems. It requires agencies to categorize their information systems as low-impact, moderate-impact, or high-impact for the security objectives of confidentiality, integrity, and availability (a sketch of this categorization logic appears after this list). The security categories are based on the harm or potential impact to an organization should certain events occur that jeopardize the information and information systems needed by the organization to accomplish its assigned mission, protect its assets, fulfill its legal responsibilities, maintain its day-to-day functions, and protect individuals. Security categories are to be used in conjunction with vulnerability and threat information in assessing the risk to an organization.

Special Publication 800-60, revision 1, Volume I: Guide for Mapping Types of Information and Information Systems to Security Categories, August 2008. This guide is to assist federal government agencies in categorizing information and information systems. It is intended to help agencies consistently map security impact levels to types of (1) information (e.g., privacy, medical, proprietary, financial, investigation) and (2) information systems (e.g., mission critical, mission support, administrative).
Furthermore, it is intended to facilitate application of appropriate levels of information security according to a range of levels of impact or consequences that might result from the unauthorized disclosure, modification, or use of the information or information system.

Federal Information Processing Standards Publication 200, Minimum Security Requirements for Federal Information and Information Systems, March 2006. This is the second of the mandatory security standards; it specifies minimum security requirements for information and information systems supporting the executive agencies of the federal government and a risk-based process for selecting the security controls necessary to satisfy the minimum security requirements. Specifically, this standard specifies minimum security requirements for federal information and information systems in 17 security-related areas. Federal agencies are required to meet the minimum security requirements through the use of the security controls in accordance with NIST Special Publication 800-53.

Special Publication 800-61, revision 1, Computer Security Incident Handling Guide, March 2008. This publication is intended to assist organizations in establishing computer security incident response capabilities and handling incidents efficiently and effectively. It provides guidelines for organizing a computer security incident response capability; handling incidents from initial preparation through the post-incident lessons-learned phase; and handling specific types of incidents, such as denial of service, malicious code, unauthorized access, and inappropriate usage.

Special Publication 800-59, Guideline for Identifying an Information System as a National Security System, August 2003. The purpose of this guide is to assist agencies in determining which, if any, of their systems are national security systems as defined by FISMA and are to be governed by applicable requirements for such systems.

Special Publication 800-55, revision 1, Performance Measurement Guide for Information Security, July 2008. The purpose of this guide is to assist in the development, selection, and implementation of measures to be used at the information system and program levels. These measures indicate the effectiveness of security controls applied to information systems and supporting information security programs.

Special Publication 800-30, Risk Management Guide for Information Technology Systems, July 2002. This guide provides a foundation for the development of an effective risk management program, containing both the definitions and the practical guidance necessary for assessing and mitigating risks identified within IT systems. It also provides information on the selection of cost-effective security controls that can be used to mitigate risk for the better protection of mission-critical information and the IT systems that process, store, and carry this information.

Special Publication 800-18, revision 1, Guide for Developing Security Plans for Federal Information Systems, February 2006. This guide provides basic information on how to prepare a system security plan; it is designed to be adaptable to a variety of organizational structures and to be used as a reference by those with assigned responsibility for activities related to security planning.
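To make the FIPS 199 categorization described above concrete, the following Python sketch computes a system's security category as the highest impact level for each security objective across the system's information types, with the overall system impact level taken as the high-water mark across the three objectives (the aggregation rule FIPS 200 applies). This is a minimal illustration, not a definitive implementation; the information types shown are hypothetical.

```python
from enum import IntEnum

class Impact(IntEnum):
    LOW = 1
    MODERATE = 2
    HIGH = 3

def categorize_system(info_types):
    """For each security objective, take the highest impact across all
    information types on the system; the overall system impact level is
    the high-water mark across the three objectives."""
    objectives = ("confidentiality", "integrity", "availability")
    category = {obj: max(t[obj] for t in info_types) for obj in objectives}
    category["system"] = max(category.values())  # high-water mark
    return category

# Hypothetical information types, for illustration only.
types = [
    {"confidentiality": Impact.MODERATE, "integrity": Impact.LOW,      "availability": Impact.LOW},
    {"confidentiality": Impact.HIGH,     "integrity": Impact.MODERATE, "availability": Impact.LOW},
]
print(categorize_system(types))  # system impact level: HIGH
```

Under this rule, a single high-impact information type is enough to drive the whole system to the high-impact baseline of controls, which is why SP 800-60's consistent mapping of information types matters in practice.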
NIST is also in the process of developing, updating, and revising a number of special publications related to information security, including the following:

Special Publication 800-37, revision 1, Guide for Security Authorization of Federal Information Systems, August 2008. This publication is intended to, among other things, support the development of a common security authorization process for federal information systems. According to NIST, the new security authorization process shifts the traditional focus away from stovepiped, organization-centric, static approaches and provides the capability to more effectively manage information system-related security risks in highly dynamic environments of complex and sophisticated cyber threats, ever-increasing system vulnerabilities, and rapidly changing missions. The process is designed to be tightly integrated into enterprise architectures and ongoing system development life cycle processes, promote the concept of near real-time risk management, and capitalize on current and previous investments in technology, including automated support tools.

Special Publication 800-39, second public draft, Managing Risk from Information Systems: An Organizational Perspective, April 2008. The purpose of this publication is to provide guidelines for managing risk to organizational operations and assets, individuals, other organizations, and the nation resulting from the operation and use of information systems. According to NIST, the risk management concepts described in the publication are intentionally broad-based, with the specific details of assessing risk and employing appropriate risk mitigation strategies provided by supporting NIST security standards and guidelines.

Special Publication 800-53, revision 3, Recommended Security Controls for Federal Information Systems and Organizations, June 2009. This publication has been updated from previous versions to include a standardized set of management, operational, and technical controls intended to provide a common specification language for information security for federal information systems processing, storing, and transmitting both national security and non-national security information.

Draft IR-7502, The Common Configuration Scoring System (CCSS): Metrics for Software Security Configuration Vulnerabilities. This publication defines proposed measures for the severity of software security configuration issues and provides equations that can be used to combine the measures into severity scores for each configuration issue.

In addition, NIST has other ongoing and planned activities that are intended to enhance information security programs, processes, and controls. For example, it is supporting the development of a program for credentialing public and private sector organizations to provide security assessment services for federal agencies. To support implementation of the credentialing program and aid security assessments, NIST is participating or will participate in the following initiatives:

- Training, which includes development of training courses, NIST publication quick start guides, and frequently asked questions to establish a common understanding of the standards and guidelines supporting the NIST Risk Management Framework.
- Product and Services Assurance Assessment, which includes defining criteria and guidelines for evaluating products and services used in the implementation of controls outlined in NIST SP 800-53.
- Support Tools, which includes identifying or developing common protocols, programs, reference materials, checklists, and technical guides supporting implementation and assessment of SP 800-53-based security controls in information systems.
- Mapping, which includes identifying common relationships and mappings of FISMA standards, guidelines, and requirements with International Organization for Standardization (ISO) standards for information security management, quality management, and laboratory testing and accreditation.

These planned efforts also include implementing a program for validating security tools. NIST collaborated with a broad constituency, federal and nonfederal, to develop documents to assist information security professionals. For example, NIST worked with the Office of the Director of National Intelligence, the Department of Defense, and the Committee on National Security Systems to develop a common process for authorizing federal information systems for operation. This resulted in a major revision to NIST Special Publication 800-37, currently issued as an initial public draft. NIST also collaborated with these organizations on Special Publication 800-53 and Special Publication 800-53A to provide guidelines for selecting and specifying security controls for federal government information systems and to help agencies develop plans and procedures for assessing the effectiveness of these controls. NIST also worked with DHS to incorporate guidance on safeguards and countermeasures for federal industrial control systems in Special Publication 800-53. In addition, NIST is working with public and private sector entities to establish specific mappings and relationships between the security standards and guidelines developed by NIST and the ISO and International Electrotechnical Commission Information Security Management System standard. For example, the latest draft of Special Publication 800-53 introduces a three-part strategy for harmonizing the FISMA security standards and guidelines with international security standards, including an updated mapping table for security controls. NIST also undertook other information security activities, including developing Federal Desktop Core Configuration checklists and continuing a program of outreach and awareness through organizations such as the Federal Computer Security Program Managers' Forum and the Federal Information Systems Security Educators' Association. Through NIST's efforts, agencies have access to additional tools and guidance that can be applied to their information security programs. Despite federal agencies reporting increased compliance in implementing key information security control activities for fiscal year 2008, opportunities exist to improve the metrics used in annual reporting. The information security metrics developed by OMB focus on compliance with information security requirements and the implementation of key control activities. OMB requires federal agencies to report on key information security control activities as part of the FISMA-mandated annual report on federal information security.
To facilitate the collection and reporting of information from federal agencies, OMB developed a suite of information security metrics, including the following:

- percentage of employees and contractors receiving security awareness training,
- percentage of employees with significant security responsibilities receiving specialized security training,
- percentage of systems tested and evaluated annually,
- percentage of systems with tested contingency plans,
- percentage of agencies with complete inventories of major systems, and
- percentage of systems certified and accredited.

In May 2009, we testified that federal agencies generally reported increased compliance in implementing most of the key information security control activities for fiscal year 2008, as illustrated in figure 1. However, reviews at 24 major federal agencies continue to highlight deficiencies in their implementation of information security policies and procedures. For example, in their fiscal year 2008 performance and accountability reports, 20 of 24 major agencies noted that their information system controls over their financial systems and information were either a material weakness or a significant deficiency. In addition, 23 of the 24 agencies did not have adequate controls in place to ensure that only authorized individuals could access or manipulate data on their systems and networks. We also reported that agencies did not consistently (1) identify and authenticate users to prevent unauthorized access; (2) enforce the principle of least privilege to ensure that authorized access was necessary and appropriate; (3) establish sufficient boundary protection mechanisms; (4) apply encryption to protect sensitive data on networks and portable devices; and (5) log, audit, and monitor security-relevant events. Furthermore, those agencies also had weaknesses in their agencywide information security programs. An underlying reason for this apparent dichotomy of increased compliance with security requirements and continued deficiencies in security controls is that the metrics defined by OMB and used for annual information security reporting generally do not measure the effectiveness of the controls and processes that are key to implementing an agencywide security program. Results of our prior and ongoing work indicated, for example, that annual reporting did not always provide information on the quality or effectiveness of the processes agencies use to implement information security controls. Providing information on the effectiveness of controls and processes could further enhance the usefulness of the data for management and oversight of agency information security programs.
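The compliance-oriented nature of these metrics can be seen in a short sketch. The Python fragment below (field names and data are hypothetical) computes OMB-style percentages from an agency's system and personnel records; note that each metric records only whether an activity occurred, not how well it was performed, which is the limitation discussed above.

```python
def fisma_metrics(systems, employees):
    """Compute OMB-style FISMA compliance percentages. Each input record
    carries booleans indicating whether a control activity occurred."""
    def pct(hits, total):
        return round(100.0 * hits / total, 1) if total else 0.0
    return {
        "awareness_training":   pct(sum(e["awareness_trained"] for e in employees), len(employees)),
        "controls_tested":      pct(sum(s["controls_tested"] for s in systems), len(systems)),
        "contingency_tested":   pct(sum(s["contingency_tested"] for s in systems), len(systems)),
        "certified_accredited": pct(sum(s["certified"] for s in systems), len(systems)),
    }

# Hypothetical agency data, for illustration only.
systems = [
    {"controls_tested": True, "contingency_tested": True,  "certified": True},
    {"controls_tested": True, "contingency_tested": False, "certified": True},
]
employees = [{"awareness_trained": True}, {"awareness_trained": True},
             {"awareness_trained": False}]
print(fisma_metrics(systems, employees))
# e.g., {'awareness_training': 66.7, 'controls_tested': 100.0, ...}
```

A system whose controls were tested superficially scores the same here as one tested rigorously, which is precisely the gap between reported compliance and actual control effectiveness.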
NIST has developed a significant number of standards and guidelines for information security and continues to assist organizations in implementing security controls over their systems and information. While NIST's role is to develop guidance, it remains the responsibility of federal agencies to effectively implement and sustain sufficient security over their systems. Developing and using metrics that measure how well agencies implement security controls can contribute to an increased focus on the effective implementation of federal information security. Chairman Wu, this concludes my statement. I would be happy to answer questions at the appropriate time. If you have any questions regarding this report, please contact Gregory C. Wilshusen, Director, Information Security Issues, at (202) 512-6244 or by e-mail at wilshuseng@gao.gov. Other key contributors to this report include Michael Gilmore (Assistant Director), Charles Vrabel (Assistant Director), Bradley Becker, Larry Crosland, Lee McCracken, and Jayne Wilson.
Federal laws and policy have assigned important roles and responsibilities to the Department of Homeland Security (DHS) and the National Institute of Standards and Technology (NIST) for securing computer networks and systems. DHS is charged with coordinating the protection of computer-reliant critical infrastructure--much of which is owned by the private sector--and securing its own computer systems, while NIST is responsible for developing standards and guidelines for implementing security controls over information and information systems. GAO was asked to describe cybersecurity efforts at DHS and NIST--including partnership activities with the private sector--and the use of cybersecurity performance metrics in the federal government. To do so, GAO relied on its reports on federal information security and federal efforts to fulfill national cybersecurity responsibilities. Since 2005, GAO has reported that DHS has yet to comprehensively satisfy its key cybersecurity responsibilities, including those related to establishing effective partnerships with the private sector. Shortcomings exist in key areas that are essential for DHS to address in order to fully implement its cybersecurity responsibilities. DHS has since developed and implemented certain capabilities, but still has not fully satisfied aspects of these responsibilities and needs to take further action to enhance the public/private partnerships needed to adequately protect cyber critical infrastructure. GAO has also previously reported on significant security weaknesses in systems supporting two of the department's programs, one that tracks foreign nationals entering and exiting the United States, and one that matches airline passenger information against terrorist watch-list records. DHS has corrected information security weaknesses for systems supporting the terrorist watch-list, but needs to take additional actions to mitigate vulnerabilities associated with systems tracking foreign nationals. NIST plays a key role in providing important information security standards and guidance. Pursuant to its responsibilities under the Federal Information Security Management Act (FISMA), NIST has developed standards specifying minimum security requirements for federal information and information systems and has provided corresponding guidance that details the controls necessary for securing those systems. It has also been working with both public and private sector entities to enhance information security requirements. The resulting guidance and tools provided by NIST serve as important resources for federal agencies that can be applied to information security programs. As GAO recently testified in May, opportunities exist to improve the metrics used to assess agency information security programs. According to the performance metrics established by the Office of Management and Budget (OMB), agencies reported increased compliance in implementing key information security control activities. However, GAO and agency inspectors general continue to report significant weaknesses in controls. This dichotomy exists in part because the OMB-defined metrics generally do not measure how well controls are implemented. As a result, reported metrics may provide an incomplete picture of an agency's information security program.
Inventory management and oversight for the Air Force is a shared responsibility between the Offices of the Secretary of Defense and the Secretary of the Air Force. The Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for developing and ensuring the uniform implementation of DOD inventory management policies throughout the department, monitoring the overall effectiveness and efficiency of the DOD logistics system, and continually developing improvements. The Secretary of the Air Force is responsible for implementing DOD inventory policies and procedures. The Air Force Materiel Command has issued a manual to its air logistics centers—Ogden Air Logistics Center, Oklahoma City Air Logistics Center, and Warner Robins Air Logistics Center—that prescribes guidance and procedural instructions for computing requirements for its secondary inventory. To assist in the management of its inventory, DOD summarizes its secondary inventory in its annual Supply System Inventory Report. This report is based on financial inventory and other inventory reports prepared by the military services and DLA. The report summarizes inventories by DOD component and inventory category. Over the past 4 years, DOD has reported a continuous increase in the value of its secondary item inventory in its Supply System Inventory Report. As of September 30, 2002, DOD reported that its secondary inventory was valued at about $67.0 billion; by September 30, 2005, the value of this inventory had increased to about $79.6 billion, a $12.6 billion increase between 2002 and 2005. Table 1 shows the value of DOD's on-hand inventory from fiscal year 2002 through fiscal year 2005 and the value and percentage of the inventory held by the Air Force. From fiscal year 2002 through fiscal year 2005, the Air Force's total on-hand inventory increased by $1.2 billion, representing about 10 percent of the total $12.6 billion increase in DOD inventory during this period. This increase was primarily due to the addition of new items to the Air Force's inventory in fiscal year 2005. Specifically, from September 30, 2002, through September 30, 2005, the Air Force added 2,331 new unique items with a total of about 179,425 individual parts that were valued at approximately $1.3 billion. Our analysis shows that increases in the Air Force's inventory were also caused by changes in the value and quantity of the unique items in the inventory. We found that changes in the price of items in the Air Force's secondary inventory resulted in a $0.8 billion increase in the value of its inventory in fiscal year 2005. Similarly, changes in the quantity of unique secondary inventory items that were on hand in fiscal year 2002 accounted for a $0.7 billion increase in the value of the Air Force's secondary inventory in fiscal year 2005. These increases were partially offset by a decrease of $1.6 billion in the value of the Air Force's inventory for items that were included in fiscal year 2002 but were not included in the inventory for fiscal year 2005. The Air Force uses a process called requirements determination to calculate the amount of inventory that needs to be held in storage (on hand) and the amount that should be purchased (on order). This information is used to develop the Air Force's budget stratification report. The stratification report shows the amount of inventory needed to meet operating requirements.
When the total of on-hand and on-order inventory falls to or below a certain level, called the reorder point, inventory managers place orders for additional inventory to prevent out-of-stock situations from occurring. The Air Force refers to its inventory managers as item management specialists. Generally, item management specialists order the amount of inventory needed to satisfy the reorder point requirement. Depending on the item, the reorder point may include requirements for one or more of the following (a computational sketch follows this list):

- war reserves that are authorized to be purchased,
- customer-requisitioned materiel that has not been shipped (also known as due-outs or backorders),
- a safety level to be on hand in case of minor interruptions in the resupply process or unpredictable fluctuations in demand,
- minimum quantities of essential items for which demand is not normally predicted (also referred to as numeric stockage objective or insurance items),
- inventory to satisfy demands while broken items are being repaired (also referred to as repair cycle stock),
- inventory to satisfy demands during the period between when the need to replenish an item through a purchase is identified and when a contract is awarded (also referred to as administrative lead time), and
- inventory to satisfy demands during the period between when a contract for inventory is awarded and when the inventory is received (also referred to as production lead time).

We define the Air Force's current year's operating requirements as requirements for war reserves, stock due-outs (backorders), safety levels, numeric stockage objective (a form of safety stock), and repair cycle. Hereafter, these requirements will be referred to as on-hand requirements. On-hand inventory is used to satisfy these on-hand requirements. On-order inventory is the amount of inventory for which contracts have been awarded or funds have been committed by the Air Force to satisfy any shortfall to its on-hand requirements and its administrative and production lead time requirements. Hereafter, these requirements will be referred to as on-order requirements. When there is not enough inventory to meet on-hand and on-order requirements, this is defined as an inventory shortage. More than half of the Air Force's on-order and on-hand secondary inventory, worth an average of $31.4 billion, was not needed to support its requirements from fiscal years 2002 through 2005, although increases in demand have contributed to a slight reduction in the percentage of this on-hand inventory and a reduction in the number of years of supply this inventory represents. Our analysis shows that the value and the percentage of the Air Force's inventory not needed to support its on-order requirements increased by about $0.3 billion and 7.8 percent, respectively, representing an average of 52 percent of its on-order inventory. Additionally, we found that the percentage of the Air Force's inventory not needed to support its on-hand requirements was reduced by 2.7 percent, due, in part, to increases in the demand for the items. However, this unneeded inventory represents an average of about 65 percent (about $18.7 billion) of the value of its on-hand inventory. While increasing demands have resulted in the Air Force reducing the number of years of supply this inventory represents, 79 percent of the Air Force's inventory items not needed to support requirements had no recurring demands at all, resulting in a potentially infinite supply of those items.
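As a hedged illustration of the requirements determination arithmetic just described, the Python sketch below sums the reorder point components named above and classifies an item's inventory position. The component names and quantities are hypothetical, and the sketch simplifies the Air Force's actual quarterly computation; it is meant only to show how a shortage or a surplus ("unneeded" inventory, in the report's terms) falls out of the same comparison.

```python
from dataclasses import dataclass

@dataclass
class ItemPosition:
    # On-hand requirement components named in the text above
    war_reserves: int
    due_outs: int            # backorders
    safety_level: int
    numeric_stockage: int    # insurance items
    repair_cycle: int
    # On-order requirement components (lead-time demand)
    admin_lead_time: int
    production_lead_time: int
    # Current position
    on_hand: int
    on_order: int

def reorder_point(item: ItemPosition) -> int:
    """Sum of the on-hand and on-order requirement components."""
    return (item.war_reserves + item.due_outs + item.safety_level
            + item.numeric_stockage + item.repair_cycle
            + item.admin_lead_time + item.production_lead_time)

def position(item: ItemPosition) -> dict:
    """Order more when available inventory falls to or below the reorder
    point; a deficit is a shortage, a surplus is inventory not needed to
    support requirements."""
    rop = reorder_point(item)
    available = item.on_hand + item.on_order
    return {"reorder_point": rop,
            "place_order": available <= rop,
            "shortage": max(rop - available, 0),
            "unneeded": max(available - rop, 0)}

# Hypothetical quantities, for illustration only.
item = ItemPosition(war_reserves=5, due_outs=2, safety_level=10,
                    numeric_stockage=0, repair_cycle=8,
                    admin_lead_time=6, production_lead_time=12,
                    on_hand=30, on_order=25)
print(position(item))
# {'reorder_point': 43, 'place_order': False, 'shortage': 0, 'unneeded': 12}
```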
We found that the Air Force’s secondary inventory not needed to support on-order and on-hand requirements can be attributed to many of the long-standing and systemic inventory management problems that we have identified in our prior reports in 1997 and 2000, such as decreasing demands or demands not materializing at all, retaining items used to support aging weapon systems that have diminishing sources of supply or are being phased out of service, retaining items that may be used to support new weapon systems, and not terminating eligible contracts for on-order items. Based on our analyses, we found that the Air Force experienced an increase in the amount and percentage of on-order inventory not needed to support its on-order requirements from the end of fiscal year 2002 through the end of fiscal year 2005. The value and percentage of the Air Force’s unneeded on-order inventory increased by about $0.3 billion and 7.8 percent, respectively. Although DOD’s supply chain management regulation provides guidance for developing materiel requirements based on customer expectations while minimizing inventories, over the 4-year period an average of 52 percent ($1.3 billion) of the Air Force’s on-order inventory was not needed. Examples of unneeded on-order inventory include jet engines, landing gear components, electrical and communication equipment, guided missile components, aircraft hydraulic and de-icing system components, and other aircraft components. This $1.3 billion in on-order inventory not needed to support requirements indicates that the Air Force did not cancel orders or deobligate funds for items that were not needed to support requirements. Furthermore, based on the Air Force’s fiscal year 2005 stratification report, the Air Force marked for disposal approximately $300 million of its on-order inventory that is not needed to support requirements. This means that as soon as these on- order items are delivered, they could be disposed of. Table 2 shows the amount of unneeded inventory the Air Force had on order at the end of fiscal year 2002 through the end of fiscal year 2005. At the end of fiscal year 2005, the Air Force had 2,157 unique items (with a quantity of 788,515 individual parts) valued at $1.1 billion with inventory on order that was not needed to support requirements. Of these 2,157 items, there were 1,192 unique items (with a quantity of 723,147 individual parts) that had unneeded inventory both on order and on hand. These items represented approximately 74 percent, or about $0.8 billion of the total $1.1 billion of Air Force’s on-order items that were not needed to support requirements. Appendix II contains a list of the top 10 types of items, identified by the federal supply class, with the highest value of unneeded items on order as of September 30, 2005. The Air Force has not been effective in reducing the amount of its unneeded inventory on order, with an average of $1.3 billion of its on- order inventory over the past 4 years not being needed to support requirements. The Air Force has continued to purchase this unneeded on- order inventory because its policies do not provide incentives to reduce the amount of inventory on order that is not needed to support requirements. Instead, the Air Force has revised its policies to make it easier to purchase inventory that is not needed to support requirements. 
For example, in June 2006 the Air Force Materiel Command announced a change in its policy for reviewing contract termination actions valued at $1 million or less, requiring each air logistics center to review at least 80 percent of the center's total computed termination value, with priority given to those terminations with the highest dollar value. Under its prior policy, all such orders were required to be reviewed for potential contract termination. We did not evaluate this new policy to determine the overall impact that it would have on purchasing items not needed to support requirements because the policy was not in effect during our review period, but it appears that it will exacerbate the problem. Until Air Force policy provides incentives, such as requiring contract termination review for all unneeded on-order inventory or reducing the amount of funds available to the Air Force Materiel Command by an amount up to the value of the Air Force's on-order inventory that is not needed to support requirements, the Air Force is likely to continue to experience its long-standing problems with having on-order inventory that is not needed to support requirements. In our discussions with Air Force Materiel Command officials, they disagreed with our assertion that they do not have incentives to assist them in reducing the amount of on-order inventory that is not needed to support requirements. According to an Air Force Materiel Command official, the Air Force has a plan to create a new data system to improve the process for identifying on-order inventory that should be terminated. However, this official stated that there is not yet a designated amount of funding in place to finance the initiative; thus, it is unclear when this plan would be implemented. Although higher demands helped the Air Force slightly reduce the percentage of its on-hand inventory not needed to support requirements during fiscal year 2002 through fiscal year 2005, more than half of its on-hand inventory was unneeded. Our analysis shows that between September 30, 2002, and September 30, 2005, the percentage of the Air Force's unneeded on-hand inventory was reduced by 2.7 percent, due, in part, to increases in the demand for the items, although the value of this unneeded inventory remained the same. Despite this reduction, an average of about 65 percent ($18.7 billion) of the value of the Air Force's on-hand inventory was not needed to support requirements. Examples of unneeded on-hand inventory include jet engines, electrical and communication equipment, radar equipment, guided missile components and subsystems, aircraft gun fire control components, and other aircraft components. Table 3 shows the amount of unneeded inventory the Air Force had on hand from the end of fiscal year 2002 through the end of fiscal year 2005. At the end of fiscal year 2005, the Air Force had 87,480 unique items (with a quantity of 5,776,442 individual parts) valued at $18.7 billion with inventory on hand that was not needed to support requirements. Of these 87,480 items, 1,192 unique items (with a quantity of 775,791 individual parts) had unneeded inventory both on order and on hand. These items represented approximately 4 percent, or about $0.8 billion, of the total $18.7 billion of the Air Force's on-hand inventory that was not needed to support requirements.
Appendix III contains a list of the top 10 types of items, identified by federal supply class, with the highest value of unneeded items as of September 30, 2005. Having on-hand inventory that is not needed to support requirements increases overall storage costs for the Air Force. According to Air Force officials, the cost to store this inventory is small compared to the cost to dispose of and then later repurchase these items if they are needed. However, we calculated that, as of September 30, 2005, it cost the Air Force at least $15 million annually to store its usable inventory not needed to support on-hand requirements. In addition, depending on the location where repairable broken items are stored, it could cost up to an additional $15 million to store unneeded inventory items that have not been repaired. If the Air Force did not have this unneeded inventory, it might be in a better position to reduce its warehousing infrastructure and associated costs. Moreover, the $18.7 billion in on-hand inventory not needed to support requirements indicates that the Air Force may not have canceled orders for items that were not needed or may have tied up funds that could have been obligated for other needed items. Of the Air Force's on-order and on-hand inventory not needed to support requirements, 79 percent had no recurring demands at all, resulting in a potentially infinite supply of those items. Examples of unneeded inventory with no recurring demands include jet engines, electrical hardware, guided missiles, fusing and firing devices, and airframe and other aircraft components. The Air Force has continued to retain this unneeded inventory with no recurring demands, in part, because it has not performed a comprehensive assessment of its on-hand inventory items that are not needed to support requirements and have no recurring demands, nor has it revalidated the need to continue retaining these items. In our discussions with Air Force Materiel Command officials, they disagreed with our assertion that they should conduct a comprehensive assessment to determine whether to retain this unneeded inventory. According to an Air Force Materiel Command official, the Air Force's quarterly requirements computation process is a valid assessment for determining the amount of inventory needed to satisfy its requirements. However, this process does not provide a comprehensive assessment of whether to retain inventory items not needed to satisfy requirements. Instead, the requirements computation process determines the amount of inventory needed to be on hand and on order to satisfy current and future requirements and identifies the amount of inventory that is above those requirements. An Air Force Materiel Command official also stated that the Air Force provides item management specialists with the necessary guidance for retaining assets that are not needed to support requirements and that it conducts an annual assessment of the inventory items that are being retained. The official commented that although these assets may show no current demands, there may be future demands for the items; thus, the Air Force retains them for possible future use.
However, given that 79 percent of the Air Force's on-order and on-hand inventory not needed to satisfy its current requirements consists of items that have no recurring demands, resulting in a potentially infinite supply of those items, we continue to believe that a comprehensive assessment is needed to determine which and how many of these items should be retained. For the 21 percent of Air Force inventory not needed to support requirements that had projected recurring demands, we found that the demand for these items slightly increased, thereby improving the likelihood that these items will be used. For example, in fiscal year 2005, 82 percent of the unneeded items with projected recurring demands were projected to be used within a period of 10 years or less, whereas in fiscal year 2002, only 79 percent were. Figure 1 shows a comparison of the number of Air Force unneeded on-hand and on-order inventory items stratified by years of supply for fiscal years 2002 and 2005. On the basis of the number of items and their value, in fiscal year 2002 and fiscal year 2005 the largest category of Air Force secondary inventory not needed to support requirements was "2 to 10 years of supply." At the end of fiscal year 2005, there were 6,361 unique items valued at about $4.2 billion within this category. The value of these items was the largest of all the years-of-supply categories, representing about 32 percent of the total value of the supply years stratified. We also found that the amount of inventory in the most current years of supply improved from 2002 to 2005. In fiscal year 2005, about 31 percent of the items with projected recurring demands had an anticipated supply of less than 1 year, an increase of about 4 percentage points from fiscal year 2002, when the figure was about 27 percent. Responses from Air Force item management specialists and our analysis of the Air Force's inventory data identified a variety of reasons for maintaining on-order and on-hand inventory not needed to support current requirements, such as decreasing demands, retaining items used to support aging weapon systems that have diminishing sources of supply or are being phased out of service, retaining items to support new weapon systems, and not terminating eligible contracts for on-order items not needed to support requirements. We conducted a survey of selected Air Force inventory items, which identified a variety of reasons for having items not needed to support inventory requirements. Table 4 summarizes the estimated frequency of reasons for having unneeded on-order and on-hand inventory as reported in our survey results. Based on our sample, decreases in demand and changes in implementation schedules for inventory replacement were the reasons most frequently cited for on-order inventory not needed to support requirements. Decreases in demand and weapon systems being phased out were the reasons most frequently identified for unneeded on-hand inventory. Specific examples and more detailed discussion of some of these reasons appear in the subsections that follow. For more details on our item selection and survey methodology, refer to appendix I. Many of these reasons are long-standing and systemic inventory management problems that we have identified in our prior reports. Since early 1990, when we began reporting on this issue, decreases in demand, obsolescence, and data input errors have been among the reasons given for DOD's excess inventory.
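The years-of-supply stratification above follows directly from dividing an item's unneeded quantity by its annual recurring demand, with zero demand yielding the report's "potentially infinite supply." The following Python sketch illustrates the computation; the data are hypothetical, and only the "less than 1 year" and "2 to 10 years of supply" bucket boundaries are named in the text, so the remaining boundaries are assumptions.

```python
import math

def years_of_supply(unneeded_qty, annual_recurring_demand):
    """Years of supply represented by inventory not needed to support
    requirements; with no recurring demand, supply is effectively infinite."""
    if annual_recurring_demand <= 0:
        return math.inf
    return unneeded_qty / annual_recurring_demand

def stratify(items):
    """Bucket (quantity, annual demand) pairs into supply-year categories."""
    buckets = {"<1 year": 0, "1-2 years": 0, "2-10 years": 0,
               ">10 years": 0, "no recurring demand": 0}
    for qty, demand in items:
        y = years_of_supply(qty, demand)
        if math.isinf(y):
            buckets["no recurring demand"] += 1
        elif y < 1:
            buckets["<1 year"] += 1
        elif y <= 2:
            buckets["1-2 years"] += 1
        elif y <= 10:
            buckets["2-10 years"] += 1
        else:
            buckets[">10 years"] += 1
    return buckets

# Hypothetical (unneeded quantity, annual recurring demand) pairs.
print(stratify([(50, 100), (300, 100), (40, 0)]))
# {'<1 year': 1, '1-2 years': 0, '2-10 years': 1, '>10 years': 0,
#  'no recurring demand': 1}
```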
Additionally, in our March 2007 report reviewing DOD’s administrative and production lead time requirements, we found that inaccurate forecasting of these requirements led to early delivery of items valued at approximately $2 billion—of which the Air Force represented $0.3 billion—resulting in additional inventory on hand that was not needed to support requirements. Based on our survey, we estimate that decreases in demand, or demand not materializing at all, account for 29 percent of items with on-order inventory not needed to support requirements and 37 percent of items with on-hand inventory not needed to support requirements. We estimate that decreases in demand were a factor in at least $0.97 billion of the unneeded on-hand Air Force inventory. Moreover, since 1997, DOD’s data have shown that demand decreasing or not materializing at all has been the primary reason for having on-order and on-hand inventory not needed to support requirements. Demand includes both recurring and nonrecurring demands. A one-time event, such as the initial upgrading of selected parts of a weapon system, is considered a nonrecurring demand. In our 1997 report, a decrease in demand or demand not materializing was also the primary reason for DOD having unneeded on-order and on-hand inventory, representing 24 percent and 11 percent, respectively, of the items sampled. Similarly, in 2000, we reported that while DOD inventory managers made inventory purchases that were supported by requirements at the time they were contracted, subsequent requirement decreases resulted in the purchases being in excess of requirements. During our analysis, Air Force officials acknowledged that they are aware that decreases in demand have resulted in having more inventory than is needed to support requirements; however, the Air Force has not evaluated why it continues to experience these decreases in demand or taken actions to mitigate the effect of these changes. Until the Air Force evaluates why it continues to have long-standing changes in demand, it will continue to have on-order and on-hand inventory that is not needed to support requirements, which may result in unnecessarily increased storage costs and obligation of funds earlier than necessary. In addition, until the Air Force evaluates these decreases in demand, it will be unable to take the management actions necessary to reduce unneeded on-hand and on-order inventory. Many of the Air Force’s inventory items not needed to satisfy requirements are items used to support aging weapon systems that have diminishing sources of supply or are being phased out of service. Based on our sample, we estimate that 18 percent of unneeded on-hand inventory items are in this category. According to Air Force policy, items not needed to satisfy requirements may be retained by inventory management specialists if the items support older weapon systems and can no longer be procured. Additionally, DOD’s Supply Chain Materiel Management Regulation states that the Air Force is required to review and validate, at least once annually, the methodology used in deciding to retain these items. In responding to our surveys, many item management specialists cited various Air Force memoranda that contain the justification for retaining items that support aging weapon systems, such as the B-52 and the A/OA-10. 
For example, according to the retention memo for B-52 assets, the rationale for taking a conservative approach to disposing of excess inventory items is to counter routine difficulties in obtaining needed assets due to diminishing manufacturing sources, as well as the increasing cost of reprocuring these items should demand arise after on-hand assets have been disposed of. The B-52 is projected to remain in service until the year 2040. According to an Air Force memorandum, unless an item or system supporting the B-52 is replaced, most of these inventory items will be required at some point during the weapon system’s projected life. Similar reasons were given for retaining the A-10 assets. Moreover, item management officials for the A-10 have requested that all assets supporting this weapon system—many of which currently have little or no usage—be retained for the projected life of the weapon system, through the year 2028. Based on our sample, we estimate that there is at least $24 million worth of inventory on hand that supports the A-10 and B-52 weapon systems. Although actual usage rates may be small and these systems will remain in service for a long time, unless the Air Force establishes some baseline requirements for the items supporting them, it will continue to have large quantities of inventory on hand that appear not to be needed to support requirements, even though it projects that these items may be needed in the future to support these weapon systems. The Air Force is also retaining some inventory items because they may potentially be used to support new weapon systems. In June 2005, the Air Force Materiel Management Division directed that all parts for the F-16 aircraft weapon system be retained for a period of at least 1 year until the Air Combat Command completes an analysis of alternatives on the next-generation replacement for the QF-4 aircraft weapon system. In July 2006, this retention policy was extended until the analysis of alternatives is completed in 2007 and a decision is made. Currently, the F-16 is a leading candidate for replacing the QF-4 aircraft that will be phased out of service; thus, the future requirements for assets supporting the F-16 are unknown at this time. As a result, the Air Force is retaining all F-16 assets because they may be used to support the new weapon system. According to Air Force officials, they are applying lessons learned from the QF-4 program, in which they documented cases of repurchasing previously owned Air Force inventory from salvage contractors, usually at very high prices. Based on our sample, we estimate that 10 percent of items on hand that are not needed to meet current requirements are used to support the F-16 aircraft weapon system. Some of the Air Force’s on-order items not needed to support requirements remain on order because the contracts for these items have not been terminated. The Air Force defines items on order that are in excess of their requirements objective as termination quantities, which should be considered for contract cancellation under Air Force policy. As of September 30, 2005, the Air Force had 789 unique items (about 115,000 individual parts), valued at about $261 million, that should have been considered for contract termination. However, based on our sample, we estimate that only 5 percent of the contracts for items that should have been considered for termination actually were terminated or reduced. 
Item management specialists reported that contracts were not canceled or the quantity on contract was not reduced for a variety of reasons, including the following: items were delivered before the termination quantities were identified; items were delivered before termination actions were taken; contract termination model results showed that it was not economically feasible to terminate contracts; items were purchased as government-furnished equipment to support contractor repair; data errors resulted in contracts being inaccurately identified for termination; and manpower constraints resulted in the issuance of an interim policy directing that no contracts valued at $1 million or less be terminated. For these items, we did not determine whether the Air Force ran the termination model in a timely manner to determine the feasibility of canceling the orders or bringing the items into inventory, nor did we determine whether the Air Force responded to the model’s recommendations in a timely manner. One frequent reason noted for lack of action to terminate or reduce a contract was an interim policy in effect from March 2005 through June 2006 at the Oklahoma City Air Logistics Center, directing that no termination actions be taken for items valued at $1 million or less. For these items, item management specialists also were not required to perform the contract cancellation computation to determine whether it was economically feasible to terminate the contracts. According to Oklahoma City Air Logistics Center officials, this revised termination policy was instituted because mandatory training requirements had decreased the manpower available to accurately and completely process these items with potential excess inventory. For the items that we computed to be on-order inventory not needed to meet requirements as of September 30, 2005, this policy resulted in the acquisition of about 77 percent of the Oklahoma City Air Logistics Center’s on-order inventory not supported by requirements, valued at $123 million. Although more than half of its secondary inventory was not needed to support requirements, the Air Force still had shortages of certain items in inventory. Between September 30, 2002, and September 30, 2005, the percentage and value of the Air Force’s inventory shortages remained the same—at about 8 percent and $1.2 billion of its required inventory—while it maintained about $20.0 billion in items on order and on hand that were not needed to support requirements. In fiscal year 2005, the Air Force experienced shortages of about $1.2 billion for some 7,866 unique items (with a quantity of 371,961 individual parts), which may negatively affect readiness. Table 5 summarizes the value of the Air Force’s inventory shortages during this 4-year period. The reasons cited by Air Force item management specialists for their inventory shortages varied. Table 6 summarizes the estimated frequency of reasons why these items did not meet overall inventory requirements. For more details on our item selection and survey methodology, see appendix I. The most frequent reasons identified by item management specialists in the sample were “other” and “no shortages reported.” The specific reasons most frequently cited for shortages were lost or delayed repair capability, increases in demand, and data errors. For example, lost or delayed repair capability was cited as a reason for shortages of a jet engine fuel pump and an electronic circuit card. 
Additionally, shortages of a transistor and a dual-level valve in fiscal year 2005 were attributed to increases in demand. In our previous work, we have similarly reported that increases in demand, the use of substitute items, and weapon system upgrades or modifications have been reasons for inventory shortages. The nation faces an increasingly fiscally constrained environment in which it is imperative that the Air Force exercise good stewardship over the billions of dollars invested in its inventory. At a time when the Air Force is making personnel reductions due to fiscal challenges, its ineffective and inefficient inventory management practices hinder its ability to allocate its resources efficiently. On average, from fiscal year 2002 through fiscal year 2005, the Air Force experienced shortages for some required items, valued at about $1.2 billion, which may have negatively affected readiness. However, during this same period, the Air Force maintained about $20 billion worth of items both on order and on hand that were not needed to support requirements. When the Air Force buys unneeded items, it obligates funds unnecessarily, which could leave it without sufficient funds to purchase needed items and may also negatively affect readiness. Correcting these problems would free up funds that could then be used to purchase items needed to reduce the Air Force’s inventory shortages or to meet other Air Force requirements. Without modifying its policies to provide incentives to reduce the amount of inventory on order that is not needed to support requirements, or conducting a comprehensive assessment to validate the need to retain unneeded on-hand inventory that has no recurring demands, the Air Force will continue its past practices of purchasing and retaining items that it does not need and then spending additional resources to handle and store these items. Unless the Air Force establishes ongoing requirements for items supporting weapon systems with lengthy projected life spans, the spare parts used in these systems will appear to be unneeded even though the Air Force plans to retain them and expects that they will be needed over the life span of the system. Moreover, although inventory requirements change as a result of changes in national threat levels and missions, continuing decreases in demand have caused more inventory to be on hand than is needed to support requirements. Until the Air Force evaluates why it continues to experience long-standing decreases in demand, it will continue to maintain inventory that is not needed to support requirements, which may result in unnecessarily increased storage costs. 
To meet customer expectations while minimizing inventory and to reduce the Air Force’s inventory not needed to support requirements, we are recommending that the Secretary of Defense direct the Secretary of the Air Force to take the following four actions: modify its policies to provide incentives to reduce purchases of on-order inventory that is not needed to support requirements, such as requiring contract termination review for all unneeded on-order inventory or reducing the funding available for the Air Force Materiel Command by an amount up to the value of the Air Force’s on-order inventory that is not needed to support requirements; conduct a comprehensive assessment of the inventory items on hand that are not needed to support requirements and that have no recurring demands and revalidate the need to continue to retain these items, and, as part of this assessment, consider establishing ongoing requirements for items supporting weapon systems that have lengthy projected life spans; evaluate the reasons why the Air Force continually experiences decreases in demand, which have contributed to having more than half of its inventory on hand not needed to support requirements; and, after evaluating the reasons for the decreases in demand, determine what actions are needed to address these decreases and then take steps to implement those actions. In written comments on a draft of this report (reprinted in app. IV), DOD concurred with three of our recommendations and partially concurred with one. DOD cited specific actions it plans to take to implement the four recommendations and specified implementation timelines for each recommendation. We do not believe that DOD’s planned actions are fully responsive to two of our recommendations. Our evaluation of DOD’s planned actions is discussed in detail below. DOD partially concurred with our recommendation for the Air Force to modify its policies to provide incentives to reduce purchases of on-order inventory that is not needed to support requirements. While DOD agreed that opportunities exist to reduce Air Force on-order inventory by ensuring that on-order materiel above the reorder point is properly reviewed and that measures are put in place to ensure that Air Force inventory management specialists follow excess on-order termination procedures, it did not agree that a change or modification to the Air Force’s policy was required to accomplish this task, as we recommended. DOD said that the Air Force plans to address this issue by enforcing existing policy and by placing an increased focus on excess on-order measures. However, DOD did not explain these measures or what steps it will take to ensure that they are effectively implemented. DOD plans to provide a status update on the implementation of this recommendation by the end of September 2007. While we believe the actions cited by DOD are a step in the right direction, we do not believe that these planned actions are fully responsive to our recommendation. In this report we found that the Air Force has continued not to terminate contracts for unneeded on-order inventory because its policies do not provide incentives to reduce the amount of inventory on order that is not needed to support requirements. For example, as we stated in our report, in June 2006 the Air Force revised its policy for reviewing contract termination actions valued at $1 million or less, which makes it easier to purchase inventory that is not needed to support requirements. 
This new policy requires each air logistics center to review at least 80 percent of the center’s total computed termination value, with priority given to the terminations with the highest dollar value. Under its prior policy, all such orders were required to be reviewed for potential contract termination. As a result, the revised policy will require fewer on-order inventory items to be reviewed for potential contract termination. Given that we found that more than half of the Air Force’s on-order inventory was not needed to support on-order requirements at a time when the old policy requiring review of all orders was in effect, we believe that this new policy will exacerbate the problem. Thus, we continue to believe that the Air Force needs to modify its current policy to provide incentives to reduce purchases of on-order inventory, as we recommended. DOD concurred with our second recommendation to conduct a comprehensive assessment of unneeded on-hand inventory, stating that it agreed that opportunities exist to reduce Air Force on-hand inventory for items that are not needed to support requirements and have no recurring demands and that the need to continue to retain these items should be validated. DOD stated that the Air Force will review its current stockage retention policy and take the actions necessary to reduce the inventory as required. DOD also stated that the Air Force will conduct annual reviews of all inventory items as directed by DOD’s supply chain management policy. DOD plans to provide a status update on the implementation of this recommendation by the end of September 2007. DOD also commented that no further guidance was needed. While we recognize that some of this inventory should be retained for economic or contingency reasons, we believe that added scrutiny should be applied to the Air Force’s review of its stockage retention policy to ensure that it is not retaining assets that are not needed to support current and future operational needs. Based on our work, we believe that the Air Force has tremendous potential for reducing its inventory because much of the inventory has no projected recurring demands, meaning that it is unlikely that this inventory will ever be used. In other cases, inventories may not be needed because many years of supply are on hand. DOD’s planned actions are a step in the right direction; however, unless and until the Air Force makes appropriate adjustments to its inventory retention levels, there is no assurance that significant improvements will be made in reducing the Air Force’s on-hand inventory not needed to support requirements. In responding to our second recommendation, DOD did not address the portion of the recommendation directing the Air Force to consider establishing requirements for items that support weapon systems with lengthy projected life spans. Without establishing requirements for items that the Air Force wants to retain for future use, it will be difficult to determine what portion of the inventory that is in excess of requirements is valid to retain. For example, as stated in our report, many of the items supporting the A-10 and B-52 weapon systems have minimal usage rates but are being retained to prevent difficulties in obtaining these assets in the future due to diminishing manufacturing sources. These weapon systems have projected life spans that could last until the years 2028 and 2040, respectively. 
Given the length of time these systems will continue to be in service, the Air Force needs to establish some baseline requirements for the items supporting them; otherwise, the Air Force will continue to have large quantities of inventory on hand that appear not to be needed to support requirements, even though these items may be needed in the future to support these weapon systems. Thus, we continue to believe that our recommendation is valid and that DOD should consider establishing requirements for these items. DOD concurred with our third recommendation to evaluate the reasons why the Air Force continually experiences decreases in demand, which have contributed to having more than half of its inventory on hand not needed to support requirements. DOD agreed that the Air Force experiences changes in demand levels and stated that these changes can be attributed to changes in Air Force missions, reliability and technology improvements, and modifications of inventory items. DOD stated that the Air Force plans to review the computation forecasting model and make any changes required to help ensure that future requirements reflect actual demands. DOD plans to provide a status update on the implementation of this recommendation by the end of September 2007. We believe that these actions are generally responsive to our recommendation. In responding to this recommendation, DOD also stated that our finding that more than half of the Air Force inventory on hand is not needed to support requirements is inaccurate. DOD has consistently disagreed with our definition of inventory not needed to support requirements because it differs from the definition that DOD uses for budgeting purposes. DOD policy identifies inventory not needed to support requirements based on current requirements and requirements that are projected through the end of a 2-year budget period. For our work, we analyzed the Air Force’s inventory against current requirements only. We do not believe that the projected requirements for the 2-year budget period should be considered in determining the amount of inventory needed to support current requirements. As stated in our report, if the Air Force did not have enough inventory on hand or on order to satisfy the projected requirements for the 2-year budget period, the requirements determination process would not result in additional inventory being purchased to satisfy these requirements. As a result, based on our analysis, we found that more than half of the Air Force’s on-hand and on-order inventory is not needed to support requirements. We continue to believe that our characterization of the Air Force inventory is reasonable because it reflects the amount of inventory needed to be on hand and on order to support current requirements. Finally, DOD fully concurred with our fourth recommendation to determine what actions are needed to address the decreases in demand and then take steps to implement those actions. DOD stated that the Air Force incorporates requirement changes resulting in decreased demands into the computation forecasting model as soon as those changes are known. However, DOD acknowledged that the key is to identify the changes soon enough to prevent or terminate buys that may not be needed. DOD stated that the Air Force will monitor the goals, actions, and deliverables as part of the Air Force computation forecasting model review. DOD plans to provide a status update on the implementation of this recommendation by the end of September 2007. 
We believe these actions will adequately address our recommendation. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Secretary of the Air Force; the Director, Defense Logistics Agency; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions concerning this report, please contact me at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To assess the data used in this report, we obtained the Air Force’s Central Secondary Item Stratification Budget Summary and item-specific reports for fiscal years 2002 through 2005. The stratification reports serve as a budget request preparation tool and a mechanism for military officials to review funding. Specifically, the Air Force uses this inventory stratification process to develop inventory budgets, show why inventory is held, and identify assets that are either on hand or on order as of the stratification date. Our analysis was based on evaluating the Air Force’s item stratifications within the opening position table of the Central Secondary Item Stratification Reports. To validate the data in the budget stratification reports, we generated summary reports using electronic data and verified our totals against the summary stratification reports obtained from the Air Force. The Air Force secondary inventory data are identified by unique stock numbers for each spare part, such as an engine for a particular aircraft, which we refer to as unique items. The Air Force may have in its inventory multiple quantities of each unique item, which we refer to as individual parts. We calculated the value of each unique item by multiplying the quantity of the item’s individual parts by the item’s unit price, which is the latest acquisition cost for the item. We then computed total values for all items in the inventory and recreated the stratification tables. This computation approach is consistent with the Department of Defense’s (DOD) process for valuing assets in its annual Supply System Inventory Report. In cases where we found discrepancies in our dataset because one or more items had been reported more than once in the stratification, we identified the duplicate item and removed it from the dataset. After assessing the Air Force data, we determined that the data were sufficiently reliable for the purposes of our analysis and findings. Upon completion of the data validation process, we revalued the Air Force’s secondary inventory items identified in its budget stratification summary reports because these reports value usable items and items in need of repair at the same rate and do not take into account the repair cost for repairable broken items. We computed the new value for items in need of repair by subtracting repair costs from the unit price for each item. In cases where the repair cost was greater than the unit price, we obtained new calculations from the Air Force for revaluing these assets. 
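To illustrate the valuation approach described above, the following sketch values usable items at quantity times unit price and revalues items in need of repair at unit price minus repair cost. All names and figures are hypothetical; cases where the repair cost exceeds the unit price are flagged for the separate revaluation described above.

```python
def value_item(quantity, unit_price, needs_repair=False, repair_cost=0.0):
    """Value a unique item: quantity times unit price, with repairable broken
    items revalued at unit price minus repair cost."""
    if not needs_repair:
        return quantity * unit_price
    if repair_cost > unit_price:
        # Such cases were revalued using new calculations obtained from the Air Force.
        raise ValueError("repair cost exceeds unit price; separate revaluation needed")
    return quantity * (unit_price - repair_cost)

# Illustrative only: 25 usable parts and 10 repairable broken parts of one item.
total = value_item(25, 4000.0) + value_item(10, 4000.0, needs_repair=True, repair_cost=1500.0)
print(f"total value for this unique item: ${total:,.2f}")  # $125,000.00
```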
To determine the causes of the $1.2 billion increase in the Air Force’s secondary item inventory levels between fiscal years 2002 and 2005, we analyzed the inventory to determine whether the increase was due to changes in the inventory’s value, changes in the quantity of items in inventory, new items added to the inventory, or obsolete items removed from the inventory. We excluded requirements for administrative and production lead time from the Air Force’s on-hand requirements. However, DOD’s practice has always been to use administrative and production lead time requirements to justify the amount of inventory it has on hand. We do not agree with this practice because, under DOD’s materiel management regulations, acquisition lead time quantities are not required to be on hand. Acquisition lead time is the sum of administrative and production lead times. However, we do agree with DOD that excess on-hand inventory should be used to offset or satisfy requirements for lead time because doing so would reduce the amount of inventory that needs to be on order. In commenting on our past reports, DOD and the Air Force have disagreed with our definition of inventory not needed to satisfy current operating requirements because it differs from the definition used for the inventory budget process. We consider the Air Force to have unneeded on-order or on-hand inventory if it has more inventory than is needed to satisfy its requirements based on the opening position table of the Air Force’s budget stratification report. However, if the Air Force has more inventory on order or on hand than is needed to satisfy its requirements, it does not consider the inventory beyond the requirements to be unneeded. Instead, the Air Force uses the on-order inventory that is beyond its on-order requirements to satisfy future demands over a 2-year period and contingency retention requirements. Similarly, when the Air Force has on-hand inventory that is beyond its on-hand requirements, it uses the inventory to satisfy future demands over a 2-year period, lead time requirements, economic retention requirements, and contingency retention requirements. Only after applying inventory to satisfy these additional requirements would the Air Force consider that it has more inventory than is needed and would consider this inventory for potential reutilization or disposal. We do not agree with the Air Force’s practice of not identifying inventory used to satisfy these additional requirements as excess because this practice overstates the amount of inventory needed to be on hand or on order by billions of dollars. The Air Force’s requirements determination process does not consider these additional requirements (except for on-hand inventory needed to meet lead time requirements) when it calculates the amount of inventory needed to be on hand or on order. If the Air Force did not have enough inventory on hand or on order to satisfy these additional requirements, the requirements determination process would not result in additional inventory being purchased to satisfy them. Tables 7 and 8 show a comparison of our analysis and the Air Force’s stratification results of how on-order and on-hand inventory for fiscal year 2005 was applied to satisfy requirements. 
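The following sketch illustrates the definitional difference discussed above. Under our approach, any on-hand or on-order quantity beyond the current requirements shown in the opening position table is treated as unneeded, before the additional 2-year demand, lead time, or retention requirements are applied. The function and figures are hypothetical and simplify the actual stratification logic.

```python
def classify(on_hand, on_hand_req, on_order, on_order_req):
    """Compare inventory positions to current requirements (simplified):
    anything beyond the current requirement counts as unneeded."""
    unneeded_on_hand = max(0, on_hand - on_hand_req)
    unneeded_on_order = max(0, on_order - on_order_req)
    # Simplified shortage check: total assets against total current requirements,
    # letting an on-hand surplus offset an on-order shortfall.
    shortage = max(0, (on_hand_req + on_order_req) - (on_hand + on_order))
    return unneeded_on_hand, unneeded_on_order, shortage

# Illustrative only: 120 units on hand against a requirement of 70, and
# 30 units on order against a requirement of 40.
print(classify(on_hand=120, on_hand_req=70, on_order=30, on_order_req=40))
# -> (50, 0, 0): the 50 surplus on-hand units are unneeded under our definition,
# even though the Air Force would first apply them to 2-year demand, lead time,
# and retention requirements before considering them excess.
```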
To determine the extent to which the Air Force’s on-order and on-hand secondary inventory reflects the amount of inventory needed to support requirements, we reviewed DOD and Air Force inventory management policies, past GAO products on DOD and Air Force inventory management practices for secondary inventory items, and other related documentation. We also compared the Air Force’s current inventory to its current on-order and on-hand operating requirements and computed the amount and value of secondary inventory exceeding or not meeting current operating requirements. To determine the amount and value of the Air Force inventory not needed to support requirements and inventory shortages, we reviewed the Air Force’s summary and item-specific budget stratification reports for fiscal years 2002 through 2005. We subdivided all items into one of four categories: (1) items that had only on-order inventory not needed to support requirements, (2) items that had only on-hand inventory not needed to support requirements, (3) items that had both on-order and on-hand inventory not needed to support requirements, or (4) items with inventory shortages. In computing the number and value of on-order items not needed to support requirements, we added the results from category one and the results from the on-order portion of category three. Similarly, we added the results from category two and the results from the on-hand portion of category three to compute the total number and value of on-hand items not needed to support requirements. Additionally, we calculated the storage costs of the inventory on hand that was not needed to meet requirements. We obtained the storage rates for the three different categories of storage—covered, open, and special—from the Defense Logistics Agency (DLA), where the inventory items were held. We then sent DLA officials a list of the Air Force inventory, and they identified the storage category of each item. To determine storage costs, we created a database that multiplied the number of items by the annual storage cost rate and by the volume per item. To distinguish between the categories of items, the storage costs for usable items and items in need of repair were calculated separately. Additionally, to understand whether the inventory not needed to support requirements had improved in relation to its years of supply, we calculated the number of supply years a given item would have based on its present quantity and demand. To determine the years of supply, we computed the projected years of supply using the projected recurring demand data for items with on-hand and on-order inventory not needed to support requirements. In fiscal years 2002 and 2005, items with projected recurring demands represented about 21 percent of the items with on-order and on-hand inventory not needed to support requirements. The remaining 79 percent of these items had no projected recurring demands, which means that their potential years of supply are infinite. We developed a survey to estimate the frequency of reasons why the Air Force maintained items in inventory that were not needed to support requirements or that did not meet requirements. 
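As a simplified illustration of the storage cost calculation described above, the sketch below multiplies each item’s quantity by its volume per item and by the annual rate for its storage category, costing usable and in-need-of-repair quantities separately. The rates and quantities shown are hypothetical; the actual rates were obtained from DLA.

```python
# Hypothetical annual storage rates per cubic foot for the three DLA storage
# categories; the actual rates were obtained from the Defense Logistics Agency.
RATES = {"covered": 0.50, "open": 0.20, "special": 1.25}

def annual_storage_cost(quantity, cubic_feet_per_item, category):
    """Annual storage cost: quantity x volume per item x category rate."""
    return quantity * cubic_feet_per_item * RATES[category]

# Illustrative only: usable and in-need-of-repair quantities costed separately,
# as described in the methodology.
usable = annual_storage_cost(quantity=500, cubic_feet_per_item=2.0, category="covered")
repairable = annual_storage_cost(quantity=200, cubic_feet_per_item=2.0, category="special")
print(f"usable: ${usable:,.2f} per year; in need of repair: ${repairable:,.2f} per year")
```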
In the survey, we referred to the items that were not needed to support requirements as “excess” and the items that did not meet requirements as “shortages.” The survey asked general questions about the higher assembly (component parts) and/or weapon systems that the items support and whether the item is on the Air Force’s mission-critical items list (i.e., the Air Force Readiness Driver Program). In addition, we asked survey respondents to identify the reason(s) for the excess or shortage. We provided potential reasons as responses from which they could select, based on reasons identified in our prior work. Since the list was not exhaustive, we provided a response option of “other, please explain.” Finally, we asked that survey respondents provide copies of any implementation plans, schedules, and initiatives planned or in place to reduce excesses or address shortages. In addition to an expert technical review of the questionnaire by an independent survey methodologist, we conducted in-depth pretests with item management specialists at the Cryptologic Systems Group in San Antonio, Texas, prior to deploying the final survey instrument. We revised the questionnaire based on findings from the pretests. We sent this survey electronically to the specific item management specialists in charge of sampled unique items at the Air Force’s air logistics centers. To estimate the frequency of reasons for inventory not needed to meet requirements and inventory shortages, we drew a stratified random probability sample of 335 unique items—230 with excess inventory and 105 with inventory shortages—from a study population of 18,676 items—10,810 with inventory not needed to meet requirements and 7,866 with inventory shortages. Based on our analysis of the Air Force stratification data for fiscal year 2005, there were 88,445 unique items with inventory not needed to meet requirements, valued at $19.8 billion. Of these 88,445 items, 10,810 met our criteria for inclusion in our study population of items not needed to meet requirements. These items were valued at $12.4 billion and represented 12 percent of the total unique items and 63 percent of the total dollar value of items not needed to meet requirements. Additionally, based on our analysis of the stratification data, all 7,866 unique items with inventory shortages, valued at $1.2 billion, met our criteria for inclusion in our shortage study population. We selected our sample of items not needed to meet requirements from six strata defined by the criteria described in table 9. Our shortage sample was selected from two strata defined by the criteria described in table 10. The divisions of the population, sample, and respondents across the strata, as well as the response rate by stratum, are also shown in tables 9 and 10. We sent 335 electronic surveys—one survey for each item in the sample—to the 230 Air Force item management specialists identified as being responsible for these items. Ultimately, we received 295 responses to the survey, for adjusted response rates of 82 percent for excess items and 88 percent for shortage items. Each sampled item was subsequently weighted in the final analysis to represent all members of the target population. Because we followed a probability procedure based on random selections, our sample of unique items is only one of a large number of samples that we might have drawn. 
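The sketch below illustrates the stratified sampling and weighting approach in simplified form: items are drawn at random within each stratum, and each responding item is later weighted by its stratum’s population size divided by the number of respondents in that stratum. The stratum names, sizes, and response rate shown are hypothetical; the actual strata are defined in tables 9 and 10.

```python
import random

# Hypothetical strata with population sizes and planned sample sizes.
strata = {
    "excess, higher value": {"population_size": 1200, "sample_size": 60},
    "excess, lower value": {"population_size": 9610, "sample_size": 170},
    "shortage": {"population_size": 7866, "sample_size": 105},
}

random.seed(42)  # fixed seed so the illustration is reproducible
for name, s in strata.items():
    # Draw a simple random sample of item indices within the stratum.
    s["sample"] = random.sample(range(s["population_size"]), s["sample_size"])
    # Assume, for illustration, that 85 percent of sampled items respond.
    s["respondents"] = int(s["sample_size"] * 0.85)
    # Each respondent represents population_size / respondents items.
    s["weight"] = s["population_size"] / s["respondents"]
    print(f"{name}: weight per responding item = {s['weight']:.1f}")
```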
Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results in 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from our sample have margins of error (that is, half-widths of confidence intervals) of plus or minus 10 percentage points or less, at the 95 percent confidence level, unless otherwise noted. In addition to sampling errors, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, data were collected electronically and exported for analysis, eliminating data entry error. We also reviewed each survey to identify unusual, incomplete, or inconsistent responses and followed up with item management specialists by telephone to clarify those responses. In addition, we performed computer analyses to identify inconsistencies and other indicators of errors and used a second independent reviewer for the data analysis to further minimize such errors. On the basis of information obtained from the Air Force on the reliability of its inventory management systems’ data, the survey results, and our follow-up analysis, we believe that the data used in this report were sufficiently reliable for reporting purposes. In addition to meeting with Air Force officials at the Air Force Materiel Command in Dayton, Ohio, we conducted telephone interviews with, and e-mailed correspondence to, inventory management officials at the three Air Force air logistics centers located in Macon, Georgia; Ogden, Utah; and Oklahoma City, Oklahoma, and at the Cryptologic Systems Group in San Antonio, Texas, to obtain answers to our questions. We conducted our work between January 2006 and February 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Lawson Gist, Jr., Assistant Director; Renee Brown; Natasha Ewing; Nancy Hess; Catherine Hurley; Jacqueline McColl; Matt Michaels; Steven Pruitt; Minnette Richardson; Terry Richardson; and George Quinn made key contributions to this report.
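As a simplified illustration of the estimation approach described in this appendix, the sketch below calculates a population-weighted proportion across strata and its 95 percent confidence interval. It omits refinements, such as finite population corrections, that the actual estimator may include, and all counts are hypothetical.

```python
import math

# Hypothetical per-stratum results: population size (N), respondents (n), and
# the number of respondents citing a given reason (hits).
strata = [
    {"N": 1200, "n": 50, "hits": 20},
    {"N": 9610, "n": 140, "hits": 45},
]

N_total = sum(s["N"] for s in strata)

# Stratified estimate: population-weighted average of the stratum proportions.
p_hat = sum((s["N"] / N_total) * (s["hits"] / s["n"]) for s in strata)

# Variance of the stratified proportion (no finite population correction).
variance = sum(
    (s["N"] / N_total) ** 2 * (s["hits"] / s["n"]) * (1 - s["hits"] / s["n"]) / s["n"]
    for s in strata
)
margin_of_error = 1.96 * math.sqrt(variance)  # 95 percent confidence level

print(f"estimated proportion: {p_hat:.1%} plus or minus {margin_of_error:.1%}")
```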
At a time when U.S. military forces and their equipment are in high demand, effective management of the Department of Defense's (DOD) inventory is critical to ensure that the warfighter has the right items at the right time. The Air Force is the largest contributor to DOD's total on-hand inventory on the basis of inventory value. Under the statutory authority of the Comptroller General to conduct evaluations on his own initiative, GAO determined the extent to which (1) the Air Force's on-order and on-hand inventory reflects the amount of inventory needed to support required inventory levels from fiscal years 2002 through 2005, and (2) the Air Force had shortages in its inventory needed to support required levels during this period. To address these objectives, GAO analyzed Air Force secondary inventory data (spare parts such as engines and guided missiles) from fiscal years 2002 through 2005. More than half of the Air Force's secondary inventory (spare parts), worth an average of $31.4 billion, was not needed to support required on-hand and on-order inventory levels from fiscal years 2002 through 2005, although increased demand due to ongoing military operations contributed to slight reductions in the percentage of inventory on hand and the number of years of supply it represents. DOD regulations provide guidance for developing materiel requirements based on customer expectations while minimizing inventories. However, the value of Air Force on-order inventory not needed to support required inventory levels increased by about 7.8 percent, representing an average of 52 percent ($1.3 billion) of its on-order inventory. The Air Force has continued to purchase unneeded on-order inventory because its policies do not provide incentives to reduce the amount of inventory on order that is not needed to support requirements. When the Air Force buys these items, it may obligate funds unnecessarily, which could lead to not having sufficient obligation authority to purchase needed items and could negatively affect readiness. In addition, although the percentage of Air Force on-hand inventory not needed to support required inventory levels fell by 2.7 percentage points due to increases in demand, about 65 percent ($18.7 billion) of this inventory was still not needed to support those levels. GAO calculated that it costs the Air Force from $15 million to $30 million annually to store its unneeded items. Of the Air Force's inventory items not needed to support required inventory levels, 79 percent had no recurring demands (such as engines and airframe components), resulting in a potentially infinite supply of those items. The Air Force has continued to retain this unneeded inventory with no recurring demands, in part, because it has not performed a comprehensive assessment to revalidate the need to continue to retain these items. For the remaining 21 percent of items that had recurring demands, increasing demands resulted in a reduction in the number of years of supply that this inventory represents, with the largest quantity and value of items having between 2 and 10 years of supply. Inventory not needed to support required inventory levels can be attributed to many long-standing problems, such as decreasing demands, retaining items used to support aging weapon systems that have diminishing sources of supply or are being phased out of service, and not terminating contracts for on-order items. 
Air Force officials acknowledged that decreases in demand have resulted in having more inventory than is needed; however, the Air Force has not evaluated why it continues to experience decreases in demand or taken actions to mitigate the effect of these changes. Without taking actions to reduce its unneeded inventory, the Air Force will continue its past practices of purchasing and retaining items it does not need and then spending additional resources to handle and store these items. Although more than half of its secondary inventory was not needed to support required levels, the Air Force still had shortages of certain items. From fiscal years 2002 through 2005, the percentage and value of the Air Force's inventory shortages remained the same, at about 8 percent and $1.2 billion, respectively.
Created in 2010, CDCI was one of the later TARP programs and was intended to help mitigate the adverse impact that the financial crisis was having on communities underserved by traditional banks. CDCI is structured much like the TARP Capital Purchase Program (CPP), in that both provide capital to financial institutions by purchasing preferred equity and subordinated debt from them. However, CDCI differs from CPP in several important ways. First, CDCI provided financial assistance only to CDFIs, which provide financial services to low- and moderate-income, minority, and other underserved communities. Second, CDCI also provided assistance to credit unions, unlike CPP, which provided capital only to banks. Finally, CDCI provided more favorable capital terms to its participants than CPP did. Specifically, CDCI investments have an initial dividend or interest rate of 2 percent, compared with 5 percent under CPP. The dividend or interest rate increases to 9 percent after 8 years under CDCI, compared with 5 years under CPP. Treasury finalized the last of its $570 million in CDCI investments in September 2010, just prior to the expiration of its TARP purchasing authority. The 84 participating institutions included 36 banks and 48 credit unions. Twenty-eight of the 36 banks were former CPP participants that were in good standing in that program and thus were allowed to refinance their CPP shares at a lower rate in CDCI. Of these 28 banks, 10 received additional disbursements under CDCI. As shown in table 1, CDCI terms varied depending on the type of institution receiving the capital. In general, banks received capital by issuing to Treasury preferred stock representing not more than 5 percent of their risk-weighted assets. The capital they received in return was generally treated as tier 1 capital for regulatory purposes, with a perpetual term. Federal banking regulators classify capital as either tier 1—currently the highest-quality form of capital—or tier 2, which is weaker at absorbing losses. Credit unions issued unsecured subordinated debentures totaling not more than 3.5 percent of their total assets. In exchange, Treasury provided participating credit unions with secondary capital that boosted their net worth until 5 years before the maturity date, at which point it would begin amortizing at 20 percent per year. All institutions participating in CDCI are required to make quarterly dividend or interest payments to Treasury. After 8 years, the initial dividend or interest rate of 2 percent increases to 9 percent. As of April 30, 2014, 68 of the original 84 CDCI institutions remained in the program. Fifteen institutions (six banks and nine credit unions) had exited through repayment, while one institution had exited as a result of its subsidiary bank's failure. Three of the banks and at least one of the credit unions that exited the program did so when they merged with or were acquired by institutions that were not certified CDFIs; CDCI terms required them to repay their investments, as non-CDFIs were not eligible for the program. Two of the 68 remaining institutions had begun to repay the principal on the investments they had received, while the other remaining institutions had paid only dividends and interest. Repayments and income from dividends and interest to date have amounted to less than a quarter of Treasury's $570.1 million investment in 2010 (see fig. 1). 
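To make the capital terms above concrete, the sketch below computes the annual dividend or interest owed before and after the step-up from 2 percent to 9 percent, and the portion of a credit union's CDCI funds still countable as secondary capital once the 20-percent-per-year amortization begins 5 years before maturity. The investment amount is hypothetical, and the sketch assumes the amortization occurs in annual steps.

```python
def annual_payment(principal, years_since_investment):
    """CDCI dividend or interest: 2 percent initially, 9 percent after 8 years."""
    rate = 0.02 if years_since_investment < 8 else 0.09
    return principal * rate

def countable_secondary_capital(principal, years_to_maturity):
    """Secondary capital counts fully until 5 years before maturity, then
    (under this sketch's annual-step assumption) amortizes 20 percent per year."""
    if years_to_maturity >= 5:
        return principal
    return principal * max(0, years_to_maturity) * 0.20

# Illustrative only: a $2 million investment in a credit union.
print(annual_payment(2_000_000, years_since_investment=3))   # 40000.0 (2 percent)
print(annual_payment(2_000_000, years_since_investment=9))   # 180000.0 (9 percent)
print(countable_secondary_capital(2_000_000, years_to_maturity=3))  # 1200000.0
```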
As of April 30, 2014, the outstanding investment balance for CDCI was $467.4 million, reflecting repayments and write-offs totaling $102.7 million. Specifically, as of that date, Treasury had received approximately $96.0 million in principal repayments from CDCI recipients. In addition, as a result of the failure of Premier Bancorp, Inc.'s subsidiary bank, which entered receivership and had its assets liquidated, Treasury in January 2013 wrote off nearly all of its $6.8 million investment in Premier. Treasury also did not collect more than $300,000 in unpaid dividends and interest from Premier. CDCI participants have also paid $38.3 million in dividends and interest. Treasury has lowered its estimates of the program's lifetime cost over the last 2 years as market conditions have improved and institutions have begun to repay their investments. As of November 2010, Treasury estimated the program's lifetime cost at about $290 million; as of February 28, 2014, it estimated the cost at $80 million. According to representatives from the remaining CDCI institutions we spoke to, factors such as access to capital, the benefits of CDCI capital, and CDCI program terms affected participants' decisions to remain in or exit the program. Specifically, we interviewed a nonprobability sample of 8 banks and 9 credit unions remaining in the program, as well as organizations representing the banks and credit unions participating in CDCI. Representatives of CDCI participant institutions and bank and credit union organizations told us that a key factor in participants' decisions to remain in or exit CDCI was their ability to access other sources of capital. Representatives of a few of the CDCI institutions, as well as representatives of two organizations representing CDFIs, said that CDCI had been one of the few sources of external capital for small community banks and credit unions since the financial crisis and economic downturn. Representatives of the same organizations and a few other CDCI institutions explained that for many CDFI banks and credit unions, access to credit could be difficult and expensive. Some bank representatives said they were waiting to see whether the credit market improved closer to 2018 before making a decision on the timing of repayments. Some credit union representatives noted that they would pursue grants or nonmember deposits to replace the CDCI capital. One bank representative, as well as a bank organization representative and an investor in CDFIs, noted that the structure of the CDCI agreement gave Treasury priority status over other investors, making it difficult for these banks to attract additional investors and find replacement capital. This structure would be an issue if a bank were attempting to raise capital in smaller amounts than its CDCI capital, because the bank would have to balance the interests of both Treasury and new investors. Representatives from CDCI institutions we interviewed also mentioned several benefits of maintaining their CDCI capital. First, they stated that CDCI had allowed them to meet customer demand and provide access to services they otherwise would not have been able to provide. For example, three bank representatives we interviewed said the CDCI capital allowed them to purchase bank branches that were struggling or closing in underserved communities. They said that these purchases allowed them to ensure that residents of these communities had access to financial services. 
Several representatives noted that the CDCI capital allowed them to increase their lending. According to a representative of one credit union, the capital helped fund a loan to build a grocery store in a neighborhood that had previously lacked one. Two bank representatives and a credit union representative also noted that the capital had allowed them to make residential mortgage loans. As one of the representatives noted, other financial institutions were decreasing this type of lending during the financial crisis and economic downturn. Second, several representatives we interviewed noted that the 2 percent rate on the CDCI capital was lower than the rates they could obtain in private capital markets. Therefore, regardless of their current capacity to fully repay Treasury, they planned to keep the capital as long as it remained less expensive than alternative capital. Finally, regulations reflecting the Basel III regulatory capital reforms that are scheduled to take effect in 2015 will increase the percentage of capital that banks must hold. Three bank representatives noted that the increased capital requirements would make them more likely to hold their CDCI capital. Representatives from bank and credit union organizations and participating institutions also told us that changing program terms would influence institutions' decisions about exiting the program. The scheduled increase in the CDCI dividend rate (from 2 percent to 9 percent) that will take effect in 2018 is a key factor for institutions in deciding when to exit. Representatives from several CDCI institutions we interviewed told us that they would like to repay Treasury before the increase takes effect. However, a few of the representatives from remaining institutions whom we interviewed were uncertain about their institutions' ability to find other sources of inexpensive capital before the increase in 2018. A representative of one of the bank organizations stated that many of the remaining CDCI banks would likely struggle to pay dividends at the higher rate while maintaining services to their communities. In addition, we found that for credit unions, the treatment of CDCI funds as secondary capital may also affect repayment schedules. Credit unions with 2018 maturity dates on their CDCI securities (approximately half of the remaining credit unions, according to Treasury) have had to begin counting a portion of their capital as debt. Treasury representatives explained that, as a result of this regulatory capital rule, it was likely that many of these institutions would pursue repayment before 2018. Three credit union representatives we interviewed whose institutions' CDCI securities had a maturity date of 2018 stated that they hoped to either increase their earnings or find alternative sources of secondary capital in time to replace the CDCI capital. Treasury officials stated that they had not yet determined an exit strategy for CDCI but were studying various alternatives and would need to consider the interests of both participating institutions and taxpayers. The officials noted that CDCI differed from CPP because of the mission of the participating institutions, which focus on communities and populations lacking access to credit, banking, and other financial services. To date, Treasury has had a number of meetings with participating banks and with organizations representing the banks in the program and said that it was aware that these organizations were also looking at alternatives for the banks' exit from CDCI. 
Treasury officials said that they had not yet held similar meetings with organizations representing credit unions but planned to do so. In addition, Treasury officials said that they would meet with the federal financial regulators to discuss options. Treasury officials added that any decision would need to balance the mission of CDCI with the need to protect the taxpayers' investment. Treasury officials stated that, like CPP, CDCI would wind down as participants (1) repaid their investments or (2) in some cases, restructured them. While they had not yet determined what approach they would use for CDCI participants that did not follow either of these courses, Treasury officials said that they were exploring a number of options. Treasury has used auctions to sell some of its investments in CPP institutions. According to representatives of one organization we spoke with, some CDCI participants expressed concern that Treasury would also use the auction method for CDCI after announcing its auction strategy for CPP in 2012. These representatives noted that participants' mission to serve communities that lack access to financial services might suffer if investors such as hedge funds bought their securities from Treasury, because the investors' interests would not necessarily align with the institutions' interests. Similarly, representatives from the same organization told us that several CDCI banks and credit unions were classified as minority depository institutions and that some of these institutions had concerns that auctions could weaken their status as such. The Federal Deposit Insurance Corporation (FDIC) defines a minority depository institution as any federally insured depository institution with 51 percent or more minority ownership of its voting stock, or one in which a majority of the board of directors is minority and the community that the institution serves is predominantly minority. NCUA requires that a federally insured credit union's percentages of both minority members and minority management officials exceed 50 percent for minority depository institution status. Treasury officials told us that they were aware of the issues that auctions could present for CDCI institutions and had not determined whether they would incorporate auctions into their CDCI exit strategy. Further, when concerns first surfaced in 2012, after auctions were announced for CPP, Treasury issued a public statement clarifying that it had not yet determined what exit strategies would be used for CDCI. Treasury officials also told us that a few minority depository institutions had been part of the CPP auction process and that Treasury had consulted with FDIC on this matter for the CPP auctions. According to FDIC officials, FDIC was notified of the details of these auctions before they were finalized, and the officials stated that none of the CPP auctions affected the designation of any minority depository institutions. Most CDCI institutions have paid dividends and interest to Treasury on a timely basis, with only a small percentage missing payments over the life of the program. In addition, few of the remaining CDCI banks and credit unions are considered troubled by FDIC or NCUA. Moreover, remaining CDCI banks generally are financially stronger than certified CDFI banks that did not participate in the program, but remaining CDCI credit unions are generally weaker than nonparticipating CDFI credit unions. 
The number of CDCI institutions with missed quarterly dividend or interest payments has been generally low, representing, on average, about 4 percent of all remaining institutions over the life of the program. The percentage of remaining institutions with missed payments has ranged from about 1 percent to 7 percent (one to six institutions). Since November 2010 (the first quarter that dividend and interest payments were due), nine institutions (seven banks and two credit unions) have missed at least one quarterly payment. Of those institutions, three banks have missed at least eight payments, the threshold at which Treasury has the right to elect directors to their boards. As of April 30, 2014, Treasury had not appointed directors to the boards of any CDCI banks, but it had sent an observer to one bank and requested to send an observer to a second bank. Two of the three banks with eight or more missed payments were up to date on their payments as of April 30, 2014, while the third was not. Institutions can elect whether to pay dividends and interest and may choose not to pay for a variety of reasons, including decisions that they or their federal and state regulators make to conserve cash and maintain (or increase) capital levels. However, investors may view a company's ability to pay dividends as an indicator of its financial strength and may see failure to pay full dividends as a sign of financial weakness. Very few of the remaining CDCI institutions were included on FDIC's or NCUA's most recent lists of "problem" or "troubled" banks or credit unions. The designation of these institutions as "problem" or "troubled" is, in large part, derived from the Uniform Financial Institutions Rating System, commonly known as CAMELS. In effect, these lists designate institutions with weaknesses that threaten their continued financial viability. Federal and state regulators generally do not allow institutions on these lists to make dividend payments in an effort to preserve their capital and promote safety and soundness. The financial health of remaining banks and credit unions differed from that of institutions that did not participate in CDCI. We examined various measures that described banks' and credit unions' capital adequacy, profitability, asset quality, and ability to cover losses. We analyzed quarterly regulatory reports from December 31, 2013 (the most recent reporting period for which data were available for both banks and credit unions) on the 68 institutions remaining in the program as of April 30, 2014. We then compared the data to information on nonparticipating CDFI-certified banks and credit unions. On several measures of financial strength, remaining CDCI banks tended to be financially stronger than certified CDFI banks that did not participate in CDCI (non-CDCI banks) (see table 2). We found that the median asset size of CDCI banks as of December 31, 2013, was nearly three times that of non-CDCI banks ($387.4 million for CDCI banks compared with $143.3 million for non-CDCI banks). While only 1 of the 29 remaining CDCI banks (about 3 percent) had assets of less than $100 million, 32 of the 97 non-CDCI banks (33 percent) had assets in this range. Remaining CDCI banks also had lower median Texas Ratios than non-CDCI banks. The Texas Ratio helps determine a bank's likelihood of failure by comparing its troubled loans to its capital. The higher the ratio, the more likely the institution is to fail.
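Because the report relies on the Texas Ratio at several points, a worked example may be useful. The following is a minimal sketch, in Python, of one common formulation of the ratio (troubled assets divided by the sum of tangible common equity and loan loss reserves); the exact inputs behind the report's figures are not specified here, so the parameter names and sample values are illustrative assumptions.

    # Minimal sketch of one common formulation of the Texas Ratio; the
    # parameter names and the sample values are illustrative assumptions.
    def texas_ratio(nonperforming_assets, loans_90_days_past_due,
                    tangible_common_equity, loan_loss_reserves):
        """Troubled loans relative to capital, in percent; higher values
        suggest a greater likelihood of failure."""
        troubled = nonperforming_assets + loans_90_days_past_due
        capital = tangible_common_equity + loan_loss_reserves
        return 100.0 * troubled / capital

    # A hypothetical bank with $8.0 million in troubled assets against
    # $27.58 million in capital and reserves has a ratio near the CDCI
    # median cited below.
    print(round(texas_ratio(6.0, 2.0, 24.0, 3.58), 2))  # 29.01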
As of December 31, 2013, remaining CDCI banks had a median Texas Ratio of 29.01, compared with 35.05 for non-CDCI banks. Remaining CDCI banks performed better than non-CDCI banks with regard to profitability and asset quality. Specifically, remaining CDCI banks had a better median return on average assets, a measure of profitability relative to total assets and management's efficiency at using its assets to generate earnings. As of December 31, 2013, remaining CDCI banks had a median return on average assets of 0.73, compared with 0.40 for non-CDCI banks. While 1 of the 29 remaining CDCI banks (about 3 percent) had a negative return on average assets, 32 of the 97 non-CDCI banks (33 percent) had negative values for this ratio, indicating that the non-CDCI banks had more challenges with regard to managing their assets. In addition, remaining CDCI banks generally held better performing assets than non-CDCI banks. For example, remaining CDCI banks had a lower median percentage of noncurrent loans than non-CDCI banks. As of December 31, 2013, a median of 2.35 percent of loans for remaining CDCI banks were not current, compared with 3.35 percent for non-CDCI banks. However, remaining CDCI banks had a slightly higher median ratio of net charge-offs to average loans than non-CDCI banks (0.35 compared with 0.30). Remaining CDCI banks also held more regulatory capital as a percentage of assets than non-CDCI banks. Regulators require minimum amounts of capital to lessen an institution's risk of default and improve its ability to sustain operating losses. Regulatory capital can be measured in several ways, but we focused on tier 1 capital, as measured by the tier 1 risk-based capital ratio and the common equity tier 1 ratio, because it is the most stable form of regulatory capital. The tier 1 risk-based capital ratio shows tier 1 capital as a share of risk-weighted assets; the common equity tier 1 risk-based capital ratio shows tier 1 common equity as a share of total risk-weighted assets. Tier 1 common equity generally does not include CDCI or other TARP funds. Both ratios were higher for the remaining CDCI banks than for non-CDCI banks, suggesting that the CDCI banks were in a somewhat better position to withstand financial losses. As of December 31, 2013, remaining CDCI banks had a median tier 1 capital ratio of 15.88, compared with 13.67 for the non-CDCI banks. The median common equity tier 1 ratio for the remaining CDCI banks was 14.44, compared with 13.50 for non-CDCI banks. Finally, remaining CDCI banks had higher reserves for covering losses compared with non-CDCI banks. Higher reserves suggest that the banks are better positioned to withstand losses. As of December 31, 2013, the median ratio of reserves to nonperforming loans was substantially higher for remaining CDCI banks than for non-CDCI banks (61.30 compared with 41.68). However, similar percentages of banks in each group (about 14 percent of CDCI banks and about 18 percent of non-CDCI banks) had ratios of reserves to nonperforming loans exceeding 100.00. In other words, these banks had at least one dollar of reserves for every potential dollar of losses on nonperforming loans. CDCI credit unions had lower assets and were generally weaker than nonparticipating certified CDFI credit unions (non-CDCI credit unions) on several measures that regulators commonly use to assess the financial health of these institutions.
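To make the capital and reserve measures above concrete, here is a minimal sketch of the underlying arithmetic; the function names are ours, and the example values simply echo the medians reported above against a hypothetical base of 100.

    # Minimal sketch of the capital and reserve measures discussed above;
    # all inputs are hypothetical, and only the arithmetic reflects the
    # report's definitions.
    def tier1_capital_ratio(tier1_capital, risk_weighted_assets):
        # Tier 1 capital as a share of risk-weighted assets, in percent.
        return 100.0 * tier1_capital / risk_weighted_assets

    def common_equity_tier1_ratio(tier1_common_equity, risk_weighted_assets):
        # Tier 1 common equity (which generally excludes CDCI or other
        # TARP funds) as a share of risk-weighted assets, in percent.
        return 100.0 * tier1_common_equity / risk_weighted_assets

    def reserve_coverage(loan_loss_reserves, nonperforming_loans):
        # Values above 100 mean at least one dollar of reserves for every
        # potential dollar of losses on nonperforming loans.
        return 100.0 * loan_loss_reserves / nonperforming_loans

    print(tier1_capital_ratio(15.88, 100.0))        # 15.88
    print(common_equity_tier1_ratio(14.44, 100.0))  # 14.44
    print(reserve_coverage(61.30, 100.0))           # 61.3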
Specifically, as of December 31, 2013, CDCI credit unions had a median asset size of $19.3 million, compared with $29.9 million for non-CDCI credit unions (see table 3). While the largest CDCI credit union had assets of less than $300 million, 6 of the 126 non-CDCI credit unions had assets exceeding $1.0 billion. Remaining CDCI credit unions were less profitable than non-CDCI credit unions and held slightly more poorly performing assets. As of December 31, 2013, remaining CDCI credit unions had a median return on average assets of 0.27, compared with 0.53 for non-CDCI credit unions. A greater percentage of the remaining CDCI credit unions than non-CDCI credit unions had a negative return on average assets (about 26 percent, or 10 of 39 CDCI credit unions, compared with about 19 percent, or 24 of 126 non-CDCI credit unions). A negative return on average assets means that the credit union's earnings did not cover its operating expenses and cost of funds. In addition, remaining CDCI credit unions held more poorly performing assets than non-CDCI credit unions. For example, remaining CDCI credit unions had a median delinquent loan ratio of 1.78, compared with 1.43 for non-CDCI credit unions. However, remaining CDCI credit unions had a slightly lower median ratio of net charge-offs to average loans than non-CDCI credit unions (0.54 compared with 0.61), indicating slightly more effective lending and collection practices. Remaining CDCI credit unions also had less capital as a percentage of total assets than non-CDCI credit unions. Specifically, remaining CDCI credit unions had a lower median net worth ratio—net worth as a percentage of total assets—than non-CDCI credit unions. Net worth mitigates fluctuations in earnings, supports growth, and provides protection against insolvency. As of December 31, 2013, CDCI credit unions had a median net worth ratio of 7.35, compared with 9.98 for non-CDCI credit unions. For purposes of capital adequacy, a net worth of 7 percent or more of total assets is considered well capitalized. Forty-one percent of the remaining credit unions (16 of 39) had net worth ratios less than 7 percent, while only about 10 percent of non-CDCI credit unions (12 of 126) fell below the 7 percent threshold. Finally, CDCI credit unions were at slightly greater risk of experiencing a decline in net worth from delinquent loans than non-CDCI credit unions. For a credit union, declining net worth is similar to a bank's having lower reserves for covering losses. As of December 31, 2013, remaining CDCI credit unions had a median ratio of total delinquent loans to net worth of 9.18, compared with 8.30 for non-CDCI credit unions. We provided a draft of this report to Treasury, FDIC, and NCUA for their review and comment. Treasury provided written comments that we have reprinted in appendix II. In its written comments, Treasury concurred with our findings, noting that the report provides constructive insights into Treasury's efforts to help CDFIs and the communities they serve cope with the effects of the financial crisis. Treasury stated that it continues to explore various exit strategies for its CDCI investments and that any decision would need to balance the mission of CDCI, which focuses on communities and populations that lack access to certain financial services, with the need to protect the taxpayers' investment. NCUA also provided written comments that we have reprinted in appendix III. NCUA agreed with our representation of credit union information.
Treasury, FDIC, and NCUA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Special Inspector General for TARP, interested congressional committees and members, Treasury, FDIC, and NCUA. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact A. Nicole Clowers at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report examines (1) the financial status of the Department of the Treasury's (Treasury) Community Development Capital Initiative (CDCI), including repayments and other proceeds, investments outstanding, and the estimated lifetime cost of the program; (2) factors affecting participants' decisions to remain in or leave the program and Treasury's exit strategy; and (3) the financial condition of institutions remaining in CDCI. To assess the financial status of CDCI, we analyzed data from Treasury. In particular, we used Treasury's April 2014 Monthly Report to Congress and April 2014 Dividends and Interest Report to determine the dollar amounts of principal, dividends, and interest; outstanding investments; the number of remaining and former participants; and the estimated lifetime cost of the program. To examine factors affecting participants' decisions to remain in or leave the program and Treasury's exit strategy, we selected and attempted to contact a nonprobability, judgmental sample of 9 of the 29 banks and 12 of the 40 credit unions remaining in CDCI as of March 31, 2014. To draw the sample, we split the remaining institutions into banks and credit unions and divided each list of institutions into four groups according to their total asset size as of September 30, 2013. We selected the largest three banks and the largest credit union. We then randomly ordered the other institutions based on asset size categories and selected the first two to five institutions in each category, depending on the total number of institutions in that category (this selection approach is sketched following this paragraph). These samples reflected each institution type's proportions in the total list of 69 remaining institutions and covered a variety of geographic areas. We were able to conduct interviews by phone with managers at 8 of the 9 banks and 9 of the 12 credit unions. These interviews consisted of a brief set of questions about each institution's negative and positive experiences with CDCI, their plans for repayment, and factors affecting those plans. The results of our interviews cannot be generalized to all remaining CDCI banks and credit unions. We also interviewed officials from the National Credit Union Administration (NCUA), associations that represent banks and credit unions that received CDCI capital, and an organization that invests in some of the CDCI banks, to obtain their observations on the same topics. Specifically, we met with representatives of the Community Development Bankers Association, the National Bankers Association, the National Federation of Community Development Credit Unions, the National Association of Federal Credit Unions, and the National Community Investment Fund.
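The following is a minimal sketch of the size-based selection approach described above, under simplifying assumptions: institutions are ranked by assets, the largest are taken first, and the remainder is split into four asset-size groups that are randomly ordered before the first few institutions are drawn. The group boundaries and per-group counts here are illustrative, not the ones GAO used.

    import random

    # Minimal sketch of the judgmental, size-based selection described
    # above; group boundaries and per-group counts are assumptions.
    def select_judgmental_sample(institutions, take_largest, per_group):
        # institutions: list of (name, total_assets) pairs
        ranked = sorted(institutions, key=lambda inst: inst[1], reverse=True)
        chosen = ranked[:take_largest]          # e.g., the three largest banks
        remainder = ranked[take_largest:]
        group_size = max(1, len(remainder) // 4)
        for i in range(4):
            group = remainder[i * group_size:(i + 1) * group_size]
            random.shuffle(group)               # random ordering within each group
            chosen.extend(group[:per_group])    # first two to five institutions
        return chosen

    banks = [("Bank %d" % i, assets) for i, assets in enumerate(range(50, 350, 10))]
    print(len(select_judgmental_sample(banks, take_largest=3, per_group=2)))  # 11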
Finally, we reviewed Treasury reports and public statements and interviewed Treasury officials to obtain information on CDCI and Treasury's exit strategy. To assess the financial condition of institutions that received investments under CDCI, we used data from Treasury's Dividends and Interest reports from November 2010 through February 2014 (the most recent month in which quarterly payments were due) to determine the extent to which participants had missed payments throughout the life of the program. We also obtained from the Federal Deposit Insurance Corporation (FDIC) the number of remaining CDCI banks (as of Feb. 28, 2014) on its problem bank list. Similarly, we obtained information from NCUA on the number of remaining CDCI credit unions it considered "troubled" as of March 13, 2014. In addition, we used SNL Financial (a private service that disseminates data from quarterly regulatory reports, among other information) to obtain regulatory financial data on the 68 remaining CDCI banks and credit unions and on comparison groups of institutions that were eligible for but did not participate in CDCI. To identify the comparison groups, we used Treasury's CDFI Fund's list of certified CDFIs as of February 28, 2014. This list included 127 banks, thrifts, and depository institution holding companies, as well as 176 credit unions. We chose to limit our comparison groups to certified CDFIs rather than the universe of banks and credit unions because certified CDFIs share a community development mission and generally have smaller asset sizes. SNL Financial had data on all of the 127 CDFI banks, thrifts, and depository institution holding companies ("banks") and 171 of the 176 CDFI credit unions. We divided the bank and credit union lists into three groups each: (1) those remaining in CDCI, (2) those that had exited CDCI, and (3) those that had never participated in CDCI. We defined remaining CDCI institutions as those with their full or partial investments outstanding; this group included the 29 banks and 39 credit unions. For both the bank and credit union analyses, we excluded the institutions that had exited CDCI because of the small size of these groups of institutions. For example, six banks had exited CDCI as of April 30, 2014, but SNL Financial had current data on only three of them because the others had been acquired. Similarly, SNL Financial had current data on only seven of the nine credit unions that had exited as of April 30, 2014. We determined that the median values for these small groups would not provide a meaningful illustration of the financial condition of exited institutions. The final comparison groups included 97 non-CDCI banks and 126 non-CDCI credit unions. We conducted separate analyses for banks and credit unions because the two types of institutions file different regulatory reports and have different financial indicators. For our bank analysis, we used financial measures that were similar to those we had identified in our previous reporting on banks participating in Treasury's Capital Purchase Program (CPP). These measures help demonstrate an institution's financial health as it relates to a number of categories, including profitability, asset quality, capital adequacy, and loss coverage. For our credit union analysis, we obtained information from NCUA on the measures it typically uses to assess credit unions' financial health.
We selected at least one measure in each of the four categories (profitability, asset quality, capital adequacy, and loss coverage) we used for the bank analysis. We chose to present median values because medians are less affected by extreme values than averages. We determined that the financial information used in this report, including the CDCI program data from Treasury and the financial data on banks and credit unions from SNL Financial, was sufficiently reliable to assess the status and condition of CDCI and institutions that participated in the program. For the data from Treasury, we tested Treasury's internal controls over financial reporting as they related to our annual audit of the Troubled Asset Relief Program (TARP) financial statements and found the information to be sufficiently reliable based on the results of our audits of fiscal years 2010 through 2013 financial statements. We assessed the reliability of the SNL Financial data by performing manual testing of required data elements and reviewing existing information about the data and the system that produced them. In addition, we have assessed the reliability of SNL Financial data as part of previous studies and found the data to be reliable for the purposes of our review. We verified that no changes had been made that would affect the data's reliability. We conducted this performance audit from February 2014 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kay Kuhlman (Assistant Director), Lisa Reynolds (Analyst-in-Charge), Emily Chalmers, William Chatlos, Chris Forys, Marc Molino, and Patricia Moye made significant contributions to this report.
Treasury established CDCI under the Troubled Asset Relief Program (TARP) in February 2010 to help banks and credit unions certified as Community Development Financial Institutions (CDFI) maintain their services to underserved communities in the aftermath of the 2007-2009 financial crisis. Treasury invested a total of $570 million in 84 eligible institutions by September 2010. TARP's authorizing legislation mandates that GAO report every 60 days on TARP activities, including CDCI. This report examines (1) the financial status of CDCI; (2) factors affecting participants' decisions to remain in or leave the program and Treasury's exit strategy; and (3) the financial condition of institutions remaining in the program. To assess the program's status, GAO reviewed Treasury reports on CDCI. GAO also used regulatory financial data to compare the financial condition of banks and credit unions remaining in CDCI with those that would have been eligible but did not participate. In addition, GAO interviewed staff from Treasury and NCUA, associations representing CDCI participants, and representatives of a nonprobability sample of CDFI banks and credit unions that participated in CDCI. GAO selected banks and credit unions based on asset size and geography. GAO is making no recommendations in this report. In comments on a draft of this report, Treasury and NCUA concurred with GAO's findings. As of April 30, 2014, 82 percent of the Department of the Treasury's (Treasury) $570 million total investment in eligible banks and credit unions through the Community Development Capital Initiative (CDCI) was still outstanding. Sixteen institutions have exited the program, leaving 29 banks and 39 credit unions in the program. Treasury had received repayments and investment income of $134.3 million but had also recorded a $6.7 million write-off based on the failure of one participant's subsidiary. As of February 28, 2014 (most recent information available), Treasury estimated a lifetime cost of $80 million for CDCI, down from an estimated cost of $290 million in November 2010. Representatives of participant banks and credit unions GAO interviewed said that access—or lack thereof—to similar forms of capital was a key factor in institutions' willingness or ability to exit the program. They noted that CDCI continued to be one of the few sources of capital for small banks and credit unions. In addition, they listed program terms, such as the scheduled increase in the dividend or interest rate from 2 percent to 9 percent in 2018, as considerations. Treasury has not yet announced an exit strategy for CDCI but said it would consider the interests of the financial institutions and taxpayers as it considers options for winding down the program. For example, Treasury officials noted that any strategy would need to take into account how the winding down of the program may affect the community development mission of the remaining participants. The financial health of the remaining CDCI banks and credit unions is mixed. For example, few CDCI institutions have missed their dividend or interest payments to Treasury since 2010. The Federal Deposit Insurance Corporation and National Credit Union Administration (NCUA) had identified very few of the remaining banks and credit unions as exhibiting serious financial, operational, or managerial weaknesses as of March 2014.
GAO's analysis of financial data found that banks remaining in CDCI tended to be more profitable, hold stronger assets, and have higher capital and reserve levels than non-CDCI banks that were eligible for the program but did not participate. However, remaining credit unions were less profitable, held slightly more poorly performing assets, and had lower capital levels and less protection against losses than non-CDCI credit unions that were eligible for the program but did not participate.
To obtain VHA health care, most veterans submit an application by mail or online to VHA's Health Eligibility Center (HEC) in Atlanta or in person at a local VHA medical center. Generally, the nature and extent of an individual veteran's service-connected medical conditions are established through the Veterans Benefits Administration (VBA). HEC processes applications and assigns veterans to one of eight priority groups based on their service-connected disabilities; special treatment authorities, such as exposure to Agent Orange or ionizing radiation; and income level. Whether VHA charges a veteran a copayment for medical services it provides is determined, in part, by the veteran's priority group. For example, veterans in priority group 1 are not required to pay any copayments, and veterans in priority groups 7 and 8 are generally required to pay copayments for all types of medical services. See appendix II for additional details on the enrollment and eligibility process, including priority groups and requirements for copayments. When VHA is notified by VBA of a change in a veteran's service-connected conditions or disability rating, VHA is responsible for reevaluating the veteran's priority group status and reviewing the veteran's account to determine whether any copayment charges that were assessed after the effective date of the VBA award should be canceled or refunded, if applicable, or whether any copayments should be charged. Veterans' health records are stored in the Computerized Patient Record System (CPRS) application of the Veterans Health Information Systems and Technology Architecture (VistA) system. CPRS includes information on veterans' rated service-connected conditions and special treatment authorities. When a veteran receives a VHA-provided medical service, the provider identifies in CPRS whether the service provided was related to the veteran's service-connected conditions or provided under a special treatment authority. When medical services provided to a veteran are not otherwise precluded from copayment billing, VistA automatically establishes a copayment charge to the veteran's account. VistA prevents copayment charges to a veteran when
• a provider indicates in CPRS that the medical service provided was related to a veteran's service-connected conditions or special authority;
• the medical service provided is one that is exempt from copayments for all veterans, such as preventive screenings, immunizations, and some laboratory services; or
• a veteran receives more than one medical service in a single day.
If a veteran has third-party medical insurance, VistA puts the copayment charge placed on the veteran's account on hold for up to 90 days to allow time for VHA to process a claim for reimbursement from the third-party insurer. To the extent that VHA receives third-party reimbursement attributable to the medical service that resulted in a copayment being charged to a veteran's account, VHA's policy is to apply the insurance reimbursement to reduce or eliminate the related pending copayment charge. The third-party insurance offset process is a manual process that is to be performed, according to VHA policy, on a daily basis by local medical center staff. See appendix II for additional details on the copayment billing process.
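A minimal sketch of the copayment decision flow just described may help; the field names and return values below are illustrative assumptions and do not reflect VistA's actual data model.

    # Minimal sketch of the copayment decision flow described above; the
    # field names and return values are illustrative assumptions, not
    # VistA's actual data model.
    def copayment_action(service, veteran, already_charged_today):
        if service["related_to_service_connected"] or service["under_special_authority"]:
            return "no charge"                  # provider flagged the service in CPRS
        if service["exempt_for_all_veterans"]:  # e.g., preventive screenings
            return "no charge"
        if already_charged_today:               # more than one service in a single day
            return "no charge"
        if veteran["has_third_party_insurance"]:
            # The charge is held up to 90 days while VHA pursues insurance
            # reimbursement, which reduces or eliminates the copayment.
            return "charge, on hold up to 90 days"
        return "charge"

    example_service = {"related_to_service_connected": False,
                       "under_special_authority": False,
                       "exempt_for_all_veterans": False}
    example_veteran = {"has_third_party_insurance": True}
    print(copayment_action(example_service, example_veteran, already_charged_today=False))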
According to fiscal year 2010 VA information, VHA provided approximately 576 million medical services to over 5.6 million veterans through VHA's 21 health care networks composed of 153 medical centers, 768 outpatient clinics, and 134 nursing homes located in all 50 states, the District of Columbia, and territories including Puerto Rico and the Virgin Islands. When VHA facilities are not capable of furnishing economical hospital care or medical services because of geographic inaccessibility or are not capable of furnishing the care or services required, VHA may authorize and pay a non-VHA provider to provide certain veterans hospital care and medical services. When authorized, VHA identifies these as fee basis services. Table 1 shows the types, number, and percentage of medical services provided by type—outpatient, prescription, inpatient, extended care, and fee basis. VA is also authorized by statute (38 U.S.C. §§ 1710, 1710B, 1722A) to bill certain veterans for medical service copayments and, if applicable, their third-party medical insurance when the medical services VHA provides are not related to the veteran's service-connected medical conditions or associated with special treatment authorities (38 U.S.C. § 1729). These collections supplement VA's appropriations and are used to fund VHA medical services to veterans. When VHA provides medical services that are not associated with a veteran's service-connected conditions or special treatment authorities and the veteran has third-party medical insurance, VHA is authorized by statute (38 U.S.C. § 1729) to pursue insurance reimbursement to the extent available under the veteran's coverage from the veteran's third-party insurance. Veterans who owe copayment charges for medical services for non-service-connected conditions or for conditions not related to special treatment authorities must be allowed to benefit from their third-party insurance to satisfy their VHA obligations. Therefore, VHA is required to apply any insurance reimbursement it receives from a veteran's third-party insurance to the related copayment charge to reduce or eliminate the copayment charge owed by the veteran. VHA billed veterans for over 56.5 million copayment charges totaling over $1 billion in fiscal year 2010. These copayment charges were related to approximately 9.8 percent of the total of approximately 576 million VHA medical services provided. Individual veteran copayment amounts in fiscal year 2010 ranged from a low of $5 for some extended care services to a high of $1,100 for the first 90 days of an inpatient hospital stay. Most billed copayment charges (88 percent) were for prescription medications, for which the copayment charge is generally $8 or $9 for up to a 30-day supply of medication. Table 2 shows the number, amounts, and related percentages associated with the copayment charges billed to veterans in fiscal year 2010 by type of service. Based on our tests of a probability sample of billed copayment charges, we estimate that 96 percent of VHA's fiscal year 2010 copayment charges were accurate and 4 percent were inaccurate or erroneous. We selected a probability sample of 100 fiscal year 2010 copayment charges billed to veterans, which included only prescription and outpatient services, and found 4 erroneous copayment charges, each of which resulted in an overbilling to a veteran.
Based on these test results, we estimate that of VHA's 56.5 million fiscal year 2010 copayment charges, approximately 54.2 million (96 percent) were accurate and approximately 2.3 million (4 percent) were inaccurate. In addition, none of the four copayment errors we found involved underbilling of veterans. In fiscal year 2010, more than 90 percent of VHA's medical services did not result in billed copayment charges. To assess the completeness of the billed copayment charge population and the extent of possible underbilling errors associated with those medical services, we also selected a second probability sample of 100 unbilled medical services to determine whether VHA had correctly concluded that each of the tested medical services should not have been billed. We did so because incorrect "no bill" determinations by VHA would represent underbilling inaccuracies associated with VHA's fiscal year 2010 copayment charges. Our tests of 100 unbilled medical services found that VHA correctly determined that each of the medical services should not have resulted in a veteran copayment charge—a 100 percent accuracy rate for this probability sample. As a result, we are 95 percent confident that for fiscal year 2010, VHA's rate of error in the population of unbilled medical services associated with incorrectly determining that medical services should not have resulted in a copayment charge was between 0 percent and 3 percent. (See table 3.) With respect to erroneous copayment charges and our estimated error rate of 4 percent, we found that each of the four copayment errors resulted in an overbilling to a veteran: the veteran was billed an incorrect amount, the charge should have been reversed or offset, or, if the charge was paid, the amount should have been refunded to the veteran. Also, three of the four errors we found had not been identified by VHA prior to our selection of the copayment charges for testing. For the fourth error, VHA learned about the error when the veteran notified VHA after receiving a monthly statement containing the wrong copayment charge amount. The four overbilling errors we found resulted from three causes. For one of the errors, the copayment should not have been billed to the veteran because, prior to billing the veteran, VHA had received sufficient third-party insurance reimbursement to offset the copayment and eliminate any amount owed by the veteran. Two of the errors involved copayment charges that were paid by the veterans but were not later refunded by VHA, as they should have been following VBA decisions that resulted in a retroactive change to the veterans' priority group status. VBA had informed VHA that it had retroactively awarded each veteran either an additional service-connected condition or an increased disability rating, which led VHA to change the two veterans' priority groups. Because each priority group change was retroactive to an effective date before the medical service that led to the copayment charge we tested, neither veteran remained responsible for the copayment charge he or she had paid. Once VHA revised the veterans' priority group status in response to VBA's retroactive decisions, the veterans were due refunds for the two paid copayment charges we tested. VHA had been aware of VBA's retroactive award decisions for at least 4 months prior to our identification of the copayments as errors; however, VHA had not determined that the veterans were due refunds for the tested copayment charges.
Following our test-related inquiries, VHA officials provided us with documentation that refunds to the veterans had been approved by VHA. The fourth copayment error resulted when VHA incorrectly billed a veteran a copayment amount for a 90-day prescription, instead of the smaller copayment amount that was due for the 30-day prescription supply the veteran received. After the veteran inquired about the erroneous charge, VHA corrected it on the veteran's subsequent monthly statement. Three types of VHA medical services—inpatient, extended care, and fee basis services—together represented less than 1 percent of VHA's fiscal year 2010 copayment charge population. Therefore, to provide some limited insight into copayments related to these infrequently billed services, we tested—as case studies—three small probability samples consisting of 10 copayment charges each for inpatient, extended care, and fee basis services. We found four inaccurate copayment charges—two errors each associated with inpatient and extended care services. Three of the four copayment errors represented overbilling (two extended care services and one inpatient service), and the fourth represented an underbilling error (one inpatient service). In each case, VHA had not identified the copayment errors we found prior to our selecting the copayment charges for testing. Two of the three overbilling errors we found in our case study tests involved VHA's incorrect application of the veterans' third-party insurance reimbursement to offset the veterans' copayment charges. The third overbilling error occurred when VHA billed the veteran for a second copayment charge on the same day, which generally is not permitted under VHA's policy. The one underbilling error occurred when VHA incorrectly billed a veteran a lower copayment amount based on the 2009 copayment rate instead of the higher 2010 copayment rate that was applicable at the time the medical service was provided. Our case study test results are not generalizable to the larger populations of medical services from which the samples were drawn. However, they may provide some limited insight into copayment errors affecting these infrequently billed types of medical services. Copayment errors identified in both the probability sample and the case study test work mostly involved overbillings, including errors resulting from VHA's incorrect handling of third-party insurance reimbursements. The case study errors we found do not affect our estimate of VHA's overall error rate for fiscal year 2010 copayment charges. VHA's processes for determining many of the copayment charges we tested for inpatient, extended care, and fee basis services are more complicated and generally require greater VHA staff involvement and review compared with the processes for determining the copayment charges associated with the more routine outpatient and prescription services. This difference in complexity may help explain why we found four copayment errors—two each in two of the three small probability samples we tested in our case studies. In conducting our tests of the accuracy of VHA's copayment charges and "no bill" decisions, we compared relevant veteran-specific data maintained by VBA and VHA's HEC and local medical centers to determine whether the VHA data were consistent and correct. The relevant data we compared included each veteran's recorded service-connected conditions, degree of disability, and priority group status.
These data are key to correctly determining whether a medical service should be billed to a veteran as a copayment charge and, if so, the correct amount of the copayment. Of the 200 medical services we tested, we found that the key data for 197 veterans were consistently and correctly recorded by VBA and VHA's HEC and local medical centers. We found two instances where specific elements of veteran data were not consistently recorded in VHA records and one instance in which the recorded data were incorrect. After following up with VHA on these instances, VHA corrected the data. While these data recording errors did not cause the particular copayment-related charge or "no bill" decision we tested to be inaccurate, they could have affected other VHA copayment-related decisions for these veterans. In one of the two data inconsistencies we found, HEC's and the local VHA medical center's records showed a combined service-connected condition percentage for the veteran that was lower than the one VBA had established, which resulted in the veteran being assigned to an incorrect priority group. As a result, if the veteran had been provided certain other medical services, the data inconsistency could have caused the veteran to be incorrectly charged a copayment. VHA officials said that the cause of the incorrect data was related to the data transfer from VBA to VHA's HEC and local medical centers. According to VHA, the data transfer issue and the incorrect data have since been corrected. In the other data inconsistency instance, the disability ratings recorded in HEC's and the medical center's records were inconsistent with each other, resulting in the medical center having the veteran in an incorrect priority group. According to VHA, the data error was due to problems during registration at the medical center, which have since been resolved. The third data error involved a local medical center's records having an incorrect priority group for a veteran. The medical center had not received the information needed to update the veteran's financial assessment (also known as a means test), which was necessary to keep the veteran in a priority group that would have made him exempt from paying certain copayments. After our follow-up inquiries, VHA confirmed that the veteran's recorded priority group was incorrect at the time the medical service was provided. The center has since received the information necessary to update the financial assessment, and the veteran's recorded priority group is now correct. While various activities performed by VHA staff involve examining or reviewing the accuracy of some individual veteran copayment charges, we found that those activities do not provide VHA with systematic VHA-wide information on the accuracy of copayment charges needed to effectively monitor—over time—the rates of and causes for copayment errors. We also found that VHA has not established a performance measure or goal for the level of accuracy it wants to achieve for the copayment charges it bills to veterans. As a result, it was not clear how the copayment charge error rates we observed in our probability samples would compare to rates of error VHA would consider acceptable or whether corrective action needs to be taken to reduce the error rates to lower levels. In addition, without procedures to periodically assess the accuracy and completeness of its copayment charges, VHA does not have the information needed to determine whether changes in its accuracy rates are occurring over time.
In reviewing VHA's copayment billing process and the extent to which VHA systematically monitors its copayment charges for accuracy, we identified various activities that generally involved reviewing or checking the accuracy of some individual copayment charges; however, those activities are performed for reasons other than a systematic VHA-wide assessment of the accuracy of billed copayment charges and do not provide sufficient information for systemwide monitoring.
• Responding to veteran inquiries. VHA responds to veteran-related questions or inquiries concerning specific copayment charges. In doing so, VHA may evaluate some individual copayment charges and determine whether they were accurate. However, VHA does not systematically track and analyze the results of these individual reviews, including whether the copayment charges were accurate or inaccurate and, if applicable, the cause of any inaccuracies.
• Revenue reviews. Staff from VA's Management Quality Assurance Service's (MQAS) Health Care Financial Assurance Division may evaluate specific veteran copayment bills on a limited, ad hoc basis as part of the recurring reviews of VA revenue activities at selected individual medical centers. During these reviews, MQAS officials said they devote most of their resources to evaluating third-party insurance collections, as they make up the majority of the Medical Care Collections Fund (MCCF). These revenue reviews are focused on third-party insurance recoveries and in only some instances may involve reviewing the accuracy of individual veteran copayment charges.
• Local compliance programs. Individual medical centers and Consolidated Patient Account Centers (CPAC) have decentralized compliance programs that include varied processes and procedures related to reviewing some individual copayment charges. The scope and results of these compliance reviews may involve reviewing copayment-related charges but do not routinely include a systematic assessment of a probability sample of copayment charge accuracy. In addition, the results of any reviews of copayment charge accuracy at medical centers and CPAC locations are not consolidated and reported to VHA management.
• Targeted reviews of certain copayment charges. VHA instituted a policy in October 2006, in response to a VA Inspector General report, requiring VHA's Compliance and Business Integrity (CBI) Office to identify delinquent copayment debts for certain veterans whose accounts were being referred to debt collection. VHA facilities were required to review the accounts to help ensure that the referrals were not based on inaccurate copayment charges. Initially, the policy required the VHA facilities to report to the CBI Office the results of their targeted reviews until the error rate in the applicable copayment charges went below 10 percent for two consecutive quarters. As a result of a sustained decrease in the related billing error rate, in October 2009, the CBI Office stopped collecting national monitoring results from VHA facilities, and in March 2010, VHA rescinded the requirements for facilities to report the results of their quarterly reviews to the CBI Office. However, VHA facilities are still responsible for conducting the reviews.
As noted, these activities are conducted for specific reasons and are not intended to provide VHA with systematic VHA-wide information on the accuracy and completeness of copayment charges needed to effectively monitor—over time—the rates of and causes for copayment errors.
Our tests of a probability sample of VHA copayment charges found copayment errors that we estimate affected 4 percent, or approximately 2.3 million, of VHA's 56.5 million fiscal year 2010 copayment charges. However, because VHA has not established acceptable or tolerable error rates for copayment charges, the extent to which the error rates we observed would compare to levels of performance that VHA would consider acceptable is unclear. We believe that it is important for VHA to establish a performance measure for the copayment accuracy rate it wants to achieve in billing copayment charges to veterans and, once it is established, to periodically assess—on a systematic basis—the accuracy and completeness of its copayment charges. With such information, VHA would be able to make informed decisions concerning the rates and causes of erroneous copayment charges, including whether any actions are needed to lower its overall error rate. Such periodic assessments could be integrated into VHA's existing quality assurance monitoring efforts and provide meaningful management information on various aspects of its copayment billing systems and processes, including whether key veteran data were consistently and correctly recorded in VHA records and systems. Further, having meaningful performance information regarding copayment accuracy to provide to stakeholders, including veterans organizations and Congress, could assist VA in responding to any questions concerning the accuracy and completeness of copayment charges. To provide VHA with the information needed to adequately monitor the accuracy of copayment charges VHA-wide and to assess and respond to the causes of copayment errors, the Secretary of Veterans Affairs should direct VHA to take the following two actions:
• establish an accuracy performance measure or goal for copayment charges billed to veterans and
• establish and implement a formal process for periodically assessing—VHA-wide—the accuracy of veteran copayment charges and taking corrective actions as necessary.
In its written comments, VA generally agreed with our conclusions and agreed with our recommendations. It also provided an overview of planned actions, starting in fiscal year 2012, including plans to establish an initial national performance measure for copayment charge accuracy and implement a periodic assessment of billed copayment accuracy. As VA implements these plans, it will be important for these actions to provide the information needed to monitor VHA-wide copayment accuracy and completeness and to assess and respond to the causes of copayment errors. Such plans, if fully and effectively implemented in accordance with our conclusions and recommendations, should respond to the conditions we found. We also incorporated VA's technical comments where appropriate. VA's comments are reprinted in appendix III. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at no charge on the GAO website at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-9095 or raglands@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. Pursuant to a request from members of the Subcommittee on Health, House Committee on Veterans’ Affairs, we reviewed the Veterans Health Administration’s (VHA) copayment billing practices to determine (1) the accuracy rate for VHA copayment charges, including causes for any under- and overbilling errors, and (2) whether VHA had systems and processes in place to adequately monitor the accuracy of copayment charges billed to veterans. To determine the accuracy rate and any causes for under- and overbilling errors, we used as our criteria applicable law and VHA policy. To gain an understanding of VHA’s policies, procedures, systems, and processes related to copayment billing practices, we performed walk-throughs of applicable processes with appropriate VHA staff at a medical center. We reviewed and discussed with agency officials and staff applicable processes related to VHA’s copayment billing practices. We also interviewed Veterans Benefits Administration (VBA) officials and staff about VBA decisions related to veterans’ service-connected conditions and disability ratings and the transfer of that information to VHA. In addition, to assess the reliability of data and information used in this report, we reviewed Department of Veterans Affairs’ (VA) procedures for ensuring the reliability of data and information generated by key VHA systems used in the copayment billing process, including VHA’s Veterans Health Information Systems and Technology Architecture (VistA), Performance and Operations Web-Enabled Reports (POWER), and Prescription Benefits Management systems. We determined that the data and information generated from key VHA systems used in the copayment billing process were sufficiently reliable for the purposes of our testing. To determine the accuracy of the copayment amount billed to veterans, we selected a simple probability sample of 100 copayment charges from the population of approximately 56.5 million fiscal year 2010 copayment charges in POWER. This sample was designed to estimate the error rates in the population, if errors were found in the sample, or to conclude with 95 percent confidence that the population error rate is less than 3 percent, if no errors were found in the sample. The population consisted of five broad types: (1) prescription, (2) outpatient, (3) inpatient, (4) extended care, and (5) fee basis (see table 4). To assess the reliability of population data used to select the sample, we (1) reviewed related documentation, (2) reviewed internal and external reports related to the systems, and (3) interviewed knowledgeable VHA officials. We also, as part of our testing of unbilled medical services, determined that for the purposes of our testing, the population of fiscal year 2010 copayment charges was materially complete. Based on our data reliability analysis, we determined that the population data, obtained from POWER, were sufficiently reliable for the purposes of our testing. For each sampled item, we obtained applicable information and supporting documentation from VHA and VBA and determined whether a veteran’s copayment charge was accurate in accordance with VHA’s established policies, procedures, systems, and guidance. 
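The sample-design statement above (a 100-item sample supporting a "less than 3 percent" conclusion at 95 percent confidence when no errors are found) follows from the exact binomial bound; a minimal sketch of the arithmetic, assuming a simple random sample:

    # Minimal sketch, assuming a simple random sample: with zero errors
    # observed in a sample of n, the one-sided exact (Clopper-Pearson)
    # 95 percent upper bound on the population error rate p solves
    # (1 - p) ** n = alpha.
    n, alpha = 100, 0.05
    upper_bound = 1 - alpha ** (1 / n)
    print(round(upper_bound, 4))  # 0.0295, i.e., just under 3 percent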
For each inaccurate copayment charge, we determined the cause and provided VHA with an explanation of the error, the related cause, and any other relevant information. Table 5 contains a detailed breakout of the causes of the errors in copayment charges. There were limitations because of the nature of the testing we performed. We did not
• test the medical determinations (i.e., diagnosis and whether the service was related to a veteran's service-connected conditions or special treatment authority) of the medical service provider (including the pharmacist, doctor, nurse, or other medical staff);
• test the determinations made as a result of the adjudication process at VBA to determine the veteran's service-connected conditions and related disability rating percentages;
• test the determination made by VHA on whether to bill a third-party insurer for the medical service or the third-party insurer's determination to pay, including the amount of that payment; and
• confirm through outside sources (including contacting applicable veterans) the accuracy or completeness of veteran-specific information relied on by VHA as part of its decision to bill tested copayment charges.
In table 6, we present our statistical results as (1) our projection of the estimated error overall and (2) the 95 percent, two-sided confidence intervals for the projections. To (1) assess the completeness of the population of fiscal year 2010 copayment charges billed to veterans and (2) determine the accuracy of VHA's decisions not to bill veterans copayments for medical services provided in fiscal year 2010, we selected for review a probability sample of 100 unbilled medical services from the population of VHA's approximately 576 million fiscal year 2010 medical services. Our sampling frame for this sample was developed by combining databases from three VHA data warehouses (the National Patient Care database, Purchased Care Data warehouse, and Pharmacy Data warehouse), which totaled approximately 576 million medical services provided in fiscal year 2010. VHA's databases do not separately identify or track unbilled services, so this set of databases contained both billed and unbilled fiscal year 2010 medical services. The population of medical services consisted of five broad types: (1) prescription, (2) outpatient, (3) inpatient, (4) extended care, and (5) fee basis (see table 7). Because the VHA-provided population of all medical services from which we selected our sample included services that resulted in copayment charges, we initially selected a larger probability sample of 150 medical services. After checking billing records, we excluded any sampled medical services that resulted in a copayment charge. From the remaining medical services, we selected the first 100 as our probability sample of unbilled medical services. This sample was designed to test for a 3 percent tolerable error rate so that if we found no billing errors in the sample, we would be able to conclude with 95 percent confidence that (1) the population of fiscal year 2010 unbilled medical services did not include a material number (more than 3 percent) of medical services that should have been billed as copayment charges and (2) the population of billed fiscal year 2010 copayment charges was materially complete for the purposes of our tests. If errors were found, this sample could be used to estimate the rate of copayment underbilling errors associated with incorrect VHA determinations not to bill medical services in this population.
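Where errors are found, as in the copayment sample, a two-sided exact interval of the kind reported in the tables can be computed from the beta distribution. This sketch uses SciPy; the counts are illustrative and do not reproduce the report's table values.

    from scipy.stats import beta

    # Minimal sketch of a two-sided exact (Clopper-Pearson) 95 percent
    # confidence interval for a sample proportion; the counts below are
    # illustrative and do not reproduce the report's table values.
    def clopper_pearson(errors, n, alpha=0.05):
        lower = 0.0 if errors == 0 else beta.ppf(alpha / 2, errors, n - errors + 1)
        upper = 1.0 if errors == n else beta.ppf(1 - alpha / 2, errors + 1, n - errors)
        return lower, upper

    low, high = clopper_pearson(errors=4, n=100)
    print("%.3f to %.3f" % (low, high))  # roughly 0.011 to 0.099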
To assess the reliability of population data used to select this sample for testing, we (1) reviewed related documentation, (2) reviewed any internal or external reports related to the systems, and (3) interviewed knowledgeable VHA officials. Based on our data reliability analysis, we determined that the population data were sufficiently reliable for the purposes of our testing. For each of the unbilled medical services we tested, we obtained applicable information and supporting documentation from VHA and VBA to determine whether VHA correctly determined that the 100 tested fiscal year 2010 medical services should not have resulted in copayment charges, in accordance with VHA's established policies, procedures, systems, and guidance. There were limitations because of the nature of the testing we performed. We did not
• test the medical determinations (i.e., diagnosis and whether the service was related to a veteran's service-connected conditions or special treatment authority) of the medical service provider (including the pharmacist, doctor, nurse, or other medical staff);
• test the determinations made as a result of the adjudication process at VBA to determine the veteran's service-connected conditions and related disability rating percentages;
• test VHA's ability to record all of the medical services in the medical center–level VistA system, or VHA's ability to transfer all the medical services to the appropriate data warehouse; and
• confirm through outside sources (including contacting applicable veterans) the accuracy or completeness of veteran-specific information relied on by VHA as part of its decision to not bill for tested medical services.
In table 8, we present our statistical results as (1) our projection of the estimated error overall and (2) the 95 percent, two-sided confidence intervals for the projections. In addition to our statistical samples of copayment charges and unbilled medical services, we tested—as case studies—three small, nongeneralizable samples consisting of 10 copayment charges each from inpatient, extended care, and fee basis services. These three types of medical services combined represented less than 1 percent of the VHA-wide fiscal year 2010 copayment population. Results from our nongeneralizable case study samples cannot be used to make inferences about any population; consequently, results obtained from these cases are specific to the particular cases selected. We conducted this testing to provide limited insight into possible errors in copayments billed for these types of medical services. For each case study, we obtained applicable information and supporting documentation from VHA and VBA and determined whether a veteran's copayment charge was accurate in accordance with VHA's established policies, procedures, systems, and guidance. For each inaccurate case study copayment charge, we determined the cause and provided VHA with an explanation of the error, the related cause, and any other relevant information. Table 9 contains a breakout of the results of the testing of the case studies. To determine whether VHA had systems and processes in place to adequately monitor the accuracy of copayment charges, we identified relevant policies, procedures, systems, practices, and related documentation, whether at a national, regional, or local level, related to VHA's efforts to monitor copayment accuracy.
We reviewed the documentation provided to determine whether it contributed to VHA periodically assessing the accuracy of copayment charges and taking appropriate action to address the underlying causes when errors or inaccuracies are found. We also interviewed knowledgeable staff and officials from VHA and VA’s Office of Inspector General. We conducted this performance audit from February 2010 through August 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

As part of our review of the accuracy of Veterans Health Administration (VHA) copayment charges, we gained an understanding of key aspects of veteran eligibility and enrollment for veteran health services, veteran medical services and related copayment charges, and copayment billing and adjustments. Veterans are eligible for Department of Veterans Affairs (VA) health care benefits based on their period of military service and their separation from that service under any condition other than a dishonorable separation. To obtain VHA medical services, most veterans must take action to enroll in the VHA health care system. To initiate their enrollment, veterans submit a completed enrollment application (VA Form 10-10EZ) either by mail or online to VHA’s Health Eligibility Center (HEC) in Atlanta for review and processing, or veterans may visit a local VHA medical center or facility where they can receive assistance in completing the enrollment form. HEC establishes a veteran’s enrollment status (priority group), which is primarily affected by decisions made by the Veterans Benefits Administration (VBA). VBA, which establishes and administers a variety of nonhealth benefits and services for veterans, is responsible for determining a veteran’s service-connected conditions and, to the extent applicable, a veteran’s disability rating. In doing so, VBA adjudicates veteran claims by determining whether a veteran’s illness or injury was incurred in or aggravated by the veteran’s military service (i.e., a service-connected condition). Once awarded to a veteran, a service-connected condition is considered a “rated” service-connected condition. Additional service-connected conditions may also result in a change to a veteran’s disability rating. VBA sends the veteran a notification letter informing him or her of the award decision, including the additional service-connected condition, applicable changes in disability rating, and the effective date of the award determination. Information on VBA award decisions is also automatically transmitted to HEC. VBA may grant additional service-connected conditions and change disability ratings retroactively by establishing an effective date that precedes the date VBA makes the determination.

Based on a veteran’s service, including rated service-connected conditions, applicable disability rating, special treatment authorities, and other enrollment information such as the results of a financial assessment (called a means test), HEC assigns veterans to one of eight enrollment priority groups. Special treatment authorities include care provided pursuant to 38 U.S.C. § 1710(e), and implementing regulations at 38 C.F.R.
§§ 17.36(a)(3) and 17.36(b)(6), which authorizes treatment for disorders that may be associated with a Vietnam-era veteran’s exposure to herbicide (including Agent Orange); certain diseases deemed to be related to exposure to radiation; disorders that may be related to service in the Southwest Asia theater of operation during the Persian Gulf War; illnesses that may be related to services in a qualifying combat theater; and disorders that may be related to participation in certain biological and chemical warfare testing, including Project SHAD (Shipboard Hazard and Defense Project). Veterans covered by section 1710(e) are enrolled in priority group 6. Generally speaking, the more service-connected conditions, higher disability rating, and special treatment authorities that apply to a veteran, the less likely the veteran is to be subject to copayment charges. Table 10 shows the eight priority groups and their eligibility factors.

Medical services provided by VHA include inpatient and outpatient services, prescription medication, and extended care services. When VHA facilities are not capable of furnishing economical hospital care or medical services because of geographic inaccessibility or are not capable of furnishing the care or services required, VHA may authorize and pay a non-VHA provider to provide certain veterans hospital care and medical services. When authorized, VHA identifies these as fee basis services. VHA’s clinical and health records system—the Computerized Patient Record System—contains, among other things, information on veterans’ rated service-connected conditions and special treatment authorities. When a veteran receives medical services, the provider indicates in the system whether the service provided was related to a veteran’s service-connected conditions or special authorities, which affects whether a copayment will be charged to the veteran. According to VHA, almost 95 percent of the approximately 576 million medical services provided to veterans in fiscal year 2010 consisted of outpatient services (70.8 percent) and prescription services (23.6 percent). (See table 11.)

Outpatient services. There are three copayment tiers or categories that apply to outpatient services: no copayment, a basic $15 copayment, and a specialty $50 copayment. For example, an outpatient visit for immunizations or preventive screenings is included in the no-copayment tier. A basic (nonspecialty) outpatient service, which includes primary care visits for diagnosis and management of acute and chronic conditions, has a $15 copayment. A specialty outpatient service, which requires a referral and includes cardiology and radiology services such as magnetic resonance imaging, has a $50 copayment. If the medical service, which might otherwise have an applicable copayment, is determined to be related to a veteran’s service-connected condition or special treatment authority, then no copayment charge would be due. Generally, only veterans in priority groups 7 and 8 are charged for applicable outpatient copayments. Further, when a veteran in priority group 7 or 8 receives more than one outpatient service in a single day, only one copayment—the highest applicable amount—is to be charged to the veteran for that day.

Prescription services. Veterans can fill prescriptions for medications at a VHA pharmacy or through the mail. Veterans whose prescriptions require a copayment are charged either $8 (for veterans in priority groups 2 through 6) or $9 (for priority groups 7 and 8) for supplies of 30 days or less.
If authorized, prescriptions may be filled for up to a 90-day supply at a time, with a corresponding copayment charge based on the number of days supplied. Priority group 1 veterans do not pay any prescription copayment charges. Veterans in priority groups 2 through 6 are subject to applicable copayment charges but have an annual cap that limits their total prescription copayment charges to $960 per year. Priority group 7 and 8 veterans are generally subject to applicable prescription copayments but do not have an annual cap.

Inpatient services. Inpatient stay copayment charges are $1,100 for up to the first 90 days of care during a 365-day period and $550 for each additional 90 days. In addition to the inpatient stay copayment charges, patients are also subject to inpatient per diem charges of $10 per day. As with other medical services, no inpatient copayment or per diem will be charged if the stay is related to the veteran’s service-connected conditions or special treatment authority. Generally, only veterans in priority groups 7 and 8 are charged applicable inpatient copayment and per diem charges.

Extended care services. Extended care services generally include both institutional (inpatient) and noninstitutional (outpatient) services. VHA does not charge any copayments for the first 21 days of extended care services in any 12-month period. Extended care copayment charges are capped at a maximum of $97 per day for institutional nursing home or institutional respite care, $5 per day for institutional domiciliary care, and $15 per day for noninstitutional adult day health care and noninstitutional respite care services. No extended care copayments will be charged if the services are related to a veteran’s service-connected conditions or special treatment authorities. Generally, only veterans in priority groups 4 through 8 may be subject to extended care copayment charges.

Fee basis care services. VHA may authorize certain veterans to receive hospital care and medical services from non-VHA providers. When this occurs, VHA refers to these services as fee basis care. Non-VHA providers submit bills to VHA for medical services provided to veterans. Copayment amounts and requirements related to fee basis services are otherwise the same as those for services provided in VHA facilities.

Determining the correct applicable copayment charge depends on many factors, including the underlying medical service provided, a veteran’s applicable service-connected conditions and special treatment authorities, priority group, and established copayment amount. Table 12 provides general information on whether copayment charges may apply to veterans in particular priority groups. While table 12 reflects the general applicability of copayment charges by priority group, some exceptions apply, including the following:
• Former prisoners of war, who make up part of priority group 3, are not subject to any prescription copayment charges.
• Copayment requirements do not apply to priority group 6 veterans if the medical service is related to the priority group 6 placement.
• For priority group 7 veterans, the inpatient stay copayment rate ($1,100) is reduced by 80 percent.
• Veterans may be exempted from copayments based on the results of the financial assessment.
• Veterans who experience temporary financial difficulties may apply to their local VHA facility for hardship waivers to eliminate copayments for a defined short-term period or to have VHA waive a specified amount of outstanding debt incurred for prior medical services.
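To make the interplay of these factors concrete, the following is a minimal sketch of the outpatient copayment decision rule described above (the function and field names are hypothetical; the sketch simplifies the eligibility rules and is not VHA’s actual billing system logic):

```python
OUTPATIENT_COPAY = {"none": 0, "basic": 15, "specialty": 50}

def outpatient_copay_for_day(priority_group, services):
    """Return the single daily outpatient copayment for one veteran.

    services: one dict per visit that day, e.g.
      {"tier": "basic", "service_connected": False}
    Simplified rules: only priority groups 7 and 8 are charged, care related
    to a service-connected condition carries no copayment, and only the
    highest applicable tier is billed for a day with multiple visits.
    """
    if priority_group not in (7, 8):
        return 0
    applicable = [
        OUTPATIENT_COPAY[s["tier"]]
        for s in services
        if not s["service_connected"]
    ]
    return max(applicable, default=0)

# Example: a basic visit plus a specialty referral on the same day
# yields one $50 charge, not $65.
visits = [{"tier": "basic", "service_connected": False},
          {"tier": "specialty", "service_connected": False}]
print(outpatient_copay_for_day(8, visits))  # 50
```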
Generally, when a copayment charge is applicable to a medical service, the billing system determines whether that medical service should result in a copayment amount being charged to a veteran’s account based on information recorded by the service provider and the veteran’s specific enrollment information, including priority group status. The billing system also tracks all prescription copayment charges billed to a veteran at all medical center sites to ensure that the annual maximum prescription billing cap is not exceeded. For fee basis care, local VHA facility staff process the claims that non-VHA providers submit for reimbursement of the cost of medical services provided to veterans outside of VHA medical centers; these staff also manually establish a veteran copayment charge in the billing system if the medical service in question would have resulted in a copayment charge had the service been provided in a VHA facility.

If a veteran has active third-party health insurance, VHA’s policy is to file a claim with the veteran’s third-party insurer seeking reimbursement of costs related to medical services covered by the veteran’s third-party insurance that were not related to a veteran’s service-connected conditions or special treatment authorities. VHA is authorized to pursue reimbursement from third-party insurers regardless of whether the services were provided by VHA or non-VHA providers. Under this policy, VHA is required to apply any related insurance reimbursement received to reduce or eliminate any related pending copayment charges due from the veteran. As a result, if a veteran has third-party insurance and is subject to a copayment charge, the copayment charge is not billed to the veteran on the monthly statement for up to 90 days, to allow time for VHA to receive and apply reimbursement from the veteran’s third-party insurer. If the reimbursement received does not fully cover or offset the veteran’s copayment obligation, the veteran is responsible for any balance. Unless reimbursement received from a veteran’s third-party health insurer is applied to eliminate or reduce the pending copayment charge, the original copayment charge is released after 90 days, and the charge appears on the veteran’s subsequent monthly billing statement. Applicable third-party insurance reimbursement received after the copayment charge is billed to the veteran should still be applied to reduce or eliminate the copayment charge if it is still unpaid, or used to provide a refund of the billed amount if the veteran has paid it. According to VHA procedures, this process, which is known as the third-party insurance offset, is manual and is to be performed on a daily basis by local facility staff after third-party insurance reimbursement is received.

VHA is expected to adjust copayment charges or issue copayment refunds when certain matters related to the billed amount change. When a third-party insurance reimbursement that would fully offset or reduce a billed copayment charge is received, VHA is expected to eliminate or reduce the amount billed to the veteran’s account, and if the amount was previously paid by the veteran, VHA is responsible for initiating a refund to the veteran.
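A rough illustration of this offset-and-refund rule follows (a simplified sketch with hypothetical names, not VHA’s actual system logic):

```python
def settle_copay(copay_due, insurer_paid, veteran_paid):
    """Apply a third-party insurance reimbursement against one copayment.

    Simplified: the reimbursement first reduces or eliminates the pending
    copayment; if the veteran already paid and the reimbursement covers
    some of that amount, the overpayment is refunded.
    """
    offset = min(copay_due, insurer_paid)
    remaining = copay_due - offset               # balance still owed after offset
    refund = max(0.0, veteran_paid - remaining)  # overpayment returned to veteran
    return {"veteran_owes": max(0.0, remaining - veteran_paid),
            "refund_to_veteran": refund}

# A $50 copayment fully offset by insurance after the veteran already paid:
print(settle_copay(copay_due=50.0, insurer_paid=80.0, veteran_paid=50.0))
# {'veteran_owes': 0.0, 'refund_to_veteran': 50.0}
```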
In addition, when VBA notifies VHA of a new retroactively awarded service-connected condition or an increased disability rating for a veteran, VHA staff are to review the veteran’s account to determine whether any previously billed copayment charges for services provided after the effective date of the retroactive VBA award determination should be canceled (if unpaid) or refunded (if paid). In addition to the contact named above, John J. Reilly, Assistant Director; Wilfred Holloway, Assistant Director; Mark Ramage, Assistant Director; Sophie Brown; James Healy; Diane Morris; Quang Nguyen; Gabrielle Perret; Sabrina Rivera; and Matthew Zaun made key contributions to this report.
In fiscal year 2010, the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) billed veterans millions of medical copayment charges totaling more than $1 billion. Witnesses at a 2009 hearing of the Subcommittee on Health, House Committee on Veterans' Affairs, raised concerns about inappropriate copayment charges, including some associated with veterans' service-connected conditions. As a result, members of the Subcommittee asked GAO to review (1) VHA copayment charge accuracy, including error rates and related causes, and (2) VHA efforts to monitor copayment charge accuracy. To assess the accuracy of VHA's billed copayment charges, GAO evaluated samples of fiscal year 2010 billed and unbilled medical services to determine copayment error rates and related causes. GAO also reviewed VHA practices related to monitoring the accuracy of copayment charges.

Of the more than 56 million fiscal year 2010 veteran copayment charges billed by VHA, GAO estimates, based on its test of a probability sample of copayment charges, that 96 percent (or approximately 54.2 million) of the copayment charges were accurate and 4 percent (or approximately 2.3 million) were inaccurate. GAO's tests of a separate probability sample of the approximately 519 million VHA medical services that did not result in copayment charges showed that each of those VHA determinations was accurate. These and other estimated percentages are based on test results of probability samples and are subject to sampling error. Appendix I of this report contains additional information on the samples and the 95 percent confidence intervals for the estimates contained in this report. (1) Since the errors identified in GAO's probability sample all involved copayment overbilling, GAO estimates that 4 percent of the copayment charges involved overbilling of veterans. The errors GAO found were due to various factors, including inadequate review of previously billed copayment charges following retroactive changes in a veteran's service-connected conditions and the incorrect application of related medical reimbursements received from veterans' third-party insurance. (2) In tests GAO performed on another probability sample to identify underbilling errors in the approximately 519 million medical services that did not result in copayment charges, GAO found that VHA correctly determined that each tested service should not have resulted in a copayment charge. As a result, GAO's tests showed that VHA accurately did not bill copayment charges for these services, which made up more than 90 percent of the approximately 576 million medical services provided during fiscal year 2010.

While VHA performed various activities that involved reviewing the accuracy of some individual billed copayment charges, these activities do not constitute a systematic process for providing VHA-wide information on the accuracy and completeness of its copayment charges over time. In addition, GAO found that VHA had not established a performance measure for the accuracy level it wants to achieve in billing copayment charges. Without such a measure, it is not clear how the error rates GAO found would compare to error rates that VHA would consider acceptable, or whether VHA would determine that corrective actions need to be taken to reduce the error rates to lower levels. In addition, without a performance measure and periodic, systemwide information on the accuracy of its copayment charges, VHA cannot monitor changes in error rates and related causes over time.
VHA also does not have meaningful performance information that it can provide to interested stakeholders when questions or concerns are raised concerning the accuracy of VHA's copayment charges billed to veterans. GAO makes two recommendations to the Secretary of Veterans Affairs to (1) establish a copayment accuracy performance measure and (2) establish and implement a formal process for periodically assessing the accuracy of veteran copayment charges VHA-wide. In written comments on a draft of this report, VA agreed with GAO's recommendations.
U.S. copyright law protects books, photographs, videos, movies, sound recordings, software code, and other creative works of expression from unauthorized copying. A copyright gives its owner the exclusive right to reproduce, distribute, perform, display, or license a work, and the exclusive right to produce or license the production of derivative works. Copyright protection attaches as soon as the work is “fixed in a tangible medium of expression,” thus covering both published and unpublished works. However, there are some limits to the protections afforded by copyright law, such as in the use of a copyrighted work for purposes such as criticism, comment, news reporting, teaching, scholarship, or research.

File-sharing software applications work by making selected files on a user’s computer available for downloading by anyone using similar software, which, in turn, gives the user access to selected files on computers of other users on the peer-to-peer network. The growing popularity and proliferation of file-sharing applications such as KaZaA has had a profound effect on the dissemination of copyrighted works, by both the copyright holder and infringers. The use of file sharing has grown steadily over the past few years. For example, by May 2003, KaZaA had become the world’s most downloaded software program of any kind, with more than 230 million downloads. According to the Recording Industry Association of America, the increased use of peer-to-peer networks has contributed to an increase in copyright infringement, with millions of users downloading more than 2.6 billion copyrighted files (mostly sound recordings) each month via various peer-to-peer networks.

The widespread unauthorized distribution of copyrighted material on peer-to-peer systems is a concern not only for copyright owners but also for those who administer the networks on which the file-sharing applications run. Because of their high-bandwidth connections and the concentration of large groups of young, computer-literate users, college and university networks are particularly vulnerable to adverse impacts from the use of file-sharing applications. In 2002, a committee of representatives from education and the entertainment industry—the Joint Committee of Higher Education and Entertainment Communities—was convened to discuss and address matters of mutual concern, including the misuse of university networks for copyright infringement. In addition, the Recording Industry Association of America has conducted searches for copyrighted material being illegally shared on peer-to-peer networks and has sent more than 30,000 notices to colleges and universities regarding files that are being shared on systems connected to university networks.

Congress has moved to address piracy issues that have been raised by developments in computer and Internet technology. With regard to the widespread unauthorized distribution of copyrighted material on peer-to-peer systems, the crime of felony copyright infringement has four essential elements:
1. A copyright exists;
2. The copyright was infringed by the defendant, specifically by reproduction or distribution of the copyrighted work, including by electronic means;
3. The defendant acted “willfully.” Under the law, evidence of reproduction or distribution of a copyrighted work, by itself, is not sufficient to establish willful infringement; and
4. The defendant infringed at least 10 copies of one or more copyrighted works with a total retail value of more than $2,500 within a 180-day period.
In addition to criminal liability, significant civil remedies are available to copyright holders for infringement. Copyright holders are entitled to receive either “actual damages and profits” from an infringer, or they can elect to receive “statutory damages” ranging from $750 to $30,000 for each infringed work, increasing to $150,000 if the copyright holder proves the infringement was willful. In addition, a court can order an injunction against further infringement, the impoundment and disposition of infringing articles, and attorneys’ fees and costs.

Several federal entities are responsible for enforcing the federal statutes pertaining to intellectual property protection and copyright infringement. Table 1 shows these agencies, along with other key organizations involved in efforts to protect intellectual property rights and combat copyright infringement, including illegal file sharing on peer-to-peer networks. The federal law enforcement agencies work with state and local law enforcement agencies, including state police and local district attorneys, in the investigation and prosecution of intellectual property crime. In addition, industry organizations, such as the Recording Industry Association of America, the Business Software Alliance, and the Software and Information Industry Association, provide federal law enforcement organizations with information and documentary evidence in support of federal investigations and prosecutions. (See app. III for a detailed description of federal organizations involved in investigating and prosecuting copyright infringement.)

The college and university officials we interviewed are aware of the use of file-sharing applications on their networks, almost all of them have experienced some problems and increased costs as a result of the use of these applications, and they are taking steps to reduce the use of peer-to-peer file-sharing technology on their networks. All of the college and university officials we interviewed stated that they have implemented technical controls to limit the use of file-sharing technology on their networks and that they have either undertaken or plan to undertake educational and enforcement efforts to limit student copyright infringement. Most of the officials interviewed stated that they felt they had the right tools and knowledge to deal with the use of peer-to-peer file-sharing applications to download or share copyrighted material on university networks, and almost all of the officials stated that they thought the approaches they have used to address the problem have been either somewhat or very successful at controlling the problem.

All of the university officials we interviewed indicated that their colleges or universities routinely monitor their networks, and most of them indicated that the institutions also actively monitored their networks specifically for the use of peer-to-peer file-sharing applications during the 2003 to 2004 academic term. For those colleges and universities that monitored specifically for the use of file-sharing technology (10 of 13 respondents), university officials stated that the amount of bandwidth that appeared to be used by file-sharing applications varied, from as low as 0 to 9 percent to as high as 90 to 100 percent. (See fig. 1.)
While several university officials were unable to estimate the percentage of students using file-sharing applications to download or share music, image, and video files, several estimated that 30 percent or more of students were doing so during the 2003 to 2004 academic term. One official estimated that between 90 and 100 percent of the students at the institution were using file-sharing applications. In addition, all of the college and university officials interviewed indicated that they had received notices from representatives of copyright holders alleging file-sharing copyright violations by students, with more than half of the interview respondents indicating that they had received more than 100 notifications. In most or all of these cases, university officials were able to trace the infringement notification to an individual student. (See fig. 2.)

Overall, most of the college and university officials we interviewed indicated that they had experienced some network performance or security problems as a result of the use of peer-to-peer file-sharing applications on their institutions’ networks. Specifically, two officials interviewed stated that their institution had experienced network performance problems somewhat often as a result of student use of file-sharing applications, and six officials indicated that they had experienced few network performance problems. Further, of the 13 institutions whose officials we interviewed, 9 indicated that they had experienced security problems as a result of file sharing or downloading. For those who indicated that they had experienced problems, the most common types of security incidents reported were the introduction of viruses or malicious code (eight interview respondents) and temporary loss of network resources (five interview respondents). In addition, almost all of the officials that were interviewed stated that their institutions had spent additional funding during the 2003 to 2004 academic year to deal with the effects of the use of peer-to-peer file-sharing applications on their networks, with the median amount of additional spending being between $50,000 and $99,999; two officials stated that their institutions had spent between $250,000 and $749,999. This additional funding was spent on a variety of network infrastructure and operational areas, including bandwidth expansion, bandwidth management software/hardware, system management, and system maintenance. (See fig. 3.)

All of the colleges and universities whose officials we interviewed indicated that they are taking steps to reduce or eliminate the use of peer-to-peer file-sharing technology for copyright infringement on their networks. Specifically, all of the officials interviewed stated that they have implemented technical controls to limit the use of file-sharing technology. These technical controls include (1) limiting access to file-sharing applications, both among internal users of the network and between internal and external users; (2) reducing or limiting the amount of bandwidth available to network users seeking to download or share files; and (3) segregating the portion of the network serving college or university administered housing from the rest of the university network. In addition, all of the officials interviewed stated that they have either undertaken or plan to undertake educational and enforcement efforts to limit student copyright infringement.
All of the officials that were interviewed stated that they have undertaken educational efforts, such as issuing or revising network use policies and student codes of conduct, and 12 of the 13 officials that were interviewed stated that they plan to undertake educational activities regarding intellectual property violations or illegal file sharing of copyrighted materials. (See fig. 4.) Further, all the officials interviewed stated that they have undertaken enforcement efforts to address copyright infringement on peer-to-peer networks. During the 2002 to 2003 academic year, all of the college and university officials interviewed stated that they had either discovered or had been made aware of individuals using file-sharing applications such as KaZaA or peer-to-peer network indexes on their institution’s network. When file downloading was discovered, all the officials stated that enforcement actions were taken against the individuals responsible. These actions included issuing a warning to the user or users, banning them from the network for a period of time, and shaping the bandwidth available for a group of users. (See fig. 5.) Most of the officials interviewed stated that they felt they had the right tools and knowledge to deal with the use of peer-to-peer file-sharing applications to download or share copyrighted material. Further, almost all of the officials stated that they thought the approaches they have used to address the problem have been either somewhat or very successful at controlling the use of peer-to-peer applications for downloading and sharing copyrighted materials.

Federal law enforcement officials told us that they have been taking actions to investigate and prosecute organizations involved in significant copyright infringement, such as the warez groups—loosely affiliated networks of criminal groups that specialize in “cracking” the copyright protection on software, movie, game, and music files. These groups use a wide range of Internet technologies—including file sharing over peer-to-peer networks—to illegally distribute copyrighted materials over the Internet. According to the Deputy Chief for Intellectual Property, Computer Crime and Intellectual Property Section, Justice, the top warez groups serve as major suppliers of the infringed works that eventually enter the stream of file sharing on peer-to-peer networks. Two recent examples of major federal law enforcement actions that have focused on international piracy groups are Justice’s Operation Fastlink and the U.S. Customs Service’s Operation Buccaneer.

Operation Fastlink is an international investigation coordinated by Justice’s Computer Crime and Intellectual Property Section and the FBI. According to the Deputy Chief for Intellectual Property, Computer Crime and Intellectual Property Section, Fastlink is the largest international enforcement effort ever undertaken against online piracy. As part of Operation Fastlink, on April 21, 2004, U.S. and foreign law enforcement officials executed more than 120 simultaneous searches across multiple time zones. In addition to the United States, searches were executed in Belgium, Denmark, France, Germany, Hungary, Israel, the Netherlands, Singapore, Sweden, Great Britain, and Northern Ireland. As a result, more than 100 individuals believed to be engaged in online piracy have been identified, many of them high-level members or leaders of online piracy release groups that specialize in distributing high-quality pirated movies, music, games, and software over the Internet.
More than 200 computers were seized worldwide, including more than 30 computer servers that function as storage and distribution hubs for the online piracy groups targeted by this operation.

Operation Buccaneer was an international investigation and prosecution operation led by the U.S. Customs Service and Justice. The operation resulted in the seizure of tens of thousands of pirated copies of software, music, and computer games worth millions of dollars and led to 30 convictions worldwide. Operation Buccaneer targeted a number of highly organized and sophisticated international criminal piracy groups that had cracked the copyright protection on thousands of software, movie, and music files and distributed those files over the Internet. As part of Operation Buccaneer, on December 11, 2001, the U.S. Customs Service and law enforcement officials from Australia, Finland, Norway, Sweden, and the United Kingdom simultaneously executed approximately 70 search warrants worldwide. Approximately 40 search warrants were executed in 27 cities across the United States, including several at universities. Pursuant to the search warrants, law enforcement seized 10 computer “archive sites” that contained tens of thousands of pirated copies of software, movies, music, and computer games worth millions of dollars. According to the Deputy Chief for Intellectual Property, Computer Crime and Intellectual Property Section, as of April 1, 2004, 27 defendants had been convicted in the United States, with 2 awaiting sentencing and 1 other under indictment. Internationally, six defendants have been convicted in Finland and the United Kingdom, with four additional defendants scheduled to go to trial in the United Kingdom in the fall of 2004.

According to DHS officials, the Cyber Crime Center of the U.S. Immigration and Customs Enforcement does target individual violators who are involved in cyber intellectual property piracy on a profit or commercial basis. The officials noted that the center does not pursue investigations of individual peer-to-peer file violators due to the statutory dollar-value threshold limits and lack of a profit motive. According to these officials, the statutory dollar-value threshold is very difficult to meet in peer-to-peer cases, since most peer-to-peer infringement is based on the sharing of music, and the major record labels have set $0.80 as the dollar value of each copy of a song (the officials noted that most successful prosecutions are based on copyright infringement of software applications, because these tend to have a higher dollar value than songs). Proving criminal intent is also often a problem in these cases, since file sharing is a passive act, and in most cases there is no profit motive.

According to Justice officials, federal intellectual property protection efforts do not focus on investigation and prosecution of individual copyright infringers on peer-to-peer networks, but instead they focus on organizations or individuals engaged in massive distribution or reproduction of copyrighted materials. According to these officials, this focus exists for the following reasons:
• Federal law enforcement is best suited to focus on large-scale or sophisticated infringers, including organized groups, large-scale infringers, infringers operating out of numerous jurisdictions and foreign countries, and infringers using sophisticated technology to avoid detection, identification, and apprehension. By and large, individual copyright holders do not have the tools or ability to pursue these types of targets.
• Copyright holders do not have the legal tools or ability to tackle the organized criminal syndicates and most sophisticated infringers, but they have the tools and ability to target the individual infringer. While federal law enforcement has the tools, ability, expertise, and will to tackle the most sophisticated infringers, including those operating overseas who are part of a large syndicate and those using sophisticated technology to avoid detection, individual copyright holders have the tools to pursue individual infringers. Congress has provided for civil enforcement actions. Individual copyright holders, mostly through industry associations, have been very active in their pursuit of individual infringers using peer-to-peer applications.
• Focusing law enforcement and industry on their respective strengths results in maximum impact. By using both the criminal and civil tools given to law enforcement and industry by Congress, Justice can achieve a more significant impact.
• Technological limitations pose a challenge. Given the technology involved, it is challenging to gather the necessary evidence for a successful criminal prosecution of individuals using peer-to-peer applications. For example, it may be possible to prove that someone is offering copyrighted material for download through a peer-to-peer application; but, according to law enforcement officials, it is usually difficult or impossible to determine the number of times files were downloaded.
• Burden of proof in criminal prosecutions is more onerous. The criminal statute at issue requires proof of a willful intent and requires that each element of the offense be proven beyond a reasonable doubt. The willful intent is a higher burden than is found in most criminal statutes. By contrast, the intent element and overall burden of proof are significantly less onerous in civil enforcement.
• Statutory thresholds favor a federal criminal enforcement focus on the more significant targets. The thresholds require a retail value of $2,500 or more for the goods pirated by the infringer. With a valuation of $0.80 per song that is traded on a peer-to-peer application, federal criminal law enforcement could not be used to target individuals downloading fewer than roughly 3,125 music files, for example. The technological limitations mentioned earlier, combined with the heightened burden of proof, make it challenging to show criminal violations for each of the more than 3,125 downloads.
• The need for efficient use of resources suggests a focus on large-scale sophisticated targets. The need for law enforcement to use resources efficiently suggests that federal law enforcement should focus their efforts in a way that yields the greatest impact. For many of the reasons detailed above, federal law enforcement has determined that they can make the biggest impact by focusing on the larger-scale, more sophisticated targets.

According to Justice officials, the recently created Intellectual Property Task Force—headed by the Deputy Chief of Staff and Counselor to the Attorney General, and comprised of several of the highest-ranking department employees who have a variety of subject matter expertise—is charged with examining all aspects of how Justice handles intellectual property issues and with developing recommendations for legislative changes and future activities. One of the issues to be addressed by the task force is the most appropriate use of department resources to ensure that the department has the most effective enforcement strategy.
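The threshold arithmetic cited by these officials can be made explicit (a minimal sketch; the $0.80-per-song figure is the industry valuation noted above, and the constant names are ours):

```python
FELONY_RETAIL_VALUE = 2500.00  # retail-value element: more than $2,500 in 180 days
VALUE_PER_SONG = 0.80          # per-copy valuation set by the major record labels

# Number of shared songs needed before the retail-value element of felony
# copyright infringement can plausibly be met at this valuation.
songs_needed = FELONY_RETAIL_VALUE / VALUE_PER_SONG
print(songs_needed)  # 3125.0, the roughly 3,125-file floor cited above
```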
Federal law enforcement officials did not identify any specific legislative barriers to investigation and prosecution of illegal file sharing on peer-to-peer networks. According to Justice officials, the department’s Intellectual Property Task Force will also recommend legislative changes, assuming there is a need for such changes.

In providing comments on a draft of this report, the Deputy Assistant Attorney General, Criminal Division, Department of Justice, provided additional information on a recent international law enforcement effort against online piracy, coordinated by the department’s Computer Crime and Intellectual Property Section and the FBI, and presented a detailed description of the department’s policy on investigating and prosecuting intellectual property rights infringers on the Internet and on peer-to-peer networks. The Deputy Assistant Attorney General also noted that the department’s recently created Intellectual Property Task Force will examine how the department handles intellectual property issues and recommend legislative changes, if needed. We have incorporated this information into this report. We also received comments (via e-mail) from the unit chief of the Cyber Crime Center on behalf of DHS. The unit chief provided additional details on the number of investigations conducted by the Cyber Crime Center and clarified the center’s approach to investigations of individual copyright infringers. Specifically, the unit chief stated that, while the center targets individual violators who are involved in cyber intellectual property piracy on a profit or commercial basis, it does not pursue investigations of individual peer-to-peer file violators, due to the difficulties in meeting the statutory dollar-value threshold in peer-to-peer infringement cases and the lack of a profit motive. We have incorporated these details into this report.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have jurisdiction and oversight responsibility for Justice and DHS. We are also sending copies to the Attorney General and to the Secretary of Homeland Security. Copies will be made available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or Mirko J. Dolak, Assistant Director, at (202) 512-6362. We can also be reached by e-mail at koontzl@gao.gov and dolakm@gao.gov, respectively. Key contributors to this report were Jason B. Bakelar, Barbara S. Collier, Nancy E. Glover, Lori D. Martinez, Morgan F. Walts, and Monica L. Wolford.

Our objectives were to describe (1) the views of major universities on the extent of problems experienced with student use of file-sharing software applications, as well as the actions that the universities are taking to deal with them and (2) the actions that federal enforcement agencies have taken to address the issue of copyright infringement on peer-to-peer networks, as well as agency views on any legislative barriers to dealing with these problems. To describe the views of college and university officials, we conducted structured interviews with a judgmental sample of large colleges and universities.
The interview contained 35 questions referring to (1) the extent to which the college or university monitors its network or networks and the impact of the use of file-sharing applications on the network, (2) estimates of the number of students using file-sharing applications and the number of files shared or transferred over the network, (3) the discovery of nodes or mini-Napsters on the network and the response of the university to their existence, (4) the discovery of file-sharing applications on the network and the response of the university to their use, and (5) the actions taken by the college or university to address copyright infringement and the use of file-sharing applications on its networks. We pretested the content of the interview with the chief information officers (CIO) of four major colleges and universities. During the pretest, we asked the CIOs to judge the following:
• how willing the CIOs would be to participate in the interview, particularly given the sensitive nature of some of the information requested;
• whether the meaning and intent of each question was clear, whether the CIOs were likely to know the information asked, and whether the questions should be addressed to someone in a different position; and
• whether any of the questions were redundant.
We made changes to the content and format of the final structured interview based on pretest results.

To administer the structured interviews, we selected 45 colleges and universities from the Department of Education Integrated Postsecondary Education Data System. The colleges and universities were judgmentally selected from among large public and private degree-granting colleges and universities in each of eight geographic regions of the United States that provide Internet access to students in university administered housing. Of the 45 colleges and universities selected and contacted, 13 agreed to participate in the interview. We then analyzed the interview responses. Our analysis provides details on the responses of the 13 college and university officials we interviewed; however, because we did not randomly select interviewees, our results cannot be generalized to all colleges and universities.

To describe federal law enforcement efforts and agency views related to copyright infringement on peer-to-peer networks, we analyzed budget and program documents from the Justice Computer Crime and Intellectual Property Section; the Federal Bureau of Investigation (FBI) Cyber Division; and the U.S. Immigration and Customs Enforcement’s Cyber Crimes Center, under the Department of Homeland Security. We also reviewed agency documents related to the efforts of other organizations that support the investigation and prosecution of copyright infringement, including the Department of State’s International Law Enforcement Academies; the Department of Commerce’s International Trade Administration; and the Intellectual Property Rights Coordination Center and the National Intellectual Property Law Enforcement Coordination Council. We performed our work between May 2003 and April 2004 in Washington, D.C. Our work was conducted in accordance with generally accepted government auditing standards.

Peer-to-peer file-sharing programs represent a major change in the way Internet users find and exchange information. Under the traditional Internet client/server model, access to information and services is accomplished by the interaction between users (clients) and servers—usually Web sites or portals.
A client is defined as a requester of services, and a server is defined as the provider of services. Unlike the client/server model, the peer-to-peer model enables consenting users—or peers—to directly interact and share information with each other’s computers without the intervention of a server. A common characteristic of peer-to-peer programs is that they build virtual networks with their own mechanisms for routing message traffic. The ability of peer-to-peer networks to provide services and connect users directly has resulted in a large number of powerful applications being built around this model. Among the uses of peer-to-peer technology are the following:
• File sharing, which includes applications such as Napster and KaZaA, along with commercial applications such as NextPage. File-sharing applications work by making selected files on a user’s computer available for download by anyone else using similar software.
• Instant messaging, which includes applications that enable online users to communicate immediately through text messages. Commercial vendors include America Online, Microsoft, and Jabber.
• Distributed computing, which includes applications that use the idle processing power of many computers. The University of California–Berkeley’s SETI@home project uses the idle time on volunteers’ computers to analyze radio signal data.
• Collaboration applications, which enable teams in different geographic areas to work together and increase productivity. For example, the Groove application can access data on traditional corporate networks and on nontraditional devices such as personal digital assistants and handheld devices.

As shown in figure 7, there are two main models of peer-to-peer networks: (1) the centralized model, based on a central server, or broker, that directs traffic between individual registered users and (2) the decentralized model, based on the Gnutella network, in which individuals find and interact directly with each other. As figure 7 shows, the centralized model relies on a central server/broker to maintain directories of shared files stored on the respective computers of the registered users of the peer-to-peer network. When user C submits a request for a file, the server/broker creates a list of files matching the search request by checking the request with its database of files belonging to registered users currently connected to the network. The broker then displays that list to user C, who can then select the desired file from the list and open a direct link with user D’s computer, which currently has the file. The download of the actual file takes place directly from user D to user C. The broker model was used by Napster, the original peer-to-peer network; it facilitated mass sharing of copyrighted material by combining the file names held by thousands of users into a searchable directory that enabled users to connect with each other and download MP3-encoded music files. The broker model made Napster vulnerable to legal challenges and eventually led to its demise in September 2002. Although Napster was litigated out of existence and its users fragmented among many alternative peer-to-peer services, most current-generation peer-to-peer networks are not dependent on the server/broker that was the central feature of the Napster services, so, according to Gartner, these networks are less vulnerable to litigation from copyright owners.

In the decentralized model, no brokers keep track of users and their files.
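As a concrete illustration of the decentralized model just introduced (and a preview of the step-by-step walkthrough that follows), here is a minimal sketch of a flooding search with a hop limit; it is a toy model of the general technique, not the actual Gnutella wire protocol:

```python
def flood_search(start, query, ttl=4, seen=None):
    """Propagate a search from `start` through its peers.

    Each peer is {"name": str, "files": set, "peers": [peer, ...]}.
    A time-to-live (hop limit) keeps the request from circulating forever;
    `seen` prevents revisiting peers on cyclic topologies.
    Returns the names of peers holding a matching file.
    """
    if seen is None:
        seen = set()
    if ttl == 0 or start["name"] in seen:
        return []
    seen.add(start["name"])
    hits = [start["name"]] if query in start["files"] else []
    for peer in start["peers"]:                 # forward the request onward
        hits += flood_search(peer, query, ttl - 1, seen)
    return hits

# Users A-B-C-D connected in a chain; only D has the file.
d = {"name": "D", "files": {"song.mp3"}, "peers": []}
c = {"name": "C", "files": set(), "peers": [d]}
b = {"name": "B", "files": set(), "peers": [c]}
a = {"name": "A", "files": set(), "peers": [b]}
print(flood_search(a, "song.mp3"))  # ['D'], and the hit travels back to A
```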
To share files using the decentralized model, user A starts with a networked computer equipped with a Gnutella file-sharing program, such as LimeWire or BearShare. User A connects to user B, user B to user C, user C to user D, and so on. Once user A’s computer has announced that it is “alive” to the various members of the peer network, it can search the contents of the shared directories of the peer network members. The search request is sent to all members of the network, starting with user B, who will each, in turn, send the request to the computers to which they are connected, and so on. If one of the computers in the peer network (for example, user D) has a file that matches the request, it transmits the file information (name, size, type, etc.) back through all the computers in the pathway toward user A, where a list of files matching the search request appears on user A’s computer through the file-sharing program. User A will then be able to open a connection with user D and download the file directly from user D’s computer.

One of the key features of Napster and the current generation of decentralized peer-to-peer technologies is their use of a virtual name space. A virtual name space dynamically associates user-created names with the Internet address of whatever Internet-connected computer users happen to be using when they log on. The virtual name space facilitates point-to-point interaction between individuals, because it removes the need for users and their computers to know the addresses and locations of other users; the virtual name space can, to a certain extent, preserve users’ anonymity and provide information on whether a user is or is not connected to the Internet at a given moment.

The file-sharing networks that result from the use of peer-to-peer technology are both extensive and complex. Figure 8 shows a map, or topology, of a Gnutella network whose connections were mapped by a network visualization tool. The map, created in December 2000, shows 1,026 nodes (computers connected to more than one computer) and 3,752 edges (computers on the edge of the network connected to a single computer). This map is a snapshot showing a network in existence at a given moment; these networks change constantly as users join and depart them.

The emergence of the Internet as a principal medium for copyright infringement and other crimes has led to the development of new divisions within the federal government that are specifically trained to deal with cybercrime issues. These divisions, as well as other entities that are involved in combating copyright infringement, fulfill three main roles: investigation, prosecution, and support. The investigation role includes activities related to gathering and analyzing evidence related to suspected copyright infringement, while the prosecution role includes activities related to the institution and continuance of a criminal suit against an offender. The support role includes activities that are not directly involved in either investigation or prosecution, but which assist other organizations in these activities. Support activities include providing specialized training, producing reports specifically pertaining to intellectual property rights and copyright infringement, observing international trade agreements, and providing investigation leads and supporting evidence.

U.S. Immigration and Customs Enforcement, Cyber Crimes Center.
The Cyber Crimes Center, independently or in conjunction with Immigration and Customs Enforcement field offices, investigates domestic and international criminal activities conducted on or facilitated by the Internet. The organization’s responsibilities include investigating money laundering, drug trafficking, intellectual property rights violations, arms trafficking, and child pornography cases, and they provide computer forensics support to other agencies. For fiscal year 2002, the U.S. Customs Service referred 57 investigative matters related to intellectual property rights cases to the U.S. Attorneys Offices. Of these cases, 37 involving 54 defendants were resolved or terminated. FBI Cyber Division. The Cyber Division coordinates, supervises, and facilitates the FBI’s investigation of federal violations in which the Internet, computer systems, and networks are exploited as the principal instruments or targets of criminal, foreign intelligence, or terrorism activity and for which the use of such systems is essential to that activity. For fiscal year 2003, the Cyber Division investigated 596 cases involving intellectual property rights. Of these cases, 160 were related specifically to software copyright infringement and 111 were related to other types of copyright infringement. The results of these investigations include 92 indictments and 95 convictions/pretrial diversions. Computer Crime and Intellectual Property Section. The Computer Crime and Intellectual Property Section consists of 38 attorneys who focus exclusively on computer and intellectual property crime, including (1) prosecuting cybercrime and intellectual property cases; (2) advising and training local, state, and federal prosecutors and investigators in network attacks, computer search and seizure, and intellectual property law; and (3) coordinating international enforcement and outreach efforts to combat intellectual property and computer crime worldwide. Computer Hacking and Intellectual Property Units. Computer Hacking and Intellectual Property units are comprised of highly trained prosecutors and staff who are dedicated primarily to prosecuting high-tech crimes, including intellectual property offenses. There are 13 Computer Hacking and Intellectual Property units located in U.S. Attorneys Offices across the nation. Each unit is comprised of between four and six prosecutors and dedicated support staff. Computer and Telecommunication Coordinator Network. The Computer and Telecommunication Coordinator program consists of prosecutors specifically trained to address the range of novel and complex legal issues related to high tech and intellectual property crime, with general responsibility for prosecuting computer crime, acting as a technical advisor and liaison, and providing training and outreach. The Computer and Telecommunication Coordinator program is made up of more than 200 Assistant U.S. Attorneys, with at least one prosecutor who is part of the program in each of the 94 U.S. Attorneys Offices. U.S. Attorneys Offices. The U.S. Attorneys serve as the nation’s principal federal litigators under the direction of the U.S. Attorney General. U.S. Attorneys conduct most of the trial work in which the United States is a party and have responsibility for the prosecution of criminal cases brought by the federal government, the prosecution and defense of civil cases in which the United States is a party, and the collection of debts owed the federal government which are administratively uncollectible. There are 94 U.S. 
For fiscal year 2002, the U.S. Attorneys Offices received 75 referrals involving investigative matters for Title 18, U.S.C., Section 2319—Criminal Infringement of a Copyright—and 28 cases involving 56 defendants were resolved or terminated. U.S. Immigration and Customs Enforcement, Intellectual Property Rights Coordination Center. The Center is a multiagency organization that serves as a clearinghouse for information and investigative leads provided by the general public and industry, as well as a channel through which law enforcement can obtain cooperation from industry. The Criminal Division, through its Overseas Prosecutorial Development, Assistance and Training Office and its International Criminal Investigation Training Assistance Programs, provides training and assistance to foreign law enforcement and foreign governments to foster the robust protection of intellectual property rights in foreign countries. Through its legal attachés located in foreign countries, the FBI fosters the protection of intellectual property rights abroad and assists U.S. prosecutions of intellectual property violations that have foreign roots. International Law Enforcement Academies. The academies foster a cooperative law enforcement partnership between the United States and participating nations to counter the threat of international crime within a specific region. The academies develop foreign police managers’ abilities to handle a broad spectrum of contemporary law enforcement issues, offer specialized training courses in fighting intellectual property rights crime, and increase participants’ capacity to investigate crime and criminal organizations. As of 2003, academies were operating in Roswell, New Mexico; Budapest, Hungary; Bangkok, Thailand; and Gaborone, Botswana. International Trade Administration. The administration monitors foreign governments’ compliance with, and implementation of, international trade agreements, especially those pertaining to intellectual property rights enforcement. National Intellectual Property Law Enforcement Coordination Council. The Council’s mission is to coordinate domestic and international intellectual property law enforcement among federal and foreign entities; its activities include law enforcement liaison, training coordination, outreach to industry and other groups, and increasing public awareness. The Council consists of members from several agencies, including the Director of the U.S. Patent and Trademark Office (co-chair); the Assistant Attorney General of the Department of Justice’s Criminal Division (co-chair); the Under Secretary of State for Economic, Business, and Agricultural Affairs; the Deputy U.S. Trade Representative; the Commissioner of Customs; and the Undersecretary of Commerce for International Trade. The Council is required to report annually on its coordination activities to the President and to the Appropriations and Judiciary Committees of the House and Senate.
The following terms are used in this report:
BearShare. A file-sharing program for Gnutella networks. BearShare supports the trading of text, images, audio, video, and software files with any other user of the network.
broker. In the peer-to-peer environment, an intermediary computer that coordinates and manages requests between client computers.
client-server model. A networking model in which a collection of nodes (client computers) request and obtain services from a server node (server computer).
Gnotella. A file-sharing program based on the Gnutella protocol.
Gnutella. A decentralized group membership and search protocol, typically used for file sharing. Gnutella enables users to directly share files with one another; unlike Napster, Gnutella-based programs do not rely on a central server to find files. Gnutella file-sharing programs build a virtual network of participating users.
instant messaging (IM). A popular method of Internet communication that allows for the instantaneous transmission of messages to other users who are logged into the same IM service. America Online’s Instant Messenger and the Microsoft Network Messenger are among the most popular instant messaging programs.
IP address. A number that uniquely identifies a computer connected to the Internet to other computers.
KaZaA. A file-sharing program that uses a proprietary peer-to-peer protocol to share files among users on the network. Operating through a distributed, self-organizing network, KaZaA, unlike Napster, requires no broker or central server.
LimeWire. A file-sharing program running on Gnutella networks. It is open standard software running on an open protocol and is free for public use.
MP3 (MPEG-1 Audio Layer-3). A widely used standard, developed by the Moving Picture Experts Group (MPEG), for compressing and transmitting music in digital format across the Internet. MP3 can compress files at a ratio of about 10:1 while preserving sound quality.
node. A computer or a device that is connected to a network. Every node has a unique network address.
peer. A network node that may function as a client or as a server. In the peer-to-peer environment, peer computers are also called servents, since they perform tasks associated with both servers and clients.
server. A computer that interconnects client computers, providing them with services and information; a component of the client-server model. A Web server is one type of server.
SETI@home (search for extraterrestrial intelligence at home). A distributed computing project that uses data collected by the Arecibo Telescope in Puerto Rico and takes advantage of the unused computing capacity of personal computers. As of February 2000, the project encompassed 1.6 million participants in 224 countries.
topology. The general structure, or map, of a network, showing the computers and the links between them.
virtual. Having the properties of x while not being x. For example, “virtual reality” is an artificial or simulated environment that appears to be real to the casual observer.
virtual name space (VNS). An Internet addressing and naming system. In the peer-to-peer environment, a VNS dynamically associates names created by users with the IP addresses assigned by their Internet service providers to their computers.
World Wide Web. A worldwide client-server system for searching and retrieving information across the Internet. Also known as WWW or the Web.
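As a rough illustration of the virtual name space defined above, the Python sketch below maps a user-created name to whatever address the user's computer has at log-on. The names, addresses, and function names are hypothetical; a real implementation would run as a network service rather than an in-memory dictionary.

# Toy virtual name space: a user-created name maps to whatever IP address
# the user's computer has at log-on. Names and addresses here are made up.
registry = {}                             # screen name -> current IP address

def log_on(name, current_ip):
    """Associate the user's chosen name with this session's address."""
    registry[name] = current_ip

def log_off(name):
    registry.pop(name, None)

def locate(name):
    """Peers look up a name instead of a fixed address; None doubles as
    presence information, meaning the user is not currently connected."""
    return registry.get(name)

log_on("music_fan", "203.0.113.7")        # first session
print(locate("music_fan"))                # 203.0.113.7
log_off("music_fan")
log_on("music_fan", "198.51.100.42")      # new session, new dynamic address
print(locate("music_fan"))                # 198.51.100.42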
The emergence of peer-to-peer file-sharing applications that allow millions of users to share computer files over networks has changed the way copyrighted materials, including digital music, videos, software, and images, can be distributed and has led to a dramatic increase in the incidence of copyright infringement (piracy) of these digital materials. These applications enable direct communication between users, allowing users to access each other's files and share digital music, videos, and software. According to a coalition of intellectual property owners in the entertainment industry, an increasing number of students are using the fast Internet connections offered by college and university networks to infringe copyrights by illegally downloading and sharing massive volumes of copyrighted materials on peer-to-peer networks. GAO was asked to describe (1) the views of major universities on the extent of problems experienced with student use of file-sharing applications, as well as the actions that the universities are taking to deal with them, and (2) the actions that federal enforcement agencies have taken to address the issue of copyright infringement on peer-to-peer networks, as well as agency views on any legislative barriers to dealing with the problems. The college and university officials we interviewed are aware of the use of file-sharing applications on their networks, almost all of them have experienced some problems and increased costs as a result of the use of these applications, and they are taking steps to reduce the use of these applications on their networks. All of the officials interviewed indicated that their colleges or universities routinely monitor their networks, and most indicated that the institutions also actively monitor their networks specifically for the use of these file-sharing applications. When infringing use is discovered, all of the representatives stated that enforcement actions are taken against the individuals responsible. These actions include issuing a warning to the user or users, banning them from the network for a period of time, and managing the bandwidth available for a group of users. Federal law enforcement officials have been taking action to investigate and prosecute organizations involved in significant copyright infringement. These groups use a wide range of Internet technologies to illegally distribute copyrighted materials over the Internet. Federal law enforcement officials did not identify any specific legislative barriers to investigation and prosecution of illegal file sharing on peer-to-peer networks. According to Department of Justice officials, the department's recently created Intellectual Property Task Force will examine how the department handles intellectual property issues and recommend legislative changes, if needed.
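One of the enforcement actions mentioned above is managing the bandwidth available to a group of users. The Python sketch below shows one common way such a cap can work, a token-bucket limiter; the rates, burst size, and class name are hypothetical, and campus networks would typically enforce such limits in dedicated traffic-shaping equipment rather than application code.

# Toy token-bucket limiter of the kind a campus network might use to cap
# the bandwidth available to a group of users. All numbers are made up.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec    # sustained rate allowed
        self.capacity = burst_bytes       # short bursts above the rate
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Permit a transfer of nbytes only if enough tokens have accrued."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# Cap a dormitory subnet at ~128 KB/s with a 512 KB burst allowance.
dorm = TokenBucket(rate_bytes_per_sec=128 * 1024, burst_bytes=512 * 1024)
print(dorm.allow(300 * 1024))  # True: within the burst allowance
print(dorm.allow(300 * 1024))  # False: bucket nearly drained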
HUD’s mission is to create strong, sustainable, inclusive communities and quality, affordable homes for all. To carry out this mission, the department administers community and housing programs that affect millions of households each year. These programs provide affordable rental housing opportunities and help homeless families and chronically homeless individuals and veterans. The department also administers mortgage insurance programs for single-family housing, multifamily housing, and health care facilities. IT plays a critical role in HUD’s ability to perform its business functions, which involve the management of billions of dollars to carry out its mission. For example, the department’s IT environment consists of multiple systems that, among other things, are intended to help coordinate interactions with lending institutions to insure mortgages, collect and manage state and local housing data, process applications for community development, and process vouchers for different rental assistance programs. Its systems also support the processing of applications for, and the management of, more than 50 grant programs administered by the department. However, according to HUD, its IT environment has not been sufficient to effectively support business functions because its systems are overlapping and duplicative, are not integrated, require manual work, and employ antiquated technologies that are costly to maintain. Specifically, the department has reported that its environment consists of over 200 information systems, many of which perform the same function and thus are overlapping and duplicative; stove-piped, nonintegrated systems that result in identical data existing in multiple systems; manual processing of business functions due to a lack of systems to support these processes; and antiquated (15- to 30-year-old), complex technology that is costly to maintain. To address challenges with its IT environment, HUD has developed a number of plans in recent years to guide its modernization efforts. These include plans that outline how it intends to spend IT funds, an information resource management strategic plan, and an enterprise architecture roadmap. These plans contain information related to the department’s modernization efforts and actions aimed at improving its capacity to manage IT. Nevertheless, even with these plans and ongoing modernization efforts, the department reported in November 2016 that limited progress had been made in replacing legacy systems and manual processes with modern applications and enhanced capabilities. HUD’s IT budget covers two categories of spending: (1) operations and maintenance of existing systems and (2) new investments for modernization (often referred to as development, modernization, and enhancement). Operations and maintenance funds refer to the expenses required for general upkeep of the department’s existing systems. Funds for modernization support projects and activities that lead to new systems, or to changes and modifications to existing systems that substantively improve capability or performance to better support HUD’s mission and business functions. According to the Office of Management and Budget’s (OMB) IT Dashboard, over the past 5 years the department spent between approximately 70 and 95 percent of its total IT budget on operations and maintenance; it dedicated a smaller portion—ranging from approximately 5 to 30 percent—to modernization efforts.
Figure 1 illustrates the percentage of HUD’s IT spending during fiscal years 2012 through 2016 dedicated to operating and maintaining existing IT versus modernization efforts, as reported on the IT Dashboard. Consistent with prior years, a majority of HUD’s fiscal year 2017 IT budget request is intended to support existing systems. Specifically, the department requested $286 million, of which approximately 87 percent ($250 million) is planned for operations and maintenance. According to the budget request, the department anticipates using operations and maintenance funds to support business administrative functions as well as its IT infrastructure, which includes servers, communications equipment and support, desktops, mobile devices, and security. Of the fiscal year 2017 IT budget request, approximately 13 percent ($36 million) is intended to support modernization investments aimed at improving the department’s IT environment. According to HUD’s budget request, these funds are to support new investments intended to deliver modernized enterprise capabilities that better support the department’s mission. Specifically, these investments are expected to, among other things, leverage enterprise-level technology, reduce the number of stand-alone systems, deliver cloud-based technologies, automate manual processes, and consolidate data. Of the $36 million requested for modernization, approximately 81 percent ($29 million) was identified to support four modernization efforts, which were in various phases of planning and development. Table 1 provides a description of these investments and the amounts of their associated fiscal year 2017 budget requests. Our prior work has shown that implementing repeatable, disciplined processes that adhere to federal law and best practices can help agencies effectively plan, manage, and oversee modernization efforts. Disciplined processes include establishing guidance that can be used for developing reliable cost estimates that project realistic life-cycle costs. Such estimates are critical to a modernization effort’s success because they can be used to support key investment decisions that help ensure finite resources are wisely spent. In addition, a reliable estimate is the foundation of a good budget and budget spending plan, which outlines how and at what rate an investment’s funding will be spent over time. Put another way, reliable cost estimates are essential for successful IT investments and modernization efforts because they help ensure that Congress and the department itself have reliable information on which to base funding and budgetary decisions. OMB calls for federal agencies to maintain current cost estimates that encompass the full life cycle of an investment. Building on OMB’s requirements and drawing on practices promulgated by federal cost estimating organizations and private industry, GAO’s Cost Guide identifies cost estimating practices that, if followed correctly, have been found to be the basis for a reliable cost estimate. An estimate created using these practices exhibits four broad characteristics: it is comprehensive, well-documented, accurate, and credible. Moreover, each characteristic is associated with a specific set of best practices. Table 2 summarizes, by characteristic, the best practices for developing reliable cost estimates identified in the Cost Guide. Because specific and discrete best practices underlie each characteristic, an agency’s performance in each of the characteristics can vary.
For example, an organization’s cost estimates could be found to be comprehensive and well-documented, but not accurate or credible. According to the Cost Guide, a cost estimate is considered reliable if each of the four characteristics is substantially or fully met; in contrast, if any of the characteristics is not met, minimally met, or partially met, the cost estimate cannot be considered reliable. The cost estimates that HUD developed for each of the four selected investments exhibited significant weaknesses in that they did not meet or substantially meet best practices for each characteristic. As such, the estimates were unreliable and did not provide a sound basis for informing the department’s investment and budgetary decisions. Specifically, none of the estimates exhibited all of the characteristics of a reliable estimate, as they were not substantially or fully comprehensive, well-documented, accurate, and credible. Only one estimate—for the Customer Relationship Management investment—more than minimally met the best practices associated with any of the four characteristics, because it partially met the practices for a comprehensive and accurate estimate. The remaining three investments minimally met or did not meet the best practices associated with the four characteristics. For example, the Enterprise Data Warehouse estimate minimally met all four characteristics; the Enterprise Voucher Management System estimate did not meet the characteristic for being accurate and minimally met the other three characteristics; and the Federal Housing Administration Automation and Modernization estimate did not meet the characteristic for being credible, while minimally meeting the rest of the characteristics. Table 3 provides a summary of the extent to which the four investments’ cost estimates were comprehensive, well-documented, accurate, and credible. Comprehensive. Of the cost estimates for the four selected investments, none were comprehensive. While one investment partially met the associated best practices, the remaining three investments minimally met these practices. Specifically, although all of the estimates included costs for specific elements and phases of the investments, none of the estimates included both government and contractor costs over the life cycle of the investment—from inception through design, development, deployment, and operations and maintenance to retirement. In addition, the Customer Relationship Management investment partially met the best practice related to defining the investment by, for example, explaining that the effort would result in a cloud-based solution that allowed the department to consolidate existing systems. However, none of the estimates fully met this practice, in part because investment documentation did not completely define system requirements, reflect current schedules, or demonstrate that efforts were technically reasonable. Further, the work breakdown structures that had been developed were not sufficiently detailed, for any of the investments, to ensure that cost elements were neither omitted nor double counted, or to allow for traceability between each investment’s costs and schedule by deliverable, such as hardware or software components. Moreover, although various assumptions were factored into the cost estimate for the Customer Relationship Management investment, the basis for the assumptions was not documented and, as a result, their reasonableness could not be determined.
For the remaining three investments, where information was limited and judgments had to be made, the estimates did not contain cost-influencing ground rules and assumptions. Well-documented. The four investments did not develop well-documented cost estimates, because none were supported by detailed documentation that described how the estimate was derived and how the expected funding would be spent. This characteristic’s best practices were minimally met by all of the investments. In discussing the estimating methodologies used to develop the estimates, HUD officials reported using analogy, expert opinion, parametric, and other methods; however, the department did not document the specific methodologies used for any of the investments. Specifically, HUD did not adequately document, for each estimate, the sources of the data used, any assessments of data accuracy and reliability, or other circumstances affecting the data, such as the details of calculations performed. The estimating information for the investments also was not captured in a way that would allow the data used to be easily replicated and updated. Further, the documentation did not sufficiently discuss the technical baseline—which is intended to serve as the basis for developing an estimate by providing a common definition of the investment—or how the data were normalized. In addition, while HUD officials reported briefing management on each of the investments and on the high-level estimate for the Enterprise Voucher Management System, the department did not brief management on the ground rules and assumptions underlying the estimates. A cost estimate is not considered valid until management has approved it, yet the department did not provide documentation that any of the four cost estimates had been reviewed and accepted by management. Accurate. Overall, the four estimates were not accurate: only one estimate partially addressed the best practices associated with this characteristic, two estimates minimally addressed the best practices, and one estimate did not meet any of the associated practices. As such, we could not determine whether the cost estimates provided results that were unbiased and neither overly conservative nor overly optimistic. More specifically, the estimate for the Customer Relationship Management investment partially addressed the best practices in that its calculations did not contain errors and actual costs from existing programs or historical data were used to develop the estimate. However, analyses had not been performed for any of the investments to ensure that the cost estimates were based on an assessment of most likely costs. In addition, none of the investments’ estimates had been properly adjusted for inflation to ensure that cost data were expressed in consistent terms, which is important because doing so can help to prevent cost overruns. For example, officials for the Federal Housing Administration Automation and Modernization investment stated that the cost data had not been adjusted for inflation or normalized to constant-year dollars to remove the effects of inflation. Further, the estimating techniques used to determine costs were not documented, which prevented an assessment of the accuracy of any calculations performed. Finally, while HUD officials responsible for the investments stated that the estimates were grounded in historical data, such as actual costs from comparable investments, they did not provide evidence to support this assertion for all four investments.
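To illustrate the inflation adjustment the estimates lacked, the short Python sketch below converts then-year (nominal) costs to constant base-year dollars using a deflator index. The index values and cost figures are hypothetical, not HUD data; only the normalization arithmetic is the point.

# Normalize then-year costs to constant base-year dollars (illustrative).
# Deflator index values and cost figures below are hypothetical.
deflator = {2014: 1.00, 2015: 1.02, 2016: 1.04, 2017: 1.06}     # base: 2014
then_year_costs = {2014: 5.0, 2015: 6.0, 2016: 6.5, 2017: 7.0}  # $ millions

def to_constant_dollars(costs, index, base_year):
    """Scale each year's nominal cost by the base-year index relative to
    that year's index, so all years share the same purchasing power."""
    base = index[base_year]
    return {year: cost * base / index[year] for year, cost in costs.items()}

constant = to_constant_dollars(then_year_costs, deflator, 2014)
for year, cost in sorted(constant.items()):
    print(f"{year}: {cost:.2f} (constant FY2014 $ millions)")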
Credible. Three of the investments minimally met the best practices associated with developing a credible cost estimate, and one investment did not meet the practices. Specifically, the estimates did not fully discuss limitations of the analyses arising from uncertainty or biases surrounding the data or assumptions. For example, none of the investments conducted a sensitivity analysis, which is intended to reveal how the cost estimate is affected by a change in a single assumption and allows the cost estimator to understand which variable most affects the cost estimate. In addition, risk and uncertainty analyses were not conducted in a manner that conformed to the practices in the Cost Guide. For example, while HUD officials responsible for the Enterprise Data Warehouse and Customer Relationship Management investments stated that risks were evaluated, evidence supporting these assertions was not provided. In addition, risk documentation was provided for the Federal Housing Administration Automation and Modernization investment, but the analysis was limited to a portion of the investment and, therefore, did not provide a comprehensive view of the level of risk and the degree of uncertainty associated with the estimate. Moreover, department officials stated that cross-checks were conducted for three of the investments’ estimates to determine whether applying other methods produced similar results; however, evidence was not provided to demonstrate how this was done. Lastly, no independent cost estimate was developed by a group outside the acquiring organization to validate the reasonableness of the cost estimates for the four investments. (Additional details on our assessments of the four investments’ cost estimates can be found in appendix I.) The significant weaknesses in the cost estimates can largely be attributed to the department’s lack of established guidance for developing reliable cost estimates, which is contrary to our prior work on disciplined processes. HUD officials responsible for the selected investments’ estimates stated that department guidance had not yet been established and that IT investments were not required to develop estimates that exhibit the four characteristics of a reliable estimate. As a result, according to these officials, cost estimating practices are implemented inconsistently across the department and are decentralized because of the reliance on the efforts and experience of various subject matter experts and contractors. With regard to improving its cost estimating practices, in January 2014 the department began efforts to develop cost estimating guidance by conducting an internal review of approaches used for developing estimates. Following this review, the department drafted guidance in June 2015 that was intended to conform to best practices in the Cost Guide. In August 2016, officials from the Offices of Strategic Planning and Management, the Chief Financial Officer, and the Chief Information Officer stated that HUD was in the process of further developing the guidance to reflect cost estimating best practices. However, as of December 2016, the guidance had not yet been established. According to the officials, finalizing the guidance has been a challenge due to competing priorities within the department, such as addressing weaknesses in its governance structure and management processes.
Additionally, these officials stated that the department’s focus has been on establishing an infrastructure that is expected to enable implementation of better cost estimating practices. Moving forward, the officials stated that they expect to continue efforts to finalize and establish the guidance, although time frames for doing so had not been determined. Until HUD establishes guidance that calls for the implementation of the best practices identified in the Cost Guide, the department is less likely to develop reliable cost estimates for its IT investments that can serve as the basis for informed investment decision making. If it goes forward without addressing the weaknesses identified in this report, the department risks being unable to effectively estimate funding needs for IT investments and risks using unreliable data to make budgetary decisions. While it is critical that HUD’s IT investments develop cost estimates that provide Congress and the department reliable information on which to base decisions, the cost estimates for the four selected investments had significant weaknesses. Specifically, none of the cost estimates for these investments was reliable, because none fully or substantially implemented the best practices associated with the characteristics of being comprehensive, well-documented, accurate, and credible. Many of the weaknesses found in the investments can be attributed to the lack of established cost estimating guidance, which the department has not yet finalized because it has focused on addressing management weaknesses and taking action to establish an infrastructure to support improved cost estimation practices. Until HUD finalizes and ensures the implementation of guidance to improve its cost estimating practices, the department is at risk of continuing to make investment decisions based on unreliable information. To increase the likelihood that its IT investments develop reliable cost estimates, we recommend that the Secretary of HUD finalize, and ensure the implementation of, guidance that incorporates the best practices called for in the GAO Cost Estimating and Assessment Guide. We received written comments on a draft of this report from HUD, which are reprinted in appendix II. In its comments, the department agreed with our recommendation and indicated that it plans to take action in response. HUD also provided technical comments, which we incorporated as appropriate. Among these comments, the department took issue with our conclusion that cost estimation had not been a priority and stated that HUD had been focused on establishing an infrastructure so that it could improve its cost estimating practices. We revised our conclusion to reflect the department’s actions in this regard. We are sending copies of this report to the appropriate congressional committees, the Secretary of Housing and Urban Development, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
The following tables summarize our assessments, for each selected IT investment, of the extent to which each met the characteristics of a reliable cost estimate—comprehensive, well-documented, accurate, and credible. Specifically, our assessments identified whether the investments’ estimates met, substantially met, partially met, minimally met, or did not meet each of the four characteristics, and they provide key examples of the rationale. The Customer Relationship Management investment’s cost estimate partially met best practices for developing a comprehensive and accurate estimate and minimally met best practices for developing a well-documented and credible estimate. The Enterprise Data Warehouse investment’s cost estimate minimally met best practices for developing a comprehensive, well-documented, accurate, and credible estimate. The Enterprise Voucher Management System investment’s cost estimate minimally reflected best practices for developing a comprehensive, well-documented, and credible estimate and did not reflect best practices for developing an accurate cost estimate. The Federal Housing Administration Automation and Modernization investment’s cost estimate minimally met best practices for developing a comprehensive, well-documented, and accurate estimate and did not meet best practices for developing a credible cost estimate. In addition to the contact named above, Teresa M. Yost (Assistant Director), Donald Baca (Analyst-in-Charge), Brian Bothwell, Kami Brown, Rebecca Eyler, Amanda Gill, and Karen Richey made key contributions to this report.
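The Cost Guide decision rule applied in this report (an estimate is reliable only if every characteristic is fully or substantially met) can be stated compactly. The Python sketch below encodes the assessments summarized above and applies that rule; the dictionary keys, rating strings, and function names are our own shorthand, not GAO or HUD terminology.

# GAO Cost Guide decision rule: an estimate is reliable only if every
# characteristic is fully or substantially met. Ratings mirror appendix I;
# "FHA" is our shorthand for Federal Housing Administration.
RELIABLE_LEVELS = {"met", "substantially met"}

ratings = {
    "Customer Relationship Management": {
        "comprehensive": "partially met", "well-documented": "minimally met",
        "accurate": "partially met", "credible": "minimally met"},
    "Enterprise Data Warehouse": {
        "comprehensive": "minimally met", "well-documented": "minimally met",
        "accurate": "minimally met", "credible": "minimally met"},
    "Enterprise Voucher Management System": {
        "comprehensive": "minimally met", "well-documented": "minimally met",
        "accurate": "not met", "credible": "minimally met"},
    "FHA Automation and Modernization": {
        "comprehensive": "minimally met", "well-documented": "minimally met",
        "accurate": "minimally met", "credible": "not met"},
}

def is_reliable(characteristics):
    """True only when all four characteristics reach a reliable level."""
    return all(level in RELIABLE_LEVELS for level in characteristics.values())

for investment, chars in ratings.items():
    verdict = "reliable" if is_reliable(chars) else "not reliable"
    print(f"{investment}: {verdict}")   # all four print "not reliable"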
HUD relies extensively on IT to deliver services and manage programs in support of its mission. For fiscal year 2017, HUD requested $36 million for IT investments intended to deliver modernized enterprise-level capabilities that better support the department's mission. Critical to the success of such efforts is the department's ability to develop reliable cost estimates that project life-cycle costs and provide the basis for, among other things, informed decision making and realistic budget formulation. The joint explanatory statement that accompanied the Consolidated and Further Continuing Appropriations Act, 2015, included a provision for GAO to evaluate HUD's cost estimating practices. This review determined the extent to which HUD implemented cost estimating best practices for selected IT investments. GAO selected the four IT modernization investments with the largest portion of requested funding for fiscal year 2017, interviewed relevant agency officials, and analyzed and compared each investment's cost estimate to best practices in the Cost Guide. This guide states that, when most or all of the practices are “fully” or “substantially” met, an estimate is considered reliable. The cost estimates that the Department of Housing and Urban Development (HUD) developed for the four selected information technology (IT) investments were unreliable and, thus, lacked a sound basis for informing the department's investment and budgetary decisions. GAO's Cost Estimating and Assessment Guide (Cost Guide) defines best practices that are associated with four characteristics of a reliable estimate: comprehensive, well-documented, accurate, and credible. However, none of the cost estimates for the selected investments exhibited all of these characteristics. Only one estimate—for the Customer Relationship Management investment—more than minimally met the best practices associated with any of the four characteristics, because it partially met the practices for a comprehensive and accurate estimate. The remaining three investments minimally met or did not meet the best practices associated with the four characteristics. For example, the Enterprise Data Warehouse estimate minimally met all four characteristics; the Enterprise Voucher Management System estimate did not meet the characteristic for being accurate and minimally met the other three characteristics; and the Federal Housing Administration Automation and Modernization estimate did not meet the characteristic for being credible, while minimally meeting the remaining characteristics (see table). The significant weaknesses in the cost estimates for the selected investments can largely be attributed to the department's lack of guidance for developing reliable cost estimates. HUD officials responsible for the selected investments stated that the department had not required the development of estimates that exhibit the four characteristics of a reliable estimate. As a result, according to these officials, cost estimating practices have been decentralized and inconsistent across the department. While HUD drafted guidance in June 2015 that was intended to conform to the best practices in GAO's Cost Guide, the department has not yet finalized the guidance because it has focused on establishing the infrastructure needed to support improved cost estimation practices.
Until HUD finalizes and ensures the implementation of guidance to improve its cost estimating practices, the department is at risk of continuing to make investment decisions based on unreliable information. To improve cost estimating practices, GAO recommends that HUD finalize and implement guidance that incorporates the best practices called for in the Cost Guide. HUD concurred with this recommendation.
In 2002, Congress passed legislation renewing the President’s ability to enter into certain trade agreements and submit implementing bills on an expedited legislative track without possibility of amendment. The bill granting this “fast track” authority, renamed trade promotion authority (TPA), passed the House by one vote amid contentious debate, with a noticeable split along party lines. Although delegation of the constitutional authority to “regulate commerce with foreign nations” dates back to 1934, and some form of fast track authority was granted by Congress to every president since 1974, the Bipartisan Trade Promotion Authority Act of 2002 restored this authority after an 8-year hiatus. Congress accompanied this grant with statutorily defined objectives for the trade negotiations and requirements that the administration consult with Congress and other stakeholders before, during, and after the negotiations. If Congress decided that the President had not satisfied his or her obligations to consult under TPA, the implementing legislation could be treated like any other bill. Congress has applied TPA procedures to every implementing bill submitted under the Trade Act, according to USTR. Additional information about the history of the consultation requirements can be found in appendix II. TPA also requires the administration to consult with private sector advisory committees. It continues the advisory committee system established under the Trade Act of 1974, which was intended to ensure that representatives from private business and other groups with a stake in trade policy could provide input before, during, and after negotiations. The system has a three-tier structure of committees to advise the President on (1) overall trade policy, (2) general policy areas, and (3) technical aspects of trade agreements. The law requires the President to consult with these committees on a continuing and timely basis. Each advisory committee must submit a report to Congress and the President on each trade agreement negotiated under TPA no later than 30 days after the President notifies Congress of his or her intent to enter into the agreement. The system comprises about 700 advisors across 28 committees broadly representative of the U.S. economy and various trade policy interests. The Trade Act of 1974 also requires USTR to provide an opportunity for private organizations or groups outside the advisory committee system to present their views on trade issues. To comply with this requirement, USTR publishes a Federal Register notice and the Trade Policy Staff Committee conducts a hearing. The public can comment on any matter relevant to the proposed agreement in response to the Federal Register notice, either in writing or at the public hearing. USTR also consults with groups outside of these mechanisms; sometimes USTR is contacted, and sometimes USTR seeks out comments. We reported on the trade advisory committee system in 2002 and found that it has made valuable contributions to U.S. trade policy and agreements. We also found, however, that consultations were not always timely or useful and that the process needed greater accountability. Furthermore, we found that committee structure and composition had not been updated to reflect changes in the U.S. economy and trade policy.
In response to these findings, USTR and the other managing agencies have taken several actions, including the installation of a secure Web site for viewing draft agreement text; reconfiguration of the committee system; introduction of a monthly teleconference of chairs; and introduction of periodic plenary sessions for the third-tier technical committees. TPA was actively used by the President. In addition to numerous FTAs, a global round of trade liberalization talks at the WTO, launched in November 2001, was subsequently notified under TPA. WTO talks made some progress, but they were not concluded by the July 1, 2007, deadline TPA set for an agreement to qualify. The President has called on Congress to renew TPA, in part to continue pursuit of WTO talks in hopes of achieving fundamental global agriculture reform and meaningful reduction in trade barriers to goods and services worldwide. Some in Congress are supportive, but others are skeptical, making an examination of recent experience under TPA timely. Meanwhile, FTA negotiations with Malaysia have continued despite the lapse in TPA. Congress must also decide whether to approve the last four FTAs concluded under TPA—with Peru, Colombia, Panama, and South Korea. Since the passage of TPA in 2002, the United States has pursued negotiations toward 17 comprehensive FTAs covering 47 countries. FTA partner countries were selected for a variety of foreign and economic policy reasons. The United States followed a strategy of competitive liberalization, which entails simultaneously pursuing bilateral, multilateral, and global trade agreements. Furthermore, the United States pursued only comprehensive FTAs, although a number of large trading partners were unwilling to negotiate on sensitive topics, such as agriculture, in FTAs. In the 5-year period that TPA was granted to the President, from 2002 to 2007, the administration pursued negotiations toward 17 FTAs with 47 countries. These 47 countries extend from North America to South America to the Pacific Rim to the Middle East. (See table 1.) Six FTAs have been approved and are in force. An additional four FTAs with four countries have been signed but not yet approved, and FTAs with Costa Rica and Oman have been signed and approved by the U.S. Congress but are not yet in force. Furthermore, an FTA with Malaysia is currently under negotiation, and negotiations for the remaining five FTAs are not yet concluded. The United States has negotiated comprehensive FTAs for a variety of foreign and economic policy reasons. Agency officials confirmed that since mid-2004, FTA partners have been judged on six criteria outlined by the National Security Council, as GAO reported in 2004. These criteria are country readiness, economic/commercial benefit, benefits to the broader trade liberalization strategy, compatibility with U.S. interests, congressional/private sector support, and U.S. government resource constraints. According to officials we interviewed, these criteria are broad and, as a result, the administration has considerable discretion in choosing potential FTA partners. Among the foreign policy considerations for selecting FTA partners are the strengthening of strategic relationships and the promotion of reform in partner countries. In addition to the first two criteria mentioned above (country readiness and economic/commercial benefit), forming regional trading blocs and replacing trade preference programs were among the economic policy factors.
Agency officials told us that establishing trading relationships with strategic friends and allies was a key factor in deciding with whom to enter into FTA negotiations. Particularly following the September 11 terrorist attacks and the onset of the Iraq war, pursuing FTAs with moderate Muslim countries became a significant policy goal. In May 2003, the President announced a Middle East Free Trade Initiative, which lays out a plan of graduated steps for Middle Eastern nations to increase trade and investment with the United States. Under this initiative, the United States has entered into FTAs with Morocco and Bahrain and has approved an FTA with Oman. This is in addition to the FTAs the United States already had with Israel and Jordan. USTR indicated in its November 2004 letter of intent to enter into an FTA with Oman, for example, that Oman, as a member of the Gulf Cooperation Council, will “continue to be an important strategic colleague on a broad array of foreign and national security issues.” The United States has sought to strengthen strategic relationships through FTAs in other regions as well. For example, the November 2003 letter of intent to enter into an FTA with Panama indicated that “an FTA will serve to strengthen not only economic ties but also political and security ones.” Another foreign policy goal in selecting FTA partners was promoting economic and political reform in partner countries. Public statements regarding the FTA with Morocco, for example, suggest that it would add momentum to political reform already under way there. USTR also stated that the Central America—Dominican Republic Free Trade Agreement (CAFTA-DR) will strengthen “free-market reforms” in Central America, adding that “the growth stimulated by trade and the openness of an agreement will help deepen democracy, the rule of law, and sustainable development.” Public documents related to the Andean FTA initiative state that an Andean FTA would “enhance our efforts to strengthen democracy and support for fundamental values in the region,” such as rule of law, sustainable development, transparency, anticorruption, and good governance. USTR also indicated that an FTA with South Korea would promote enhanced regulatory transparency in a top U.S. trade partner. Beyond assessing a country’s readiness and potential economic/commercial benefits, USTR publications and interviews with senior agency officials suggest that sequencing from previous FTAs and building toward larger regional initiatives were considerations for entering into negotiations with a number of countries. For example, as the U.S.-Chile FTA negotiations were drawing to a close, the United States announced its intent to enter into FTA negotiations with the Central American countries of Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua under CAFTA and later announced its intent to include the Dominican Republic in those negotiations. USTR also submitted a letter of intent to enter into FTA negotiations with the Andean countries of Colombia, Peru, Ecuador, and Bolivia under the Andean FTA. The individual letters of intent to enter into FTA negotiations with these 10 countries cited the complementary nature of these negotiations, which would lend momentum to concluding the Free Trade Area of the Americas (FTAA). Agency officials explained that when the United States pursues individual bilateral FTAs, one goal is to enable them to be woven into regional agreements under the mantle of broader integration.
The individual letters of intent to enter into FTA negotiations with Thailand and Malaysia also cited the Enterprise for ASEAN Initiative as a factor in the selection process, building upon the U.S.-Singapore FTA. The administration envisioned similar regional or subregional trading groups for the Middle East (the Middle East Free Trade Area) and southern Africa (the Southern African Customs Union, or SACU). (See fig. 1.) For certain developing country FTA partners, one motivation in U.S. selection was converting one-way U.S. trade preference programs into two-way reciprocal agreements. Agency officials explained that under preference programs such as the Generalized System of Preferences and the Caribbean Basin Initiative, developing countries have preferential duty-free access to the U.S. market without having to reciprocate; by entering into FTAs with them, the United States “levels the playing field” by gaining market access in these countries. Public documents related to the SACU trade negotiations, for example, noted an opportunity to replace the African Growth and Opportunity Act trade preference program with an FTA for several of the partner countries. The Andean FTA was motivated in part by a desire to replace the Andean Trade Preference Agreement, while the CAFTA-DR was motivated in part by a desire to replace a major portion of the Caribbean Basin Initiative. Furthermore, USTR officials noted that transitioning from unilateral trade preferences to reciprocal trade agreements would deepen existing regional integration. USTR identified section 202(b) of the United States-Caribbean Basin Trade Partnership Act as an example of how this evolution was also a goal that Congress shared. As WTO negotiations repeatedly stalled in the face of wide substantive differences, particularly over agriculture, the United States sought to remain active in pursuing trade agreements at other levels, as it had even prior to TPA. As part of its mission to play the leading role in developing and coordinating U.S. trade policy, USTR pursued trade agreements throughout the 1990s not only at the global level through the WTO, but also at the bilateral and multilateral levels, such as through the U.S.-Jordan FTA and the North American Free Trade Agreement (NAFTA). Since TPA’s passage, USTR officials have stressed that they would simultaneously pursue bilateral, multilateral, and global trade agreements under a strategy referred to as “competitive liberalization,” or more recently as “complementary liberalization.” This competitive liberalization strategy linked trade policy to foreign policy, security policy, and commercial policy goals. Although still committed to liberalization on a global front, the administration saw working in parallel with the WTO framework as an opportunity to keep liberalization moving forward despite setbacks at the global level. Competitive liberalization had the dual goals of providing momentum for global trade liberalization and providing an alternative if global trade talks failed to progress. As former U.S. Trade Representative Robert Zoellick explained in 2002, “we will not passively accept a veto over America’s drive to open markets. We want to encourage reformers who favor free trade.
If others do not want to move forward, the United States will move ahead with those who do.” Agency officials say that, due to its importance to the global trading system and the potential for more significant and broad-based economic gains, the successful completion of global trade agreements such as the WTO Doha Round is the administration’s ultimate goal. FTAs were intended to serve as a stepping stone to that goal, since they can provide a substantial demonstration effect. U.S. Trade Representative Susan Schwab said in her May 2006 Senate confirmation hearing that pursuing FTAs helps “to establish the breadth and scope of potential multilateral agreements in years to come by setting precedents and by demonstrating the real benefits of free and fair trade.” For example, according to administration officials, signing NAFTA contributed to moving the last (Uruguay) round of global trade talks, which created the WTO, to conclusion. Additionally, FTAs were seen as a tool to strengthen relationships with trading partners similarly seeking progress in global liberalization. In the letter announcing its intent to negotiate an FTA with Australia, for example, USTR stated, “we believe that an FTA would further unite and strengthen the alliance of countries leading the effort toward global trade liberalization.” The second goal of competitive liberalization was to provide an alternative venue for pursuing trade liberalization as WTO talks lagged. However, whereas competitive liberalization sought to pressure other countries to agree to tariff and subsidy cuts in the WTO, complementary liberalization sees the simultaneous pursuit of FTAs and WTO negotiations as a mutually reinforcing effort. Former U.S. Trade Representative Robert Portman explained in 2006 that, “where we have a free trade agreement, we find we have…the ability to have a better relationship on the multilateral issues it’s relatively easy on the global stage…to find some solutions.” Closely tied to the strategy of competitive/complementary liberalization is the strategy of pursuing only highly comprehensive “gold standard” bilateral and regional FTAs. Such agreements have a number of absolute requirements, based on the model USTR seeks to use. NAFTA was the original “model,” although requirements have evolved with time and with different regions. USTR insists, for example, that partners accept the inclusion of agriculture, as well as a “negative list” approach to services (under which all service sectors are covered unless explicitly excluded), because it believes this will provide greater liberalization and lessen impediments to securing market access. Agency officials did say there is some room to change specific language depending on a country’s individual needs, such as the level of market access proposed and the timetable for phasing down barriers. However, officials also noted that taking products, sectors, or issues off the table, particularly ones such as intellectual property rights that are considered to provide a U.S. competitive advantage, generally precludes or creates an impasse in negotiations. Other countries that negotiate FTAs frequently exclude sensitive industries or issues. Some trade experts argued that USTR’s pursuit of comprehensive agreements limits potential FTA partners, since a number of larger economies are unwilling to enter into such comprehensive negotiations.
Administration officials recognized this and cited the EU, Switzerland, and Japan as examples of major trading partners with which an FTA could have significant commercial value but where the trading partner appears unwilling to assume obligations consistent with the objectives set out in TPA. USTR reports that it paused negotiations pursued under TPA with other large countries or subregions, such as the FTAA and SACU, in part for similar reasons. At the same time, some partners were not considered ready for FTAs because they either were not WTO members or had only recently acceded. Agency officials told us that a number of interrelated factors influenced their decision to pursue exclusively comprehensive trade agreements: Legislative requirements–Agency officials told us that TPA legislation played a large role in the decision to pursue only comprehensive FTAs. Since, under TPA, each agreement must make progress in meeting the applicable negotiating objectives prescribed by Congress, USTR only pursues FTAs in which the negotiating objectives are translated into 16 standard chapter headings, and the provisions require partner countries to pursue a number of nontariff reforms. These include transparency in government procurement, protection from discrimination for investors, and liberalization of financial and other services. Agency officials told us that USTR has some discretion in how to pursue these objectives, but since the objectives are statutorily mandated, their discretion starts at a fairly high bar. Foreign and economic policy goals–The administration has said that pursuing comprehensive FTAs links trade policy to foreign policy and security policy goals. According to USTR, comprehensive FTAs include a number of provisions linking the trade agreement to other goals, such as encouraging reform and openness, strengthening partners’ regulatory environments, and establishing a framework for promoting democracy. In addition, agency officials and trade experts stressed that if the United States pursued FTAs with “sweetheart exemptions,” it would actually be undermining the international trading system in violation of WTO rules and regulations. Agency officials also questioned whether pursuing noncomprehensive FTAs would lead to noticeable commercial gains, since trade barriers with large trading partners usually remain only in sensitive industries. Private sector input–Since all trade agreements must be approved by Congress, USTR officials told us they only negotiate agreements that they think will receive broad domestic support. As private sector representatives have identified certain “deal breakers” that must be included in order to gain their support, USTR officials always include these topics in the FTAs they negotiate. These topics include a negative list approach to services and the inclusion of intellectual property protection provisions. Negotiating strategy–Agency officials and congressional staff involved in trade issues also told us that because the United States has set the precedent of engaging only in comprehensive FTAs, insisting on comprehensive terms in future FTAs reinforces U.S. credibility. Partner countries also have a better sense of what the United States expects.
A trade expert we spoke with added that, due to the asymmetric bargaining strength of the United States compared with most of its negotiating partners, given the relative sizes of their economies, USTR likely has more leverage in proposing the baseline agreement for the negotiation. On the other hand, the United States has had less success in insisting on such requirements with some large prospective partners, such as Brazil, Switzerland, and Japan. Not everyone involved in trade negotiations, however, believes that exclusively pursuing comprehensive FTAs is in the best interest of the United States. We heard from both private sector representatives and former congressional staff that strict insistence on comprehensive FTAs may disadvantage the United States compared with other countries that engage in FTAs more liberally. In addition, they told us that “one size does not fit all” and that developing countries need help to develop before they trade with the United States. Agency officials told us, however, that due to the factors listed above, they remain convinced that pursuing comprehensive FTAs is the best policy for the United States. Furthermore, they pointed out that a prospective FTA partner’s readiness to undertake obligations that would meet TPA objectives and U.S. interests is evaluated in the selection choice. If a country or group of countries is not ready, the United States uses other mechanisms, such as Trade and Investment Framework Agreements, as building blocks, including capacity building. Trade with countries for which FTAs were pursued under TPA accounted for about 16 percent of U.S. trade in 2006 and about 16 percent of U.S. foreign direct investment in 2005. FTAs seek to expand opportunities for U.S. exporters in foreign markets, while solidifying the trade and investment relationship with these trade partners. Of the remaining share of U.S. trade, 27 percent was with countries with which the United States had an FTA prior to TPA (e.g., Canada and Mexico) and 56 percent was with countries not pursued under TPA, including the EU, Japan, and China (percentages do not sum to 100 because of rounding). Of the approximately $3.4 trillion in U.S. trade in 2006, FTAs pursued under TPA accounted for about $558 billion, or 16 percent of the total. These figures include both exports and imports of goods and services. About half of this trade (8 percent of the total) was accounted for by agreements in force or concluded; the remainder was with partners with whom the United States has not yet concluded an agreement. Figure 2 shows the breakdown of total U.S. trade (exports plus imports of goods and services) across groups of trade partners. FTAs pursued under TPA accounted for a somewhat larger share of total U.S. exports (19 percent) than of U.S. imports (15 percent). This pattern is reversed with non-FTA countries, which accounted for 52 percent of U.S. exports but 59 percent of U.S. imports. Table 2 shows the share of overall U.S. trade, exports, and imports by status of FTA negotiations under TPA. In addition, countries with which the United States pursued FTAs accounted for about 16 percent of U.S. foreign direct investment in 2005 (see app. III for more information on U.S. foreign direct investment). Countries with which the United States has pursued FTAs under TPA are a diverse group. Table 3 shows the countries pursued, by the status of the FTA negotiations. Concluded agreements already in force include countries in Asia and the Pacific (Australia and Singapore), the Middle East and North Africa (Bahrain and Morocco), and Latin America (Chile and the CAFTA-DR countries).
The concluded agreement with South Korea, for which implementing legislation has not yet been submitted to Congress, would be the single largest individual trade partner of those pursued under TPA (about 3 percent of total U.S. trade). However, the FTAA, for which negotiations are at an impasse, would have encompassed the largest economic area, since it includes Brazil and Argentina, as well as existing FTA partners in NAFTA (Canada and Mexico), CAFTA-DR, Chile, and others in the Western Hemisphere.

The FTAs pursued under TPA seek a high level of liberalization. As noted previously, the United States has sought elimination of substantially all trade barriers under its FTAs in order to maximize the overall economic benefits of the agreements. While reduction or elimination of trade barriers through FTAs has been estimated to create an overall net economic benefit for the United States and its FTA partners, most economic studies find the gains for the United States to be relatively small compared with the overall U.S. economy. Assessments by ITC of the economy-wide and sectoral effects of actual, completed FTAs also indicate positive but generally small effects on the U.S. economy and trade overall. (The U.S. FTA with South Korea is predicted to have modest effects.) However, with the exception of Singapore, given the FTA partners' generally higher trade barriers, U.S. export gains are predicted to be larger than import increases for each FTA the ITC has assessed. For South Korea, U.S. exports are predicted to rise by $9.7 billion to $10.9 billion, while U.S. imports from South Korea are predicted to rise by $6.4 billion to $6.9 billion.

While FTAs require the United States to lower its trade barriers, several FTA partners pursued under TPA already had special access to the U.S. market through U.S. trade preference programs. For example, CAFTA-DR economies had preferential access to the U.S. market through the Generalized System of Preferences and the Caribbean Basin Initiative (including the Caribbean Basin Trade Promotion Act, which provided additional access). However, FTAs provide superior market access for several reasons. First, product coverage under FTAs is more complete. About 91 percent of products in the U.S. tariff schedule (for goods) are either eligible for preferential access (54 percent) or are already duty-free for most countries (37 percent). The remaining 9 percent of products are still dutiable even for countries eligible for preference programs; FTAs eliminate nearly all U.S. duties on these remaining products. Second, FTAs are bilateral agreements that provide trade partners with permanent access to the U.S. market. Preference programs are unilateral programs that need reauthorization, and lapses in authorization have created uncertainty in the past for both foreign exporters and investors. Finally, since preference programs are unilateral, U.S. exporters do not receive preferential duty-free access to foreign beneficiary markets; FTAs address this disparity.

FTAs also help U.S. exports maintain a competitive advantage or counteract the advantages of third-country competitors that may already have better access to foreign markets through their own FTAs. In cases in which competitors do not have an FTA with U.S. FTA partners, U.S. exports gain an advantage over exports from competitors. The edge varies by country and product and depends on the restrictiveness of the tariff and nontariff barriers in our FTA partners' economies.
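The product-coverage comparison above reduces to simple arithmetic. A minimal sketch, using the report's percentages for the U.S. tariff schedule for goods:

# Shares of the U.S. tariff schedule (goods) by duty treatment for
# preference-eligible countries, per the figures cited above.
preferential = 0.54   # eligible for preferential access under preference programs
already_free = 0.37   # already duty-free for most countries
covered = preferential + already_free

print(f"Covered by preferences or already duty-free: {covered:.0%}")       # 91%
print(f"Still dutiable even with preference programs: {1 - covered:.0%}")  # 9%
# FTAs eliminate nearly all U.S. duties on that remaining 9 percent of products.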
For example, in CAFTA-DR countries, the simple average tariff rate across all products ranged from 5.6 to 8.5 percent in 2006, with tariffs on agricultural products ranging from 9.7 to 13.1 percent. Non-FTA countries must still pay these rates on their exports. By comparison, U.S. "most favored nation" tariff rates, which apply to all U.S. trade partners except the two (North Korea and Cuba) that do not qualify for them, were 3.5 percent overall and 5.3 percent for agricultural products. In some countries, average most favored nation tariff rates are markedly higher on agricultural products than on nonagricultural products. For example, according to the WTO, South Korea's overall average tariff rate on all products was 12.1 percent, but its average rate on agricultural products was 47.8 percent. Table 8 in appendix III shows the simple average tariff rates (non-FTA rates) across U.S. FTA partners to provide an indication of the tariff benefits provided by FTAs.

For countries that have FTAs with trade partners besides the United States, FTAs help restore the competitiveness of U.S. exports by providing comparable access. For example, Chile has FTAs with Canada, Mexico, and the EU, as well as with the United States. If the United States did not have an FTA with Chile, then U.S. exporters would be at a disadvantage relative to exporters from Canada, Mexico, and the EU. On the other hand, since FTA partners are free to enter into additional agreements with other countries, any advantage gained for U.S. exporters may be temporary.

Non-FTA trade partners accounted for over half (56 percent) of U.S. trade in 2006. The remaining share of U.S. trade was accounted for by countries pursued under TPA (16 percent) and countries with which the United States already had an existing FTA (27 percent). Some of the non-FTA trade partners accounted for relatively large shares of U.S. trade. Table 4 shows the top 20 markets for U.S. exports and top 20 suppliers of U.S. imports among non-FTA trade partners. The largest market and supplier—the EU with its current 27 member countries—accounted for approximately 21 percent of U.S. exports and nearly 18 percent of U.S. imports. Japan, China, and Taiwan were the next three largest markets and suppliers for the United States, with China the second largest non-FTA supplier after the EU.

There are several reasons why the United States has chosen not to pursue some of the largest trade partners for FTA negotiations. As discussed previously, the United States seeks to include agricultural liberalization in its FTA agreements. This is a sensitive issue with the EU that is also being dealt with at the WTO and has made prospects for a U.S.-EU FTA less likely. Since trade barriers on nonagricultural products between the United States and the EU are already very low, an FTA that did not include agriculture would have less impact. Similarly, agriculture issues are sensitive with Japan and Switzerland. However, the United States did pursue an FTA with South Korea, which also had sensitive agricultural issues but was willing to address them within the context of an FTA. Although agriculture is also a sensitive issue with China, that country recently acceded to the WTO (December 2001), is still implementing those commitments, and has been in transition to a more market-based economy. Similarly, Taiwan has only recently acceded to the WTO. After the top few non-FTA trade partners, remaining trade partners each account for about 1 percent or less of U.S. trade.
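To make the tariff edge discussed above concrete, the following is a hypothetical worked example using the simple average agricultural tariff rates cited for CAFTA-DR markets; the shipment value is invented for illustration, and actual duties depend on the specific tariff line:

# Duty owed on a hypothetical $1 million agricultural shipment into a
# CAFTA-DR market, at the simple average agricultural tariff rates above.
shipment_value = 1_000_000  # U.S. dollars (illustrative)

for label, rate in (("low-end average (9.7%)", 0.097),
                    ("high-end average (13.1%)", 0.131)):
    non_fta_duty = shipment_value * rate
    # Under the FTA, nearly all duties on covered products are eliminated.
    print(f"{label}: non-FTA exporter pays ${non_fta_duty:,.0f}; "
          f"U.S. exporter under the FTA pays about $0")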
Since most of the smaller non-FTA trade partners are WTO members, successful conclusion of the WTO Doha Round would still provide market liberalization. However, a Doha agreement would be less likely to completely eliminate trade barriers; FTAs provide much deeper liberalization for individual countries by eliminating trade barriers between the United States and its FTA partners. Nevertheless, various studies conclude that a Doha agreement, even though unlikely to eliminate all trade barriers, would still have a much larger impact on global—and overall U.S.—trade than eliminating all trade barriers with small non-FTA partners.

Comparing countries pursued under TPA with those not pursued shows some differences in the U.S. trade and investment relationship between the two groups. Overall, the United States tends to (1) maintain more balanced trade with TPA countries, (2) export relatively more manufactured goods to them (compared with services and agriculture), and (3) experience relatively faster investment growth with TPA countries, particularly those with FTAs in effect. While these differences do not necessarily indicate the reasons countries were chosen to be pursued, they provide useful context for the overall U.S. economic relationships with these countries as those relationships deepen. Appendix IV discusses these differences in more detail.

Although USTR consulted frequently with Congress, some congressional staff said that both the nature of the consultations and issues such as their timing limited congressional input into FTAs. TPA requires consultations with Congress before, during, and after FTA negotiations, and records indicate consultations were extensive, particularly with the primary trade committees. Most of the congressional staff we interviewed viewed the consultation process as generally a good conduit for information flow from USTR. While slightly less than half of the staff we interviewed were satisfied with the quality of consultations, slightly more than half believed that the consultations did not provide the opportunity for meaningful input or influence on trade negotiations. An important element of this perception for many of these staff, particularly staff not on the trade or agriculture committees, was their view that the timing of the consultation meetings did not give them sufficient time to provide meaningful input to the negotiations. Several staff also cited situations in which USTR had not fully informed them of important changes in the draft text under negotiation. Process issues of concern included the role and function of COG, selection of FTA partners, use of mock markup and the lack of a mock conference, the need to get Congress to focus on trade agreements earlier in the process, the need for additional technical information, access to USTR's secure Web site, and the importance of congressional staff working on FTAs obtaining security clearances to facilitate the consultation process.

In addition to requiring the President to consult with Congress before, during, and after trade agreement negotiations, TPA also established the Congressional Oversight Group, known as COG. COG is to be consulted at key points in trade negotiations, and its members are accredited as official advisors to the U.S. negotiating delegation.
COG was designed to consult with and provide advice to USTR regarding the formulation of specific objectives, negotiating strategies and positions, the development of trade agreements, and compliance with and enforcement of negotiated commitments. Its meetings were with the U.S. Trade Representative. COG's members were the Chairs and Ranking Minority Members of the Senate Finance and House Ways and Means Committees, plus two majority and one minority Member from each. In addition, membership was extended to the Chair and Ranking Minority Member of each House and Senate committee that had jurisdiction over issues affected by the negotiations, including agriculture and fisheries, which were specifically designated by TPA for consultations. TPA also contained a detailed time line for required consultations, as shown in figure 3.

Before beginning trade negotiations, the President must:

notify Congress in writing of an intention to commence negotiations at least 90 days before initiating negotiations;

consult, before and after the submission of the notice, with the House Ways and Means Committee, the Senate Finance Committee, other relevant committees, and COG; and

conduct consultations with Congress regarding agriculture, import-sensitive agricultural products, the fishing industry, and textiles.

During negotiations, or before entering into (signing) trade agreements, the President must:

consult with the House Ways and Means and Senate Finance Committees, other committees with jurisdiction over legislation involving matters affected by the trade agreement, and COG with respect to the nature of the agreement, how it achieves the congressional objectives set forth in TPA, and the effect the agreement may have on existing laws;

report to the House Ways and Means and Senate Finance Committees on any changes to U.S. trade remedy laws that an agreement would require, at least 180 days before entering into the agreement;

notify Congress of intent to enter into the agreement at least 90 days before doing so;

submit private sector advisory committee reports to Congress within 30 days of notifying Congress of intent to enter into an agreement; and

provide the ITC, at least 90 days before entering into the agreement, with the details of the agreement and request that ITC conduct an assessment of the likely economic impact of the agreement; the ITC must then present this assessment to the President and Congress no later than 90 days after the President enters into the agreement.

There are also consultation requirements for the period between when the President signs the agreement and when the implementing legislation is voted upon in Congress. In order for the agreement to enter into force, the President must do the following during this period:

submit to Congress, within 60 days after entering into the agreement, a description of the changes to existing laws that would be required to bring the United States into compliance with the agreement; and

submit to Congress the final legal text of the agreement, a draft of an implementing bill, a statement of administrative action proposed to implement the trade agreement, and other supporting information, including a statement describing how the agreement makes progress in achieving the goals set by Congress in TPA and a statement on how the agreement serves U.S. commercial interests; there is no deadline for this step.
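The statutory clock around the signing date can be summarized compactly. The sketch below encodes the deadlines described above as day offsets relative to a hypothetical signing date (the date itself is invented; negative offsets mean "at least this many days before signing," positive offsets "no later than this many days after"):

from datetime import date, timedelta

# TPA deadlines keyed to the date the President enters into (signs) the FTA.
signing = date(2006, 6, 30)  # hypothetical signing date

deadlines = [
    (-180, "Report to trade committees on changes to U.S. trade remedy laws"),
    (-90,  "Notify Congress of intent to enter into the agreement"),
    (-90,  "Provide agreement details to ITC and request an assessment"),
    (-60,  "Advisory committee reports due (30 days after notice of intent)"),
    (60,   "Submit description of required changes to existing U.S. law"),
    (90,   "ITC presents its assessment to the President and Congress"),
]

for offset, step in sorted(deadlines):
    print(f"{signing + timedelta(days=offset)}: {step}")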
If Congress believes that the President has failed to meet these consultation requirements, it may make the implementing bill ineligible for consideration under TPA procedures by adopting a procedural disapproval resolution in both houses. In addition, Congress limits trade promotion authority by making it a time-limited authority. The most recent grant of TPA expired July 1, 2007; it could have expired 2 years earlier had Congress passed a resolution that was introduced to disapprove of its extension. Finally, TPA includes language stating that TPA procedures are congressional rules and that each house retains the right to change those rules. In combination with the need to secure congressional approval of each agreement, these conditions all help ensure Congress's influence over agreements.

USTR held frequent consultation meetings with Congress on FTA-related issues, as well as other topics. According to a copy of USTR's consultation log, USTR consulted with Congress 1,605 times on FTA-related issues between the date TPA was signed into law on August 6, 2002, and the cutoff date for our analysis, April 20, 2007. Of these consultations, 1,289 were related to specific FTAs, and 316 were related to general FTA issues, such as investment provisions or agriculture issues. Consultations were primarily in-person meetings with the trade and agriculture committees but also included conference calls, particularly with the other committees of jurisdiction. Most USTR consultations (83 percent) were with staff of congressional committees with jurisdiction over trade issues. USTR met 459 times with the Senate Finance Committee and 454 times with the House Ways and Means Committee. (See fig. 4.) It met 153 times with the House Agriculture Committee and 152 times with the Senate Agriculture, Nutrition, and Forestry Committee. Thus, about two-thirds of USTR's consultations were with these four committees. USTR also met with the other committees that had jurisdiction over the following:

Fisheries–Senate Commerce, Science and Transportation and House Resources;

Intellectual property, competition, and immigration–House and Senate Judiciary;

Financial services–House Financial Services and Senate Banking, Housing, and Urban Affairs;

Telecommunications–House Energy and Commerce and Senate Commerce, Science and Transportation; and

Government procurement–House Oversight and Government Reform and Senate Homeland Security and Governmental Affairs.

In addition to these meetings, 163, or 9 percent, of the meetings were with individual Senators and Representatives, and 3 percent were with staff of individual Senators and Representatives. Another 2 percent of meetings were with other committees, caucuses, or congressional groups. COG met as a body nine times, constituting less than 1 percent of meetings. USTR also met with the Senate Foreign Relations Committee seven times and the House International Relations Committee four times on matters related to FTAs. A USTR official told us that the majority of meetings were open to both majority and minority committee staff, as well as to legislative assistants of Members of Congress on the committees. This is consistent with GAO analysis of the USTR logs, which showed that 148, or 11 percent, of the 1,329 meetings with staff of committees of jurisdiction were with majority staff only, and 6 percent were with minority staff only. This was also confirmed in our interviews with congressional staff.
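These tallies come from GAO's analysis of USTR's consultation log. As a rough illustration of the computation involved (the log's actual format is not reproduced here, and the sample records below are invented), such a tally might look like this:

from collections import Counter

# Invented sample records standing in for USTR's 1,605 log entries
# (Aug. 6, 2002, through Apr. 20, 2007).
log = [
    {"committee": "Senate Finance", "topic": "CAFTA-DR"},
    {"committee": "House Ways and Means", "topic": "Australia FTA"},
    {"committee": "House Agriculture", "topic": "general FTA issues"},
]

by_committee = Counter(rec["committee"] for rec in log)
fta_specific = sum(1 for rec in log if rec["topic"] != "general FTA issues")

print(by_committee.most_common())
print(f"FTA-specific consultations: {fta_specific} of {len(log)}")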
Current and former USTR officials told us that, for each FTA, they met with Congress throughout the process of negotiating and implementing the agreement. This was generally confirmed in our interviews with congressional staff. These officials said they met with Congress before negotiations began, before each negotiating round (with more congressional meetings in the later rounds of each FTA), before signing agreements, during congressional consideration of the FTA implementing legislation, and during FTA implementation. They said that they provided the classified negotiating text to the staff with security clearances on the trade and agriculture committees in advance of each round and discussed it at the consultation meetings. These officials said there was ample opportunity for committee staff to provide input during the negotiations and that they valued the insights they gained as to what was important to the committees. USTR officials said that they had never turned down a request for a briefing and believed that they had fully consulted with Congress. One former negotiator said that they could not conceive of a way for USTR to do more consultations than it does now and that consultations were both extensive and substantive.

As required by TPA, USTR developed guidelines for COG in consultation with Congress that established notice, consultation, and reporting requirements for agreements negotiated under TPA. USTR officials said that in developing these guidelines they consulted with the Senate Finance and House Ways and Means Committees and obtained their input. The guidelines provide that, in the course of negotiations, USTR will consult "closely and on a timely basis" with COG and all House and Senate committees with jurisdiction over laws that would be affected by an agreement. To verify that consultations occurred before, during, and after negotiations, we analyzed consultation patterns for two agreements. Figures 5 and 6 show the number of USTR consultations with Congress on CAFTA-DR and on the Australia FTA over time in relation to key points in the negotiation and implementation process. We also analyzed the total number of congressional consultations on FTA-specific and FTA-related topics and found that they varied over time. (See fig. 7.) There were more consultations when more FTAs were under negotiation at the same time. There were also more consultations in the 1½ years after TPA passed in 2002, when model text for each of the 16 standard FTA chapters was being developed.

Current and former congressional committee staff on key committees with jurisdiction over matters covered by FTAs provided us with their views on a range of issues related to the FTA consultations. These issues included the nature and extent of consultation meetings, as well as how well the meetings met staff expectations and needs. From August 2002 to April 2007, the trade committees (Senate Finance and House Ways and Means) generally had weekly consultation meetings with USTR officials that often lasted an hour to an hour and a half. Sometimes two or even three such meetings were held back to back on the various FTAs being negotiated. Typically, the USTR lead negotiator and members of the FTA negotiating team would meet in person with the committee. Occasionally, the USTR staff were joined by staff from other agencies, such as the Departments of Agriculture or Commerce.
Generally, these meetings were bipartisan, with both majority and minority professional committee staff invited, as well as the responsible legislative assistants of the Senators or Representatives who were members of the committee. Trade committee staff said that most consultation meetings were held in person. Some were conducted through a conference call, which was usually shorter. Generally, there were more conference calls at the end of negotiations, when the USTR negotiators were more pressed for time or were overseas at negotiating sessions and calling back to update the committee staff on progress. Some of the trade committee staff we interviewed commented that in-person meetings were much more useful, although they understood the need for conference calls.

The trade committee staff we spoke with said that consultations generally took place before and after each negotiating round. Before each round, USTR provided the confidential text that it was going to table in the negotiations with the FTA partner country. Staff with security clearances had access to this text; staff without clearances received more general information. After each negotiating round, USTR updated congressional staff on the issues that had been raised, the progress made, and what USTR thought the next round would bring. According to a committee staff person, the agriculture committees had consultation meetings with USTR approximately every 2 weeks, with the Department of Agriculture generally accompanying the USTR negotiators. Otherwise, their descriptions of the consultation meetings were mostly similar to those of the trade committee staff.

In contrast, the other committees of jurisdiction generally had a much more limited experience with consultation meetings than the trade and agriculture committees, which are the main committees of jurisdiction on FTA issues. Generally, the issues involved in the FTA negotiations were not priority issues for these committees, and many of their staff had a much more limited understanding of the proceedings. Most did not have clearances and received more general descriptions of provisions that would be negotiated, since they were not cleared to receive actual negotiating text. The number of consultation meetings for these committees was substantially smaller than for the trade and agriculture committees and varied from once a month to once for each FTA.

Almost all of the committee staff we interviewed said that USTR provided high-quality information that gave them insight into the progress of the negotiations. In this respect, the consultations met these congressional staff's expectations for information related to the FTA negotiations. These staff felt that the briefings were very well done. They also praised USTR's willingness to answer questions and follow up on particular issues of interest. The general view was that USTR was very responsive in answering questions and providing follow-up information.

In terms of satisfaction that the consultation meetings provided an opportunity for input or influence on the trade negotiations, however, the committee staff we interviewed were fairly evenly divided. Slightly less than half of the congressional staff we interviewed felt that the consultation meetings had met their expectations in this respect as well. They were satisfied that they had been fully briefed and that the USTR negotiators had listened to their views.
They indicated that they knew that USTR could not always obtain the results their committee or their Member of Congress wanted, but they felt that their views had been taken into consideration. However, slightly more than half of the congressional committee staff with whom we spoke felt that they did not have any real input or influence on the trade negotiations. For these staff, USTR's consultation meetings had not met their expectations because the meetings had not provided the two-way exchange of information that the staff considered a true consultation. One committee staff person appeared to reflect the views of these staff in characterizing the consultations as a good conduit for information flow from USTR, but not as a good forum for working together and developing policy jointly. Others characterized the meetings as helping them feel well briefed, but not consulted. Among these staff, several said they felt that USTR was "checking the box" in its meetings with them. At the same time, among the staff who indicated that the consultations were more of a one-way briefing than a two-way consultation, several were on the other committees of jurisdiction and said they did not have expectations of more. They said that they were satisfied with receiving briefings because this was not a priority issue for them or because they did not expect to influence the negotiations. Overall, the degree to which the committee staff we interviewed felt that they had input or influence on trade negotiations varied across parties. In particular, Republican staff (whose party was in the majority in Congress for nearly all of the TPA period) generally had more positive views about their input and influence than Democratic staff.

We also found mixed views among the staff we interviewed on whether the timing of the consultations allowed sufficient time for staff to provide meaningful input. Most, but not all, of the staff of the trade and agriculture committees said the timeliness of consultations was good. However, staff from the other committees of jurisdiction often said that the consultations were not timely and cited this as a reason they felt briefed rather than consulted. They said that they generally were not briefed and given information until the last business day before the negotiators left for the next round. This did not give them enough time to fully consider the information, consult with their committee or Member to develop a response, and give feedback that USTR would have time to consider. These staff felt strongly that one way to achieve more meaningful congressional input would be to allow more time for feedback by holding earlier consultations and by providing text or other information in a more timely manner. In addition, staff of one of the primary committees of jurisdiction complained of last-minute consultations on some of the more controversial issues. While they understood that the interagency process took time and that USTR was moving as quickly as possible, they felt that if congressional consultation was meant to be meaningful, USTR could either build in the time needed for congressional consultation or delay tabling controversial text at the next negotiating round to allow time for congressional input.

Among the committee staff who had expressed satisfaction with the consultation meetings, several noted that the style of the briefer was important. In some cases, the briefers tended to keep the briefing short and let committee staff ask questions.
The staff we spoke with said that if they asked a question, the briefers would answer it fully. But if staffers did not know what to ask, they were at a disadvantage in obtaining pertinent information. They said that most staff depended on the briefers to let them know about issues of concern, and this was very important to them. It was much more helpful when the briefers provided context and alerted them to any changes in the text or any areas of concern developing in the negotiations.

Several other committee staff, who were dissatisfied with the consultations, expressed a much more negative view about briefers' willingness to share information. While they agreed that the information USTR provided was generally of good quality, they said there were instances when, in their opinion, USTR deliberately did not offer information on changes to the negotiated text that would be of concern to staffers unless they asked specific questions, which they often did not know to ask. One committee staff person said that this had been the case, for example, with the U.S.-Korea FTA, in which significant changes had been made to the investment chapter. Although the committee staff had received the amended text, USTR did not mention that changes had been made to a sensitive provision in the Expropriation Annex, which the committee staff said was extremely controversial—to the point that the language in the text had been carefully worked out in the 2001-2002 time period and then never touched again. The staff person said that this text was considered to be "set in stone," and any change to it clearly merited mention by USTR. With the press of business during consideration of the U.S.-Korea FTA, the committee staff had not realized that the provision had been changed, and they did not learn about it until they were alerted by the private sector.

The committee staff who were dissatisfied with the consultations also said that there had been instances when it appeared USTR had withheld information. For example, several committee staff mentioned that a controversy related to provisions on Australia's pharmaceutical benefits scheme in the Australia FTA resulted from USTR withholding information. Again, committee staff found out about the controversial provisions from the private sector when the text was made public. In another example, some committee staff said that USTR had not adequately briefed Judiciary Committee staff on the H-1B visa issue with the Chile and Singapore FTAs. While the staff who commented on this disagreed as to whether USTR had adequately briefed the staff or withheld critical information, a lack of clarity in the consultations did leave the Judiciary Committee highly upset about the issue. (As a result of the Judiciary Committee's views, USTR significantly modified its objectives regarding immigration. Subsequent agreements have either included a side letter stating that the agreement has no effect on U.S. immigration law or policy or, in more recent agreements, included this type of provision in the text of the agreement itself.) These staff felt strongly that in order for the consultation process to work, Members of Congress and committee staff need to know that USTR will always make a good faith effort to tell them when substantive changes to the model text have been made in the negotiations. In FTAs, the specific details that are negotiated are critical to the outcome.
Congressional staff also expressed concerns about the consultation process, including the usefulness of COG, the congressional role in FTA partner selection, the role of mock markup, the importance of earlier congressional focus on FTA negotiations, the need for greater access to technical information, and problems with access to USTR's secure Web site. Most of these issues concerned internal congressional matters more than USTR. In addition, USTR stressed the importance of congressional staff working on FTAs obtaining security clearances to facilitate the consultation process.

COG was a new mechanism under TPA intended to draw Members of Congress into the consultation process, particularly members from nontrade committees, and to provide them with a private and confidential opportunity to have a consultative and advisory role in trade policy, according to a committee staff person familiar with its creation. The staff person added that COG was also meant to bring greater transparency and inclusiveness to the trade policy consultation process. After it was launched in September 2002, COG was convened only nine times before TPA lapsed in July 2007, according to the USTR consultation log.

COG's record drew mixed reviews. Some trade committee staff had a positive view of COG, saying that it had been a useful forum for input on FTAs, including FTA selection, or that it was worthwhile because it had provided a mechanism for transparency. However, most trade and agriculture committee staff said it had been of limited usefulness and had not functioned well. These staff said that COG was not well attended, particularly after the first few years. While the trade committee members continued to attend regularly, few others did. Some trade committee staff said that the separate committee executive sessions with their Members were more useful than COG. Most staff outside of the trade and agriculture committees with whom we spoke were unfamiliar with COG or unaware it existed; those staff who were familiar with COG did not find it useful. USTR officials and committee staff noted that it was difficult to schedule meetings around the busy schedules of Members of Congress. One committee staff person said that it had been difficult to schedule attendance by the Member because of short notice for the COG meetings, pointing out that it would be helpful if COG meetings were put on a regular schedule. Two committee staff said a limitation of COG was the requirement that staff could attend only with their Member, so they could not cover meetings the Member could not attend. Another committee staff person said that COG meetings should not be scheduled solely at the discretion of the majority staff but also by the minority, in order to protect minority rights. Several committee staff described the COG meetings as formalities, particularly as time went on. One staff person for a Member on a trade committee, but not on COG, said that they had resented being excluded from this trade policy-making forum.

Most congressional staff we interviewed who had a view on this issue felt that their committee did not have any meaningful input into the selection of FTA partner countries. However, there was substantially more awareness of and concern about this issue among the trade and agriculture committee staff than among staff of other committees of jurisdiction.
Among the staff concerned about partner selection, the primary concern was that so many smaller trading partners were being selected for FTA negotiations, rather than larger trading partners with greater commercial and economic significance. One committee staff person commented that, increasingly, every congressional vote for an FTA was a difficult vote that involved using up significant political capital. While Members supporting free trade had no problem in principle with negotiating FTAs with smaller countries for foreign policy or other reasons, if Members were going to expend significant political capital, they wanted the agreement to at least be economically and commercially beneficial. Another committee staff person said that the selection of FTA partners, and the dialogue about it with Congress, should be more transparent, and that the reasoning behind the choices of FTA partners and the complicating factors should be openly discussed. Some staff said that in TPA the role of input into FTA partner selection had been given to COG rather than to committee staff. Although this role was informal, some committee staff and USTR officials said that USTR took COG's advice on selection seriously and that some potential partners supported by COG had been pursued by USTR. They cited the U.S.-Korea FTA as an example.

A few committee staff favored restoring the gatekeeper provision, which was part of prior fast track legislation but was dropped when TPA was passed in 2002. The gatekeeper provision had required the President to notify Congress and give it an opportunity to disapprove the launching of negotiations with a particular partner. These staff felt that restoring it might be beneficial in terms of potentially generating greater buy-in to the FTAs selected for negotiation. Generally, only trade committee staff were aware of the gatekeeper provision. Those opposed did not see any value in it, given COG's role in discussing potential FTA partners. The former USTR negotiators with whom we discussed this issue also opposed it. They were particularly concerned about the potential effects of any requirement for an affirmative vote to launch FTA negotiations with a trade partner country, because it would mean that Congress would have to vote twice on each FTA, and it would force a vote before anyone knew what the actual benefits of the FTA would be.

Most trade and agriculture staff we interviewed were familiar with the mock markup process—the informal committee process to "mark up," or amend, the draft implementing bills for FTAs. Most trade staff said that it was an important part of the consultation process for TPA. Committee mock markups are generally the only opportunity Congress has to offer amendments to the proposed FTA implementing bill. However, while some staff were concerned that the mock markup process had not been used effectively, others were concerned that it could be misused to delay consideration of FTAs or to introduce inappropriate last-minute provisions that should have been addressed during the negotiations. Some also expressed concern that the trade committees had not scheduled mock conferences when the House and Senate had adopted differing mock amendments; they said that a mock conference was an important part of the consultation process. Some of these staff cited the case of CAFTA-DR, when the House and Senate versions of the draft implementing bills differed because the Senate Finance Committee and the House Ways and Means Committee had recommended different mock amendments.
They said the two committees did not hold a mock conference, and the administration chose the version it preferred, the House version, ignoring the Senate Finance Committee amendments. Other staffers said that complex multilateral negotiations like those of the WTO would need a mock conference, but that FTAs were simpler and a mock conference was often unnecessary and time consuming.

Some committee staff felt that an inherent problem with the consultation process was that Congress tended to focus on FTAs at the end of the negotiations, when the deal was essentially done and it was difficult (if not impossible) to change the terms of the agreement. They said that this resulted from the congressional culture of waiting until an issue was fully developed and likely to become law before focusing on it. In contrast, they said that trade negotiations particularly require congressional attention throughout the process. For consultations to be meaningful and most effective, they felt it would be important to find ways through the consultation process to get Congress to focus on FTAs earlier. This was particularly critical given the nature of fast track provisions, under which the final agreement comes to Congress for an up-or-down vote with no amendments. USTR officials, including some former lead negotiators we interviewed, also said that earlier attention by Congress was important. Some of them expressed frustration that they would hold frequent consultation meetings, but many committee staff would not attend or would not actively engage; then, at the end of the process, when the negotiations were finalized, those staff would start to focus, ask questions, and want changes. This was very ineffective—sometimes USTR was able to get changes, but often it was no longer possible to modify something that could have been changed earlier in the negotiations.

Another issue raised by several congressional staff was the need for greater access to technical information on an ongoing basis. These staff said that although committee staff on the trade and agriculture committees are knowledgeable about their fields, trade negotiations today are too broad and complex for any one staff member to fully understand all of the implications. One trade committee staff person told us that staff on the other committees of jurisdiction are at a disadvantage because trade is not their primary issue and they do not have time to follow it. Having access to expert staff, such as through a congressional trade office, would be very helpful, according to one committee staff person. Another opposed what they feared might be the creation of an additional bureaucracy in a new trade office and instead said that GAO could serve this role. In principle, the formal private sector trade advisors could help fill this void. However, committee staff said that they did not have contact with the advisors during the FTA negotiations. One staff person said that committee staff used to be invited to trade advisory committee meetings, but no longer are. Although the trade advisory committees provide extensive technical information to Congress in their required reports on each FTA at the end of the process, committee staff did not have access to their substantial knowledge base during the negotiations.

A related issue raised by a few staff on some of the nontrade committees of jurisdiction was that trade negotiations involve a great deal of specialized terminology and information.
Staff of one committee said that they sometimes found it difficult to fully understand the briefings because the negotiators used so much jargon. They said that it would be helpful if USTR developed a primer describing the typical evolution of the trade negotiations process and providing a glossary of trade terms. Other ideas included USTR providing an overview of upcoming issues at the start of the year, giving more of an overview of FTAs early on, and describing FTAs in some detail at their conclusion.

An issue raised by many of the trade and agriculture committee staff we interviewed was access to USTR's secure Web site, on which it posts the negotiating text for FTAs, as well as other information. Senate staff said that this was more a matter of internal congressional security issues than of USTR practice. Until this year, the Senate committee staff who have access to the classified negotiating texts received hard copies because the Senate was unable to resolve security concerns that would allow electronic access. When USTR sent a hard copy to the Office of Senate Security, it took the office a day to log it in and notify staff of its availability. Then staff had to make an appointment to go to a secure room in the Capitol in order to read the documents. The result was that they had a significantly smaller window of time to access the documents than if the documents had been immediately available electronically. Staffers said that recently a computer in a Senate office building had been made available for this purpose. While this was an improvement, they would prefer to have access in their own offices, or at least in their own buildings. On the House side, committee staff did not have any electronic access to USTR's secure Web site as of the end of August 2007. However, several committee staff said that access was being planned and would greatly improve timely staff access to negotiating information. USTR officials said that they would welcome expansion of congressional access to USTR's secure Web site. They also said that an important related issue is whether congressional staff working on FTAs under negotiation have security clearances. USTR officials felt strongly that if more congressional staff obtained security clearances, it would greatly facilitate the consultation process, in terms of both access to information and timeliness of information.

The trade advisory committee chairs we contacted said that USTR and the managing executive branch agencies consulted with their committees on a fairly regular basis, providing access to administration officials, but that process issues made it difficult for some committees to function effectively. In addition to consultations with Congress, the administration is required to consult with private sector advisory committees and with the public at large to get a sense of their views. We spoke with 16 chairs of the relevant 27 trade advisory committees, as well as 5 additional committee members. They reported that consultations were generally extensive in number. The chairs and members, however, had mixed reactions as to whether the nature of the consultations, the quality of information provided, and the feedback received were satisfactory. Furthermore, process issues such as reporting time frames, committee composition, and chartering and appointment delays sometimes impeded advisory committees' ability to provide advice on trade negotiations. Four agencies, led by USTR, administer the three-tiered trade advisory committee system. (See fig. 8.)
USTR directly administers the first tier overall policy committee, the President's Advisory Committee for Trade Policy and Negotiations (ACTPN), and three of the second tier general policy committees: the Trade Advisory Committee on Africa (TACA), the Intergovernmental Policy Advisory Committee (IGPAC), and the Trade and Environment Policy Advisory Committee (TEPAC), for which the Environmental Protection Agency also plays a supporting role. The Department of Labor coadministers the second tier Labor Advisory Committee (LAC), and the Department of Agriculture coadministers the second tier Agricultural Policy Advisory Committee (APAC). The Department of Agriculture also coadministers the third tier Agricultural Technical Advisory Committees (ATACs), while the Department of Commerce coadministers the third tier Industry Trade Advisory Committees (ITACs). Ultimately, member appointments to the committees must be cleared by both the Secretary of the managing agency and the U.S. Trade Representative, as they are the appointing officials.

USTR and the relevant executive branch agencies consulted with the first and third tier advisory committees on a fairly regular basis. The first and third tier chairs we contacted generally felt that these consultations provided the committees with important access to the administration and to ongoing negotiations. From fiscal year 2002 through May of fiscal year 2007, USTR met with the 16 ITACs a total of 729 times. From fiscal year 2002 through fiscal year 2006, USTR met with the six ATACs a total of 92 times. Most of these meetings were in person, although conference calls were sometimes held for fast-moving issues or during the 30-day time frame for report writing. In addition, USTR established a monthly conference call for all trade advisory committee chairs, beginning in late 2002.

The number of consultations with USTR was more limited at the second tier policy committee level. Although USTR has met fairly regularly with APAC and TEPAC over the past 5 years, the LAC had no meetings for over 2 years, from September 2003 to November 2005. Furthermore, IGPAC did not have an in-person consultation with USTR from July 2005 to September 2007. In late 2006, USTR instituted a monthly conference call for IGPAC, together with state points of contact. Agency officials said this was done to broaden outreach to the states and increase the frequency of interaction with USTR without travel costs. The officials added that they have also convened additional IGPAC conference call meetings as needed on particular issues. LAC and TEPAC (as well as ACTPN) have liaison groups that meet more often. For example, the TEPAC liaison group tries to meet every 4 to 6 weeks. According to members from these committees, liaison meetings are at the staff level and are usually fairly technical, whereas the principals' meetings tend to address broader, political issues.

Slightly over half of the committee chairs we interviewed felt that their expectations of the consultation process were met, but overall views on the opportunity to provide meaningful input varied. For example, one third tier chair said that his expectations were met because the process works well to facilitate access between negotiators and private sector representatives, and the administration seems to take consultations seriously. Chairs of the second tier committees in particular, however, stated that their advice and opinions were not considered. A few of the third tier chairs concurred.
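The tiered structure and administering agencies just described can be captured in a small lookup table. A minimal sketch, with committee counts and agency roles as stated above:

# The three-tiered advisory committee system, per the description above.
committees = [
    ("ACTPN",      1, "USTR"),
    ("TACA",       2, "USTR"),
    ("IGPAC",      2, "USTR"),
    ("TEPAC",      2, "USTR (EPA supporting)"),
    ("LAC",        2, "USTR and Department of Labor"),
    ("APAC",       2, "USTR and Department of Agriculture"),
    ("ATACs (6)",  3, "USTR and Department of Agriculture"),
    ("ITACs (16)", 3, "USTR and Department of Commerce"),
]

for name, tier, admin in committees:
    print(f"Tier {tier}: {name} - administered by {admin}")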
Those who said their expectations were not met told us their committees were not being used properly. According to a few of these chairs, while the administration holds consultation meetings with them, the meetings are more to "check off the box" than to engage in meaningful dialogue. These chairs felt that the administration told them what had already been decided instead of soliciting their advice. Furthermore, two ITAC chairs told us that it is more effective to use venues other than the advisory committee system to provide meaningful input. For example, one chair said that a coalition of industry-related companies outside of the ITAC is the major venue for that industry's consultations with the administration. The chair told us that the ITAC advisory process tends to come at the end of negotiations and is not as significant as it should be. At the same time, the chair felt the ITAC did play a role in the consultation process: although it could not consult at the highly technical level that the coalition could, it was able to consult on the broad direction of U.S. trade policy for that industry.

USTR officials told us that the fact that the advice of any particular advisory committee may not be reflected in a trade agreement does not mean that the advice was not carefully considered. USTR emphasized that it does consider advice from its advisory committees in formulating U.S. trade policy. At the same time, however, USTR also acknowledged that for some contentious issues, the advice is not in line with long-standing U.S. policy or congressional guidance set out in TPA. In those instances, USTR told us, it is very limited in what it can do in response to advisory committee advice. This appears to be particularly problematic for second tier policy advisory committees. For example, the strength and reach of FTA investment provisions and dispute settlement mechanisms have long been a concern of both IGPAC and TEPAC. LAC, meanwhile, has criticized the worker rights standards and dispute settlement mechanisms in FTAs as insufficient.

Overall, the first tier and most of the third tier committee chairs we interviewed felt that the information USTR provided was of high quality and detail, providing a mixture of publicly available information and more proprietary, confidential information. Most of the second tier policy committee chairs and a few third tier technical committee chairs in our selection, however, were not satisfied with the quality of information presented during consultations and felt that it was no better than information available to the general public. Of those chairs, one felt USTR was constantly holding back information and said the committee learned something new only every seventh or eighth meeting. Another chair expressed frustration at trying to get information while negotiations were in progress, saying that USTR was reluctant to state what the other country was proposing. Two of the dissatisfied chairs went on to say that although most of the information presented is publicly available, having access to administration officials was valuable. Several other committee chairs also emphasized the value they place on having access to the administration through the advisory committee process.

Approximately half of the advisory committee chairs with whom we spoke felt that the administration was responsive to their advice and provided feedback, whether or not their advice was incorporated into the agreement.
The first tier chair and over half of the third tier committee chairs felt there was adequate opportunity for dialogue and that their interests were considered. Most of the second tier and a few of the third tier committee chairs, however, expressed dissatisfaction with the feedback from USTR. They perceived that USTR is either biased against their committee or, by asking them to comment on completed deals, shows that their opinions are not truly valued or taken into consideration. Two chairs said USTR wants them to "rubber-stamp" decisions or to be "cheerleaders" for the administration. Other chairs said their committees rarely or never get feedback.

In general, the advisory committee chairs we spoke with were pleased with the numerous changes that have been made to the advisory committee system in response to GAO's 2002 report. In particular, members found the secure Web site very useful. A quarter of the chairs said that having text posted on the Web site sooner, or when USTR says it will be posted, would be helpful, but they agreed that the secure Web site was a valuable tool. Three-quarters of the chairs we interviewed had no complaints about the reconfiguration of the committee system to more closely align with the current U.S. economy, although chairs and members from slightly over a third of the committees we interviewed found problems with the representation of interests on their individual committees. Ten of the 16 chairs with whom we spoke did not find the monthly chairs' teleconference call useful, primarily because of a lack of detailed information; those chairs located in Washington, D.C., cited a lack of new information. Furthermore, 8 of the 11 chairs we interviewed whose committees are invited to the newly instituted periodic plenary meetings (ATACs and ITACs) did not find them useful. A couple of those chairs did acknowledge, however, that their out-of-town members might find the meetings more useful and that the meetings are a good opportunity to hear cabinet-level speakers to whom they would not routinely have access. Beyond the plenary meetings, several chairs, particularly among the ITACs, said that more interaction with other advisory committees would be useful. Currently, only three ITACs (Customs Matters and Trade Facilitation; Intellectual Property Rights; and Standards and Technical Trade Barriers) allow members from other ITACs to sit in on meetings in a nonvoting capacity. There is also an Investment Working Group that draws from across the ITAC committees, which a couple of chairs said was a helpful device.

Stakeholders outside of the trade advisory committee system were also provided an opportunity to express their views on the record through the public hearing process; however, they have found other methods to be more effective. The administration holds public hearings and gives the public an opportunity to submit written comments for each FTA. Anyone is free to come to these meetings and express their opinions. We spoke with three of the former Assistant U.S. Trade Representatives who were in charge of negotiating FTAs over the past 5 years under TPA, and each said that the public hearing process was useful and gave USTR a good overall sense of what issues were important to the general public. They noted that they sometimes gained information from viewpoints not represented in the formal system and that comments were distributed to responsible officials and taken into account.
While we did not speak extensively with stakeholders who used these formal and informal avenues for input, we spoke with a few trade experts in the nongovernmental organization and academic communities who had used them or were familiar with them. The experts from the academic community acknowledged that although they were aware of the public hearing process, they did not participate in it. Those in the nongovernmental organization community had either personally participated or their organizations had, but they did not feel that their opinions were heard. Furthermore, they felt left out of the process and believed that industry groups had much better access. As a result, these groups said they have to go directly to Congress to express their opinions, through hearings or personal contact.

Despite the frequency and quality of USTR consultations with the advisory committees, process issues such as short reporting time frames, lack of transparency in committee composition, and delays in rechartering committees and appointing members sometimes impeded committees' ability to provide trade advice. The Trade Act of 1974 requires trade advisory committees to provide to the President, the Congress, and USTR a report detailing their advisory opinion as to (1) whether and to what extent the agreement promotes the economic interests of the United States and achieves the applicable overall and principal negotiating objectives (for first and second tier committees) and (2) whether the agreement provides for equity and reciprocity within the sector or functional area. TPA legislation gives the advisory committees 30 days after the President notifies Congress of the intent to sign a trade agreement to submit these reports. Approximately half of the committee chairs we interviewed said that this deadline can be difficult to meet for both technical and logistical reasons and that the committees cannot always give advice based on a thorough review. The reasons they gave include the following:

FTAs are technical, complex documents that include thousands of lines of tariffs.

Advisory committee members are volunteers with full-time jobs and other commitments; coordinating the FTA review and report within 30 days can be a challenge.

The text is sometimes not available until several days into the 30-day period.

Negotiations are not always finalized for all sectors at the same time, and the posting of various chapters is staggered.

Although committee members see versions of the text as the FTA develops, the final agreed-upon text can significantly change the implications for their particular interest.

The FTA with South Korea is the most obvious and recent example of the challenge of meeting the deadline. Chairs told us the text was not available to their committees until between 7 and 14 of the 30 days had passed. Furthermore, although some issues, such as rice, had been agreed upon in principle between the United States and South Korea at the conclusion of the agreement and the advisors had been briefed on the results, the final text had not yet been written. According to administration officials, the FTA with South Korea was an exception, since USTR was rushing to finish negotiations before TPA expired. Committee chairs told us, however, that meeting the 30-day deadline has been difficult for other FTAs as well.
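The squeeze on the 30-day window is straightforward arithmetic. A minimal sketch, using the South Korea delays cited above and assuming a couple of days at the end for USTR to assemble and transmit the reports to Congress (a logistical step USTR officials describe below):

# Effective review days left for advisory committees out of the 30-day window.
window = 30         # days from notice of intent to sign (TPA requirement)
ustr_handling = 2   # assumed days USTR needs to collect, copy, and deliver reports

for text_delay in (7, 14):   # days into the window before text was available
    effective = window - text_delay - ustr_handling
    print(f"Text available after {text_delay} days: "
          f"about {effective} days left for a volunteer committee's review")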
One of the second tier policy committee chairs, for example, noted that the committee did not have access to the agricultural sections of the final text of the Colombia FTA in time to complete the review prior to issuing a committee report. The committee therefore had to submit a pro forma letter, noting that it would provide a more detailed addendum to its report after the full text became available. A third tier committee chair told us that his committee regularly reserves the right to amend its report. USTR officials acknowledged that the time frame for report writing has been problematic for years. Furthermore, they pointed out that because USTR is actually tasked with sending all of the committee reports to Congress within 30 days, it needs at least a couple of days to collect reports from the various committees, make copies, and then send them by courier. It is also difficult for the ITC to provide in the specified time frame its statutorily required report assessing the likely impact of the agreement on the U.S. economy and specific industry sectors because of delays in receiving the final agreement text. The President is required to provide the ITC with the details of the agreement, as it exists at that time, 90 days before the date on which the President enters into the agreement. The ITC has a total of 180 days from that date to hold any hearings, do its analysis, and submit its report. TPA also requires the President to update the ITC on the details of the agreement during this period. According to ITC officials, the deadline is often difficult to meet because of last-minute changes and late delivery of the final text. These officials told us that the ITC sometimes does not get the full text of the agreement and all of the annexes until it is already more than halfway through the 180-day period. The ITC officials agreed with advisory committee chairs who suggested that a longer report writing window would be useful. One committee chair specifically suggested extending the window by 15 days. Commerce and USTR officials agreed that they would like to see at least 15 more days allowed for report writing. The represented interests on trade advisory committees are not always transparent. Congress requires, through the Trade Act of 1974, that the President seek information and advice from representative elements of the private sector and the nonfederal government sector through trade advisory committees that include representatives of certain interests. For example, the first tier ACTPN is to include representatives of nonfederal governments, labor, agriculture, small business, environmental and conservation organizations, and consumer interests, among others. The third tier committees are to be representative, insofar as is practicable, of all industry, labor, agricultural, or service interests in the sector or functional areas concerned. After we reported in 2002 that the committee system’s structure needed to be revisited, USTR and managing agencies worked with Congress in reconfiguring some of the committees. For example, the LAC membership now includes primarily union presidents to ensure that the administration receives advice from the highest levels. Furthermore, the 21 industry functional and sector committees were realigned and streamlined into 16 industry committees to more accurately reflect the current U.S. economy and trade policy needs. USTR and the other managing agencies, however, still have had difficulty incorporating nonbusiness stakeholders into the committees.
For example, USTR said it has had difficulties finding labor representatives willing to serve on ACTPN, the overall policy first tier committee that is required to be broadly representative of key sectors and groups affected by trade. Just under half of the committee members with whom we spoke expressed frustration with the current composition of their committees. Members who were dissatisfied with representation told us either that they felt that certain relevant viewpoints were not adequately represented or that the composition favored representation of one industry or group at the expense of another. Furthermore, some members are the sole representative of a nonbusiness interest on their committee. The nonbusiness members we spoke with told us that although their interest is now represented, they still feel isolated within their own committee. The result is the perception that their minority perspective is not influential. Available public information makes it difficult to determine what perspective or interest a committee member represents. For example, USTR officials pointed us to the charters of the committees for which USTR is the principal administrator as the guidelines for determining which representatives to select. The charter for TEPAC, however, simply says that members shall be from environmental interest groups, industry, agriculture, services, nonfederal governments, and consumer interests, and that they shall be broadly representative of key sectors and groups with an interest in trade and environmental policy issues. The Department of Labor’s charter for LAC says only that members will be selected from the U.S. labor community. In addition to charters, the Departments of Agriculture and Commerce also publish Federal Register notices soliciting new members. These notices stipulate that members must have expertise and knowledge of trade issues relevant to the committees and that geographic, demographic, and sector balance will be sought. Neither the charters nor the Federal Register notices, however, explain how the agencies actually determined which representatives they placed on committees, although these are the documents to which agencies continually referred us for this information. Because no such explanation is reported, it is not transparent how agencies followed their own guidelines for member selection or met statutory representation requirements. It is also not always transparent from the final roster which interest a particular member represents. FACA required the President to report annually on the status of advisory committees, although this requirement was terminated in 2000. The General Services Administration now collects this information from the relevant executive branch agencies and posts it on the FACA database (a publicly available database on committees operating under FACA). While the Department of Commerce reports on the specific interest each committee member represents, USTR and the Departments of Agriculture and Labor do not. Instead, they list the member’s occupation or affiliation. However, it is not always possible to deduce a member’s represented interest from that information; for example, several committee members are from law firms or large companies that deal with a variety of issues, and listing the name of the firm or company alone does not necessarily indicate representation of a particular interest. As a result, it is difficult to determine whether USTR is receiving the information and advice Congress intended it to obtain from these committees.
Weaknesses in the processes of rechartering and repopulating committees have caused significant lapses in committees’ functions. Originally, FACA called for the termination of advisory committees every 2 years unless they were renewed or their duration was otherwise provided for by law. Legislation passed in 2004 in response to our 2002 report leaves it to the discretion of the President whether or not to extend the charters of the trade advisory committees established under the Trade Act of 1974 to 4 years. All of these committees, with the exception of LAC, now have 4-year charters; Department of Labor officials told us that LAC’s 2-year charter is the result of miscommunication surrounding the 2004 legislation. Charters of several committees have been allowed to lapse recently, however, resulting in committees not being able to meet for extended periods of time (up to 7 months in the case of LAC). Furthermore, the process of selecting and appointing committee members requires a number of time-consuming steps. The Department of Commerce, for example, starts the process of appointing new members approximately 9 months prior to the ITACs’ charter expiration dates to try to ensure that the work of the ITACs does not stop, and it has been successful in avoiding lapses as a result. However, other agencies do not always start this process in time for committees to begin meeting once the charter is renewed. When both the rechartering and the member appointment processes are delayed, a committee’s ability to give timely, official advice is reduced even further before the committee is terminated and the rechartering process has to begin again. This is particularly true in the case of LAC, which still has a 2-year charter. These periods of committees not being able to meet have occurred during important stages of the U.S. trade agenda for both bilateral agreements and the WTO. Most recently, the charters of APAC and all six of the ATACs expired on April 29, 2007. The Department of Agriculture began the process of soliciting new members on March 20. Although the committees were rechartered in late May, as of late September 2007 they had still been unable to meet because they had not yet been repopulated. A Department of Agriculture official told us that this is because key people responsible for the vetting process in the undersecretary’s office have been unavailable due to travel schedules. In the interim, however, the United States signed FTAs with Panama and South Korea on June 28 and June 30, respectively. Although these committees were able to get their reports on the two FTAs to USTR just before their charters expired, they have not been able to give any official advice in the interim period, when agricultural issues—particularly rice in the FTA with South Korea—were still being negotiated. In another example, the LAC did not meet from September 2003 until November 2005. Department of Labor officials indicated this was due in part to the difficulty in getting members vetted and appointed. During this more than 2-year period, the United States was not only negotiating in the Doha Round of the WTO, but was also negotiating FTAs with numerous countries. The administration, however, is not required to report such lapses and the reasons behind them. The FACA database does collect data on the length of the current charter and the number of meetings held each year. This information, however, is only reported on an annual basis, and we found several discrepancies in the data posted, including incorrect charter and meeting dates.
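A simple date check against meeting records of the kind posted to the FACA database can surface such lapses. The following Python sketch is illustrative only: the meeting dates are hypothetical, patterned on the LAC example, and the 90-day threshold mirrors the 3-month notification trigger recommended later in this report.

```python
from datetime import date
from typing import List, Tuple

def meeting_gaps(meetings: List[date],
                 threshold_days: int = 90) -> List[Tuple[date, date, int]]:
    """Flag intervals between consecutive meetings longer than the threshold.

    A long gap may indicate a lapsed charter or delayed member
    appointments; 90 days corresponds to the 3-month notification
    trigger this report recommends.
    """
    gaps = []
    for prev, nxt in zip(meetings, meetings[1:]):
        delta = (nxt - prev).days
        if delta > threshold_days:
            gaps.append((prev, nxt, delta))
    return gaps

# Hypothetical meeting record patterned on the LAC example: no meetings
# between September 2003 and November 2005.
lac_meetings = [date(2003, 9, 15), date(2005, 11, 20), date(2006, 1, 15)]
for prev, nxt, days in meeting_gaps(sorted(lac_meetings)):
    print(f"gap of {days} days between {prev} and {nxt}")
```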
TPA expired on July 1, 2007, and the issue of its renewal awaits congressional consideration. This report reviews what FTAs the administration pursued under TPA. The systematic review this report provides forms part of the historical record of what was achieved with this important grant of authority. This report also examines how well the congressional and private sector consultations worked in practice. Although these are considered an essential check to ensure substantively sound and well-supported agreements, our report finds room for improvement. Under this TPA authority, we found USTR has pursued bilateral and subregional FTAs in order to advance both foreign policy and economic policy goals and as building blocks to larger regional initiatives and global trade expansion. While many in Congress and U.S. industry have supported these FTA negotiations, some have been concerned about the limited economic and commercial benefits gained. Moreover, the U.S. standard of negotiating only comprehensive FTAs has had implications for the universe of suitable trading partners. Certain larger trading partners like the EU and Japan have been unwilling to open up sensitive sectors such as agriculture bilaterally. Negotiations with some larger developing country partners such as Brazil were ultimately abandoned, in part because they were unwilling to accept the comprehensive template proposed by the United States on such topics as agriculture, as well as intellectual property rights and services. The results in terms of trade coverage illustrate the limitations of pursuing comprehensive FTAs: those in force or concluded under TPA accounted for just 8 percent of total U.S. trade. Yet, after the EU, Japan, and China, the trade partners that remain to be covered by FTAs each account for relatively small shares of U.S. trade. TPA required that the administration consult with Congress as USTR negotiated trade agreements. We found that USTR provided extensive consultations on FTAs, numbering well over a thousand, over the past 5 years—a significant expenditure of effort, resources, and time for an office of about 200 staff. However, while some current and former congressional committee staff we spoke with were satisfied with the consultations, others still came away feeling that they had not been truly consulted, particularly staff outside of the trade committees. Current and former USTR negotiators we interviewed believed that congressional input was constantly being factored into their discussions, but said lack of early focus by Congress on agreement details often complicated USTR’s ability to incorporate congressional input. Clearly, clarification of expectations on both sides is essential to any renewal of TPA. Certain procedural issues also hampered consultations. For example, most committee staff, particularly outside the trade and agriculture committees, often did not feel that they had the time they needed to review the information USTR shared with them on the status of the negotiations and, in turn, provide meaningful input. Although USTR reports that it has already taken the step of providing committee staff who have security clearances with the negotiating text 5 days in advance, several committee staff told us they frequently have less time. Staff also need to obtain security clearances if they want to be able to access the classified negotiating text; however, some key staff still lack clearances.
In addition, some staff with clearances are only able to access text through a cumbersome paper process, while others enjoy electronic access through USTR’s secure Web site. Discussing changes from previously proposed text also appears essential to ensuring trust and effective communication. However, both committee staff and former USTR negotiators commented that process changes alone cannot resolve the issues at the heart of the consultation controversy. They said that the political will to engage in meaningful consultations is key and that consultations only work as well as the political relations and good faith of the players. Just over half of the private sector advisory committee chairs we spoke with said they were adequately consulted and told us that having direct access to administration officials is valuable. Nevertheless, our work suggests that tight reporting time frames and delays in finalizing text often compromise committees’ ability to provide an advisory opinion within 30 days as to whether agreements promote U.S. economic interests, achieve negotiating goals, and provide for equity and reciprocity, as TPA required. The ITC faces similar challenges in securing text or agreement details, which can impede its ability to prepare required reports within statutory time frames. Finally, delays in both committee rechartering and member appointments have led to prolonged lapses in some committees’ ability to convene and provide advice. Current reporting by the administration on trade advisory committee status does not provide sufficient transparency, so Congress may be unaware of some committees’ inability to meet and of how statutory representation requirements are achieved. As a result, for trade advisory committees to effectively perform the unique role in U.S. trade policy that Congress has given them, certain process issues need to be resolved. To assist the U.S. Trade Representative and the other agencies in improving the operations and input of the trade advisory committees, Congress should consider extending the reporting deadlines for the trade advisory committees and the ITC by 15 days, giving them 45 days and 195 days, respectively. To facilitate better consultations with Congress, we recommend that the U.S. Trade Representative (1) take steps to reach agreement with the committees of jurisdiction on the amount of time they need to receive information in advance of consultation meetings in order to afford them better opportunity for meaningful input, and (2) work together with Congress on ways to improve access to information prior to consultation meetings, such as through security clearances, so that congressional staff can better assess the status of negotiations and provide advice to USTR. To provide transparency and accountability in the composition of the trade advisory committees, we recommend that the Secretaries of Agriculture, Commerce, and Labor work with the U.S. Trade Representative to annually report publicly on how they meet the representation requirements of FACA and the Trade Act of 1974, including clarifying which interest members represent in a manner similar to the Department of Commerce and explaining how they determined which representatives they placed on committees. To assure Congress that it is receiving the private sector advisory opinions that it intended in the Trade Act of 1974, we recommend that the Secretaries of Agriculture and Labor work with the U.S.
Trade Representative to take the following two actions: (1) start the advisory committee rechartering and member appointment processes with sufficient time to avoid any lapse in the ability to hold committee meetings, and (2) notify Congress if a committee is unable to meet for more than 3 months due to an expired charter or a delay in the member appointment process. To promote greater efficiency in trade advisory committee function, we recommend that the Secretary of Labor work with the U.S. Trade Representative to extend the Labor Advisory Committee charter from 2 years to 4 years, to be in alignment with the rest of the trade advisory committee system. We provided a draft of this report to USTR; the Departments of Agriculture, Commerce, Labor, State, and the Treasury; the Environmental Protection Agency; and ITC. The Department of Commerce provided written comments, which are reproduced in appendix V. It said that the report was generally an accurate summation of the status and impacts of FTAs and provided a good overview of some of the complexities associated with negotiating an FTA. USTR; the Departments of Agriculture, Commerce, and Labor; the Environmental Protection Agency; and ITC provided us with technical comments, which we have incorporated where appropriate. The Departments of State and the Treasury had no comments. USTR staff also commented to GAO on the proposed recommendations regarding statutory representation requirements in advisory committee composition and consultation with Congress. GAO incorporated these comments as appropriate in the final report. USTR indicated that it would report on the actions taken in response to the recommendations in a letter as required under U.S. law. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the U.S. Trade Representative; the Departments of Agriculture, Commerce, Labor, State, and the Treasury; the Environmental Protection Agency; and ITC. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or yagerl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To determine how Trade Promotion Authority (TPA) has been used in the negotiation of free trade agreements (FTA), we reviewed the following questions: (1) What FTAs have been pursued under TPA and why? (2) Overall, what is the economic significance of these agreements to the United States? (3) What is the nature of the consultation process for Congress, and how well has it worked in practice? (4) What is the nature of the consultation process for trade advisory committees and other stakeholders, and how well has it worked in practice? To answer these questions, we generally reviewed documents and interviewed officials responsible for international trade policy and negotiations at the Office of the U.S. Trade Representative (USTR); the Departments of Agriculture, Commerce, Labor, State, and the Treasury; and the Environmental Protection Agency, as well as officials of the U.S. International Trade Commission (ITC).
To determine what FTAs have been pursued under TPA and why, we reviewed USTR documents and interagency memoranda discussing FTA partner selection and updated the findings from our prior work on partner selection. We also interviewed relevant executive branch agency officials, both current and former, in order to gain the perspectives of those officials involved with the earlier FTAs negotiated under TPA. In addition, we interviewed congressional staff from the House and Senate trade and agriculture committees, as well as other committees of jurisdiction, and over half of the trade advisory committee chairs, in order to learn what input they had into the partner selection process. To determine the overall economic significance of these FTAs, we analyzed official U.S. trade and investment data, as well as selected studies and analyses from USTR, ITC, and trade experts. U.S. goods trade statistics are from the Bureau of the Census and are through 2006. U.S. services trade and investment statistics are from the Bureau of Economic Analysis and are through 2005, the most recent year available. For the purpose of analyzing the overall U.S. trade and investment relationship with TPA and non-TPA trade partners, we determined that these data are sufficiently reliable. Where we combined the two data sets to show the share of total trade (imports plus exports of goods plus services), the modest changes that occur from year to year would have only a minimal effect on the shares reported and no effect on the overall findings. We also grouped detailed U.S. goods trade statistics into two broad categories, agriculture and manufacturing, based on the Harmonized Tariff Schedule product chapters: chapters 1 through 24 are agriculture, and the remaining nonagricultural chapters are manufacturing. ITC maintains the official U.S. tariff schedule, and a complete list of the product chapters of the Harmonized Tariff Schedule can be found at www.usitc.gov. Finally, in order to analyze the growth of U.S. goods trade flows over time, we used Bureau of Labor Statistics import and export price deflators at the most disaggregated level available to adjust U.S. trade statistics for inflation from 1992 to 2006. We did not adjust U.S. services statistics, since reliable price deflators are not available for the time period we examined. To determine the nature of the congressional consultation process and how well it has worked in practice, we reviewed fast track provisions from the Trade Act of 1974 up through TPA to trace the evolution of the consultation provisions. We also analyzed USTR’s congressional consultation logs in order to determine which committees USTR had provided with consultation meetings, how often, and on which FTAs. We interviewed USTR officials about how the logs were compiled and generally found the logs sufficiently reliable for the purposes of this report. In addition, we interviewed current and former USTR officials who had been involved in providing the FTA consultations, as well as current and former staff of congressional committees that had participated in these consultation meetings, in order to obtain their descriptions of the consultation process and their views on what had worked well and what could be improved. In our congressional interviews, we interviewed both House and Senate committees, including both majority and minority staffs, of all the trade, agriculture, and other committees of jurisdiction that had been involved in these consultations.
The committees of jurisdiction comprised the following: Senate Finance and House Ways and Means; Senate Agriculture, Nutrition, and Forestry and House Agriculture; Senate Commerce, Science and Transportation and House Natural Resources (fisheries subcommittees); Senate and House Judiciary (intellectual property rights subcommittees); Senate Banking, Housing, and Urban Affairs and House Financial Services; Senate Commerce, Science and Transportation (telecommunications staff) and House Energy and Commerce (telecommunications subcommittee); and Senate Homeland Security and Governmental Affairs and House Oversight and Government Reform. Of the 28 committee staffs (from 7 Senate committees and 7 House committees, each with majority and minority staffs) that we contacted, staff of 18 (64 percent) agreed to be interviewed. The views of the committee staff we interviewed are not necessarily representative of all relevant Senate and House committees. We interviewed all 4 trade committee staffs, as well as 4 former trade committee staff members, in order to ensure coverage back to the beginning of TPA in 2002, given staff turnover on some committees. We also interviewed former committee staff of the other committees of jurisdiction when there had been turnover on the staff and the current staff were not sufficiently familiar with the process to comment and referred us to the appropriate former staff. To determine the nature of the consultation process for the trade advisory committees and how well it worked in practice, we reviewed relevant provisions in the Trade Act of 1974, the Federal Advisory Committee Act (FACA), and TPA governing the establishment and function of the committees as well as their reporting requirements and time frames. We obtained and analyzed committee meeting records and charter and roster information from both designated agency officials and through the FACA database maintained by the General Services Administration. We interviewed a nongeneralizable sample of the 27 trade advisory committee chairs. We interviewed the first tier chair and all of the relevant second tier chairs. For the third tier, we interviewed a judgmental sample of half of the chairs—half of the agricultural technical advisory committee chairs and half of the industry trade advisory committee chairs—representing a cross section of both agriculture and industry, as well as select committee members referred to us by the chairs for their alternative views. Altogether, we interviewed 16 of the 27 chairs and 5 additional members. The views of the trade advisory committee chairs with whom we spoke are not necessarily representative of all committee chairs. We also selected four other stakeholders to interview, based on literature and background research, recommendations from trade experts, and participation in public hearings held for each FTA. These stakeholders were trade experts in the nongovernmental organization and academic communities. In addition, we interviewed the executive branch agency officials responsible for overseeing the committees at USTR and the Departments of Agriculture, Commerce, and Labor. We also interviewed agency officials from the ITC. Finally, we updated findings from our prior work on the trade advisory committees through interviews and document review and analysis. We conducted our work from January 2007 to August 2007 in accordance with generally accepted government auditing standards. This appendix briefly reviews the evolution of congressional consultation requirements under TPA.
In general, consultation requirements have expanded under each renewal of authority. The Trade Act of 1974 was the first grant of fast track authority, which later became known as trade promotion authority. It established the basic consultation framework, including required notifications, consultations with congressional committees, the advisory committee system, and the accreditation of 10 Members of Congress to serve as official advisors to the U.S. delegation of negotiators. The Trade Agreements Act of 1979 extended fast track authority but made no significant changes. The next renewal came through the Trade and Tariff Act of 1984. This act added a new requirement that the President notify Congress of intent to begin trade negotiations at least 60 days in advance. Either the House Ways and Means Committee or the Senate Finance Committee could deny fast track consideration by disapproving of the negotiation within 60 days of the notification. This provision became known as the “gatekeeper” provision. In at least one instance, Congress reportedly used the provision as a tool to successfully influence the administration. The Omnibus Trade and Competitiveness Act of 1988 continued the previous consultation requirements and added that the Congress could withhold a trade agreement from fast track consideration by passing resolutions of disapproval if it determined that the President had failed to adequately consult with Congress. In addition, the 1988 act extended fast track procedures for only 3 years but allowed an extension of fast track procedures for an additional 2 years if the President requested the extension and Congress did not pass a resolution disapproving of the extension. The Trade Act of 2002 included all of the consultation requirements of previous acts with the exception of the gatekeeper provision. Instead of giving the two main trade committees the power to essentially veto potential trading partners before negotiations begin, the 2002 act replaced the 60-day notification of intent to begin negotiations with a 90-day notification. The Trade Act of 2002 also established the Congressional Oversight Group (COG) as an additional consultation mechanism. This appendix provides detailed information on U.S. goods trade (table 5) and services trade (table 6) with U.S. trade partners grouped by whether the United States pursued an FTA with them under TPA, already had an existing FTA with them, or did not pursue an FTA with them. It also provides information on U.S. foreign direct investment in these countries (table 7) and the countries’ average applied tariff rates (table 8). The U.S. trade and investment relationship with countries with which the United States has chosen to pursue FTAs under TPA differs from that with non-FTA countries in several ways. The United States tends to (1) maintain more balanced trade with TPA countries, (2) export relatively more manufactured goods (compared with services and agriculture), and (3) have relatively faster investment growth with TPA countries, particularly in countries with FTAs in force. The overall U.S. trade deficit has been large and growing for many years. Much of the gap between exports and imports has been driven by increased imports from Asian countries, including China and Japan. In contrast, the United States has relatively more balanced trade with the group of countries pursued under TPA.
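To make the computations behind these comparisons concrete, the following Python sketch illustrates the methodology described in appendix I: classifying goods by Harmonized Tariff Schedule chapter, deflating nominal values with a price index, and computing a compound annual growth rate. The function names and sample numbers are ours for illustration; only the chapter split (1 through 24 as agriculture) and the use of BLS deflators come from the report.

```python
def category(hts_chapter: int) -> str:
    """Split goods trade into the report's two broad categories:
    Harmonized Tariff Schedule chapters 1-24 are agriculture;
    the remaining nonagricultural chapters are manufacturing."""
    return "agriculture" if 1 <= hts_chapter <= 24 else "manufacturing"

def deflate(nominal: float, deflator: float, base_deflator: float = 100.0) -> float:
    """Adjust a nominal trade value for inflation using a BLS
    import/export price deflator indexed to a base year."""
    return nominal * base_deflator / deflator

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical illustration: a value growing from 100 to 144 over
# 3 years implies a compound annual growth rate of about 12.9 percent,
# the order of magnitude of the FDI growth rates discussed below.
print(category(10), category(85))  # -> agriculture manufacturing
print(f"{cagr(100, 144, 3):.1%}")  # -> 12.9%
```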
Figure 9 shows the goods trade balances for TPA countries, existing FTA partners (e.g., Canada and Mexico), and non-FTA countries (e.g., EU, Japan, China, India). The trade balance with TPA countries is in deficit overall, but the deficit is relatively smaller and has deteriorated less rapidly than the much larger deficit with non-FTA countries. Moreover, the trends in the U.S. trade deficit vary across the groups of countries pursued under TPA. For TPA countries with which the United States has put the FTA agreements in force (e.g., Australia, Chile, CAFTA-DR), the goods trade balance is in surplus. Figure 10 shows TPA countries by the status of the FTA negotiations: in force, concluded but not yet in force, and pursued but not yet concluded. For FTA agreements that have been put in force, the United States maintains a small but growing trade surplus. For agreements that have been concluded but not yet in force, the United States maintains a trade deficit that has declined in recent years. Finally, for countries with which the United States pursued an FTA agreement but has not yet completed negotiations, the trade deficit has been growing. Relative to services and agriculture, manufacturing products comprise a higher share of total U.S. exports to TPA countries (70 percent) compared with non-FTA countries (59 percent), as shown in figure 11. This is mirrored by a relatively smaller share of services exports to TPA countries (26 percent) compared with non-FTA countries (36 percent). In addition, while U.S. manufacturing exports to both TPA and non-FTA countries are growing at similar rates (between 10 and 11 percent annually from 2002 to 2006, based on a compound annual growth rate), U.S. services exports to TPA countries are growing more slowly than U.S. services exports to non-FTA countries (5 percent for TPA countries, versus 10 percent for non-FTA countries). (Table 6 in appendix III shows growth rates for services trade. Manufacturing includes all nonagricultural goods trade. U.S. trade in goods statistics are for 2006; U.S. trade in services statistics are for 2005, the most recent year available. See appendix I for more information on our methodology and product composition.) In terms of U.S. imports, manufacturing products comprise a much larger share of both TPA and non-FTA imports—80 and 82 percent, respectively—than manufacturing comprises in U.S. exports to these groups. Figure 12 shows the composition of U.S. imports from both groups. While U.S. services imports are relatively similar for TPA and non-FTA countries, agricultural imports from TPA countries (7 percent) are much larger as a share of total imports, compared to non-FTA countries (2 percent). In addition, agricultural imports from TPA countries have also been growing faster (9 percent annually) than imports of agricultural products from non-FTA countries (6 percent annually) from 2002 to 2006, based on a compound annual growth rate. U.S. direct investment abroad (or foreign direct investment, FDI) in TPA countries has grown more rapidly than investment in non-FTA countries, particularly in recent years since the conclusion of FTA agreements. Table 9 shows that U.S. FDI in TPA countries registered a compound annual growth rate of 9 percent between 1996 and 2005, and a 13 percent compound annual growth rate since 2002. For TPA countries in which an FTA with the United States is already in force, the compound annual growth rate was 20 percent from 2002 to 2005. In comparison, U.S.
direct investment in non-FTA countries grew at a compound annual growth rate of 7 percent over the same period. In addition to the individual named above, Kim Frankena, Assistant Director; Leyla Kazaz; Tim Wedding; Judith Williams; Gezu Bekele; Tina Hodges; and Arthur Lord made key contributions to this report. Other contributors include Grace Lui, Martin De Alteriis, and Karen Deans.
In 2002, Congress granted the President Trade Promotion Authority (TPA) to negotiate agreements, including free trade agreements (FTA). TPA stipulated negotiating objectives and procedural steps for the administration, including consulting with Congress and trade advisory committees. TPA lapsed in July 2007 amidst questions about its use. GAO was asked to review: (1) What FTAs have been pursued under TPA and why? (2) Overall, what is the economic significance of these agreements for the United States? (3) What is the nature of the consultation process for Congress, and how well has it worked in practice? (4) What is the nature of the consultation process for trade advisory committees, and how well has it worked in practice? GAO interviewed staff of the Office of the U.S. Trade Representative (USTR), the International Trade Commission (ITC), congressional committees with jurisdiction, trade advisory committees, and others, and reviewed USTR documents. In the 5-year period that TPA was granted to the President, from 2002 to 2007, the United States pursued 17 FTAs with 47 countries for a variety of foreign and economic policy reasons. Six FTAs have been approved and are in force, and negotiations for another 4 FTAs have been concluded. The United States has simultaneously pursued comprehensive, high-standard trade agreements on the bilateral and multilateral levels. Trade with countries for which FTAs were pursued under TPA comprises about 16 percent of U.S. trade and foreign direct investment. Twenty-seven percent of U.S. trade is with countries with FTAs in force prior to TPA (e.g., Canada and Mexico); 56 percent is with countries with which the United States does not have FTAs. The largest U.S. trade partners not pursued under TPA are the European Union, Japan, and China; the rest account for relatively small shares of U.S. trade. USTR held 1,605 consultations with congressional committee staff from August 2002 through April 2007, but satisfaction with the consultations was mixed. About two-thirds of these meetings were with the House and Senate trade and agriculture committees. Almost all the congressional staff GAO contacted viewed the consultations as providing good information, but slightly more than half said that the consultations did not provide opportunities for real input or influence. These staff often said that they were not given sufficient time to provide meaningful input. The trade advisory committee chairs GAO contacted said that USTR and managing agencies consulted with their committees fairly regularly, although process issues at times hindered some committees from functioning effectively. For example, about half said that the 30-day deadline for reporting on the likely impact of FTAs can be difficult to meet, and the ITC had a similar problem. In addition, adherence to statutory representation requirements is not always transparent. Several committees have not been able to meet while their charters were expired or members had not been reappointed. However, USTR and managing agencies are not required to report to Congress such lapses in a committee's ability to meet.
DOD’s health care system, known as the Military Health System, is one of the largest and most complex health care systems in the nation. Operationally, DOD’s Military Health System has two missions: supporting wartime and other deployments, known as the readiness mission, and providing peacetime care, known as the benefits mission. The readiness mission provides medical services and support to the armed forces during military operations and deployments, including deploying medical personnel and equipment throughout the world, and ensures the medical readiness of personnel prior to deployment. Within DOD’s Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Assistant Secretary of Defense for Health Affairs oversees the Military Health System and also issues guidance to DOD components on medical matters. The Departments of the Army and the Navy each have a medical command, headed by a Surgeon General, who manages each department’s respective military treatment facilities and medical personnel. The Navy’s Bureau of Medicine and Surgery supports both the Navy and the Marine Corps. The Air Force Surgeon General, through the role of medical advisor to the Air Force Chief of Staff, exercises similar authority to that of the other Surgeons General. The Under Secretary of Defense for Personnel and Readiness also has the responsibility for developing the overall policy and guidance for the department’s sexual assault prevention and response program except for criminal investigative policy matters assigned to the DOD Inspector General and legal processes in the Uniform Code of Military Justice. DOD Directive 6495.01 defines sexual assault as “intentional sexual contact characterized by use of force, threats, intimidation, or abuse of authority, or when the victim does not or cannot consent. Sexual assault includes rape, forcible sodomy (oral or anal sex), and other unwanted sexual contact that is aggravated, abusive, or wrongful (including unwanted and inappropriate sexual contact), or attempts to commit these acts.” DOD’s directive provides active duty servicemembers with two options for reporting a sexual assault: (1) restricted and (2) unrestricted. DOD’s restricted reporting option allows sexual assault victims to confidentially disclose an alleged sexual assault to select individuals, including health care personnel, and receive medical treatment without initiating an official investigation. In cases where a victim elects restricted reporting, first responders—including health care providers—may not disclose confidential communications or information on the forensic examination to law enforcement or command authorities unless certain exceptions apply, and improper disclosure of confidential communications and medical information may result in discipline pursuant to the Uniform Code of Military Justice or other adverse personnel or administrative actions. In contrast, DOD’s unrestricted reporting option allows sexual assault victims to receive medical treatment and request an official investigation of the allegation using existing reporting channels, such as their chain of command or law enforcement. DOD’s directive also identifies the various types of support, to include the coordination of medical and mental health care services, that shall be provided to victims of sexual assault. 
Specifically, the directive provides that sexual assault victims shall receive timely access to comprehensive medical treatment, including emergency care, consisting of emergency medical care and the offer of a sexual assault forensic examination consistent with Department of Justice protocols. Sexual assault victims shall also be advised that even if a forensic examination is declined, the victim is encouraged (but not required) to receive medical care, psychological care, and victim advocacy. Since 2008 we have issued a series of reports examining DOD’s implementation of its Sexual Assault Prevention and Response Program and made a total of 25 recommendations, with which DOD has generally concurred and which it has taken actions to implement to varying degrees. These reports include reviews of DOD’s sexual assault prevention and response programs for the military academies; programs for the active components of DOD, including during deployments; and processes for investigating and adjudicating allegations of sexual assault. For further information on these reports as well as our prior recommendations, see the summary we issued in March 2012. DOD has developed policies and guidance that include female-specific aspects to help address the health care needs of female servicemembers during deployment. Prior to deploying, servicewomen are screened for potentially deployment-limiting conditions. According to DOD officials and health care providers with whom we met, such pre-screening helps ensure that many female-specific health care needs are addressed prior to deployment. Further, DOD components have conducted reviews of the health care needs of servicewomen during deployments. DOD also collects health care data on the medical services provided to deployed servicewomen in Afghanistan and aboard Navy vessels. DOD components have put in place policies and guidance that include female-specific aspects to help address the health care needs of servicewomen during deployment. DOD and service officials told us that while the department’s policies are generally gender-neutral and focus on addressing the health care needs of all servicemembers, some of the policies and guidance include female-specific aspects such as pregnancy, pelvic examinations, and screening mammography. In certain instances, the services’ policies reflect clinical practice guidelines that come from outside the department, such as those from the American College of Obstetricians and Gynecologists. For example, we found that the Army changed its pre-deployment screening requirements due to a change in American College of Obstetricians and Gynecologists guidelines for cervical cytology screening. Additionally, we found that Navy guidelines require the provision of standbys—individuals who could be present during sensitive or potentially compromising physical examinations—during medical examinations when female genitalia or breasts are exposed or examined by a medical provider, in accordance with Joint Commission guidelines. According to DOD and service officials, although there may be some gender differences for particular diagnoses, behavioral health care services—that is, mental health care and substance abuse counseling—are not gender-specific. The treatment of servicemembers’ behavioral health care needs and the availability of services to treat those needs, therefore, do not vary based on gender.
DOD has established a medical tracking system for assessing the medical condition of servicemembers to help ensure that only those who are medically and mentally fit are deployed outside of the United States. According to service officials and health care providers with whom we met, pre-deployment screenings help ensure that many women’s health care needs are addressed prior to deployment. As part of DOD’s pre-deployment screening process, servicemembers of both sexes are screened for potentially deployment-limiting medical conditions that would render them unsuitable to perform their duties during deployment. Servicemembers of both sexes are also required to complete a pre-deployment health assessment questionnaire. DOD requires that servicemembers’ questionnaires be reviewed by a health care provider to determine whether the servicemember is fit to deploy. Service officials we spoke with told us that this screening also provides servicemembers an opportunity to discuss and address with a health care provider any health concerns they may have prior to deploying. The officials said they rely on the questionnaires, reviews of servicemembers’ medical records, and physical examinations to identify an individual’s health care needs prior to deployments. Some deployment-limiting conditions are female-specific: for example, each of the military services defines pregnancy as a deployment-limiting condition. Each of the services has also established a postpartum deferment period—6 months for the Army, the Air Force, and the Marine Corps, and 12 months for the Navy. During this period, servicewomen are not required to deploy or redeploy, so as to enable mothers to recover from childbirth and to bond with their children. However, each of the military services has a policy that allows servicewomen to voluntarily deploy before the period has expired. Typically, servicewomen who are confirmed to be pregnant during deployment may not remain deployed. For example, servicewomen who are confirmed to be pregnant in Afghanistan may not remain in theater and must notify their military chain of command or supervisor immediately. They are required to be redeployed within 14 days of receipt of notification. Navy guidance prohibits a pregnant servicewoman from remaining aboard a vessel if the time required to transport her to emergency obstetric and gynecological care exceeds 6 hours. Servicewomen who are confirmed to be pregnant at sea are to be sent at the earliest opportunity to the closest shore-based U.S. military facility that can provide obstetric and gynecological care. Navy medical providers we met with during our site visits stated that pregnant servicewomen are typically transferred off the vessel within days of confirmation of their pregnancy. Further, we found that female-specific deployment-limiting conditions sometimes depend on the deployed environment: for example, women with conditions such as recurrent pelvic pain or abnormal vaginal bleeding are disqualified from submarine service. DOD components have conducted reviews of the health care needs of servicewomen while they are deployed.
For example, as part of a review the Army Surgeon General’s office initiated in 2011, the Army issued a white paper entitled “The Concerns of Women Currently Serving in the Afghanistan Theater of Operations.” In addition, the TriService Nursing Research Program funds and supports scientific research in the field of military nursing in order to advance military nursing science and optimize the health of military members and their families, including research on military women’s health. The program is also funding research efforts focused on deployed women’s health issues, including the use of female urinary diversion devices and a review of the health education provided to servicewomen before they deploy. DOD is collecting health care data on the medical services provided to deployed servicewomen, as well as servicemen, in Afghanistan and aboard Navy vessels. According to service officials, data that health care personnel enter into electronic systems on servicemembers’ encounters with providers are accessible by commanders in order to allow them to track the medical status of units and individuals. According to information provided by service officials in Afghanistan, the total number of reported patient encounters in U.S. Central Command’s area of operations during fiscal year 2012 was around 460,000. Of these, servicewomen accounted for about 62,000 patient encounters. For U.S. Central Command’s area of operations, DOD’s fiscal year 2012 data show that the most frequent diagnosis for servicemembers, based on International Classification of Diseases codes, was lumbago, or lower back pain. Of the top 25 diagnoses, none were related specifically to women’s health issues. According to information provided by the Office of the Navy Surgeon General, the total number of reported patient encounters aboard Navy vessels during fiscal year 2012 was approximately 69,000, of which servicewomen accounted for about 21,000. For Navy vessels, based on International Classification of Diseases codes, the Navy’s data show that the most frequent diagnosis during fiscal year 2012 for servicemembers was lumbago. Of the top 25 diagnoses, only one—urinary tract infection—was commonly associated with women’s health. The department also uses the data to develop reports that address broader health issues. For example, the Armed Forces Health Surveillance Center has issued reports that provide, by service, data on deployment-related conditions of special interest, such as traumatic brain injury, amputations, and severe acute pneumonia, among other data. While these reports generally do not separate data by gender, the Armed Forces Health Surveillance Center has issued two reports since December 2011 focusing on women’s health issues. For example, a July 2012 report presented data on the incidence of acute pelvic inflammatory disease, ectopic pregnancies, and iron deficiency among active duty women, as well as data on selected conditions among women after initial and repeated deployments to Afghanistan and Iraq. According to these reports, from January 2003 through December 2011, based on International Classification of Diseases codes, 50,634 servicemembers—comprising 6,376 females and 44,258 males—were evacuated from Iraq and Afghanistan for medical reasons.
The most frequent causes of medical evacuations for females were mental disorders, musculoskeletal disorders, “signs, symptoms, and ill-defined conditions,” and non-battle injuries, whereas the most frequent causes of such evacuations for males were battle injuries, musculoskeletal disorders, non-battle injuries, and mental disorders. The health care services, and in turn the female-specific health care services, available to deployed servicewomen vary depending on the deployed environment. DOD provides three levels of health service support to servicemembers deployed to Afghanistan. The most basic level of care is provided at “Role 1” facilities, which include primary care facilities and outpatient clinics. “Role 2” facilities provide advanced trauma management and emergency medical treatment. The highest level of care that DOD provides in Afghanistan is at “Role 3” facilities. These facilities are equivalent to full-spectrum hospitals and are staffed and equipped to provide resuscitation, initial wound surgery, and post-operative treatment. As of November 2012, there were 143 facilities in Afghanistan providing Role 1-level care, 24 facilities providing Role 2-level care, and 5 facilities providing Role 3-level care. According to senior medical officials with U.S. Forces Afghanistan and the International Security Assistance Force Joint Command, most gynecological care is provided at Role 1 facilities, and infantry battalions and most forward operating bases and combat outposts in Afghanistan can at a minimum provide Role 1-level care. We found that servicewomen while deployed at sea have access to providers of primary care, although the health care services that are available aboard Navy vessels largely depend on the type and class of vessel. Larger vessels generally offer a wider range of services—including specialized services—than do smaller vessels, due largely to their more robust crew levels and capabilities. The medical department of an aircraft carrier, for example, typically consists of more than 40 billets, including a family practitioner, a physician’s assistant, and a clinical psychologist. Similarly, the medical department of a WASP-class amphibious assault ship consists of more than 20 billets, including a medical officer. For cruisers, destroyers, and frigates, the medical department typically consists of only a handful of billets, including an Independent Duty Hospital Corpsman, but no medical officer. For Ohio-class submarines, the sole source of medical care aboard is an Independent Duty Hospital Corpsman. Each of these classes of vessels is capable of providing health care services to servicemembers of both sexes. At the 15 selected locations we visited in Afghanistan and aboard Navy vessels, health care providers and servicewomen told us that the health care services available to deployed servicemembers generally meet the needs of servicewomen. Health care providers we spoke with in Afghanistan and aboard Navy vessels told us they were capable of providing a wide range of female-specific health care services—including treating certain gynecological conditions such as urinary tract infections and conducting clinical breast examinations—that women might seek while deployed. They also told us that servicemembers had access at least to basic mental health care services.
Some female-specific services—such as treatment for an abnormal PAP smear result or mammography services—were not always available, but providers told us that conditions resulting in the need for more specialized services were routinely addressed prior to deployment. For example, providers with an expeditionary medical group we met in Afghanistan told us that in their experience PAP smears are rarely performed in theater, except for women who had received abnormal PAP smear results prior to deploying and needed follow-up checks after 6 months. Those providers also told us that screening mammography is not provided in theater because it is generally preventive care, conducted as part of a woman’s annual exam prior to deployment. Health care providers from multiple Navy vessels we visited also told us that a number of female-specific health care services—from performing PAP smears to treating patients with abnormal PAP smear results to mammography services—were not needed during deployments at sea because such services were provided prior to deploying. According to health care officials and providers with whom we met, women who developed acutely urgent conditions during deployments, to include female-specific conditions, would typically be transferred to a locale offering access to more specialized services. Health care providers with whom we met were able to identify their available options for referring individuals with acutely urgent conditions for specialized care elsewhere if necessary—in Afghanistan, typically, to a higher level of care; during deployments at sea, to another vessel or a shore-based facility. Providers also noted that in some cases they could consult with other health care providers if necessary, including providers specializing in women’s health care. For example, at one Role 1 facility we visited in Afghanistan, health care providers noted that their Deputy Command Surgeon specialized in obstetrics and gynecology and was available to consult on cases if they needed assistance. As another example, Navy Independent Duty Hospital Corpsmen told us that they could consult with their physician supervisor if necessary during deployments at sea. At each of the locations we visited, we also found that a variety of steps were being taken to help ensure that servicewomen had a reasonable amount of privacy during examinations. For example, each of the locations we visited offered at a minimum a medical examination room with privacy curtains that could be drawn. In most instances, doors with locks were available as well. We also observed that signs could be posted indicating that an examination was in process. Further, health care providers told us that a standby—an individual who could be present during sensitive or potentially compromising physical examinations—was available at each location we visited. Figures 1 through 3 show photographs of the medical examination rooms at selected locations we visited. Based on information provided by the 92 servicewomen we interviewed at selected locations in Afghanistan and aboard Navy vessels, the responses from 60 indicated that they felt the medical and mental health needs of women were generally being met during deployments, whereas the responses from 8 indicated they did not feel the medical and mental health needs of women were generally being met during deployments.
An additional 8 servicewomen expressed a mixed opinion as to whether the medical and mental health needs of women were being met during deployments, and 16 told us they did not know or did not have an opinion. Servicewomen who indicated during our interviews that the medical and mental health needs of women were generally being met during deployments offered a variety of reasons for their responses. At one location we visited in Afghanistan, a female airman told us that if she had a health problem, the medical facility at her location could treat her or send her elsewhere if needed. She further noted that if the problem were serious enough she could be evacuated. Similarly, a female Army soldier we met at another location told us she felt that some of the best care she had received in her life had been military health care. At another location, a female Marine told us that the care provided to her was as good as she could imagine, given the operating environment. Aboard one Navy vessel we visited, a female sailor told us that even though mental health care was not available aboard her ship, it was available ashore, and the ship could handle emergencies at sea. Servicewomen we interviewed who indicated that they felt the medical and mental health needs of women were generally not being met during deployments also offered a variety of reasons for their responses. At one location we visited in Afghanistan, a female airman told us that she believed the military was trying to meet the health needs of women but still had work to do—noting, for example, that a medication she was prescribed had given her yeast infections. At another, a female Army soldier told us that she had experienced difficulty obtaining sleep medication. In the case of deployments at sea, one female sailor expressed concern that a mental health provider was not aboard. Among servicewomen who offered a mixed opinion, one female sailor told us that she felt junior health care providers were limited in the types of procedures they could perform and lacked practical experience. DOD has taken steps to address the provision of medical and mental health care for servicemembers who are sexually assaulted, but several factors affect the extent to which this care is available. Specifically, the branch of military service and the operational uncertainties of a deployed environment can affect the ready availability of medical and mental health care services for victims of sexual assault. Additionally, care is in some cases affected because military health care providers do not have a consistent understanding of their responsibilities in caring for sexual assault victims who make restricted reports of sexual assault. Further, first responders such as Sexual Assault Response Coordinators and Victim Advocates are not always aware of the specific health care services available to sexual assault victims at their respective locations. Each military service offers medical and mental health care resources to servicemembers who have been sexually assaulted, including those serving in a deployed environment. However, as we have noted in our prior work, the availability of such resources for victims can vary based on a number of factors, including branch of military service and the operational uncertainties associated with serving in a deployed environment.
For example, the availability of deployed medical providers who are trained to conduct a sexual assault forensic examination varies across the military services because each service has a different process for deploying personnel. Specifically, Army officials told us that the Army requires each brigade to deploy with a health care provider who is trained to conduct a forensic examination, whereas the Air Force deploys trained health care providers based on the medical needs at specific locations. Navy medical providers we spoke with told us that the Navy does not require its vessels to deploy with a provider trained to conduct a forensic examination and will instead transfer a victim to the nearest trained provider, whether at sea or ashore. Navy medical providers also told us that if a transfer is not possible, they would do their best to conduct the forensic examination using the instructions provided with examination kits. In addition, operational factors inherent to a deployed environment, such as transportation and communication challenges, can limit a victim’s timely access to medical treatment. To mitigate these limitations, the Army included a primary and alternate evacuation protocol in its standardized operating procedures to help ensure that servicemembers who are sexually assaulted during deployment have access to care. DOD has established policies and procedures for its sexual assault prevention and response program that address, among other things, the provision of medical and mental health care for servicemembers who are sexually assaulted. Specifically, in October 2005 DOD published a directive that contains its comprehensive policy for the prevention of and response to sexual assault. While generally applicable to all servicemembers and locations, DOD’s directive calls for the sexual assault prevention and response program to be gender-responsive, culturally competent, and recovery-oriented, and for an immediate, trained sexual assault response capability to be available in deployed locations. For example, DOD requires care for sexual assault victims to be linguistically appropriate; sensitive to gender-specific issues such as pregnancy; and supportive of a victim’s ability to be fully mission-capable and engaged. In June 2006, DOD’s Office of the Under Secretary of Defense for Personnel and Readiness issued an instruction that provides guidance for implementing this policy and specifies roles, responsibilities, and required training for program personnel, such as health care providers, who may be involved in responding to victims of sexual assault (Department of Defense Instruction 6495.02, Sexual Assault Prevention and Response Program Procedures (June 23, 2006)). For example, DOD’s instruction identifies various types of health care providers who, depending on their training, may be eligible to conduct sexual assault forensic examinations, and it directs the military services to establish a multidisciplinary case management group and to include provisions for continuity of victim care when a victim has a temporary or permanent change of station or is deployed. Additionally, DOD’s instruction identifies required categories of training for program personnel on topics that include victim advocacy and medical treatment resources, sexual assault response policies, and the sexual assault examination process.
Although DOD issued this overarching instruction that provides guidance for implementing its sexual assault prevention and response policies to personnel such as health care providers, we found that the Office of the Assistant Secretary of Defense for Health Affairs—the organization responsible for ensuring the effective execution of the department’s medical mission—has not, in turn, developed more specific guidance to address the military services’ responsibility to provide specialized medical and mental health care to victims of sexual assault. According to DOD Directive 5136.01, the Assistant Secretary of Defense for Health Affairs is required to, among other things, exercise authority, direction, and control over DOD medical policy, and to establish policies, procedures, and standards that govern the management of DOD health and medical programs. The Office of the Assistant Secretary of Defense for Health Affairs has performed these responsibilities for some medical issues in DOD, but it has not established guidance for the treatment of injuries stemming from sexual assault—a crime that requires a specialized level of care to help ensure that forensic evidence is properly collected, medical care is provided in a way that minimizes the risk of revictimization, and a victim retains the right to disclose the assault with confidentiality. Absent department-level guidance from DOD’s Office of the Assistant Secretary of Defense for Health Affairs, the services have, to varying degrees, revised their respective medical guidance to address care for victims of sexual assault. For example, at one location we visited, we reviewed a command’s medical policy and found that while the policy addressed some responsibilities of health care providers in responding to sexual assault incidents, it had not been updated to identify how care should be modified for restricted reports of sexual assault. Specifically, the policy addressed topics such as when and where forensic examinations should be conducted and health care provider responsibilities for transferring evidence to law enforcement. However, it did not mention DOD’s policy on restricted reporting or provide guidance, for example, on the use of non-identifying information to label and store evidence collected from restricted reports of sexual assault. At another location, we found that a command’s medical policy contained requirements for health care personnel that conflicted with their responsibilities under restricted reporting. The policy required the command’s medical department representatives to document all injuries and referrals of personnel for care, and to keep the commanding officer and chain of command informed of medical conditions that affect the health, safety, and readiness of all command personnel. However, the policy was silent on the issue of sexual assault and did not identify exceptions to these requirements or offer health care providers alternative procedures for documenting and reporting medical issues associated with restricted reports of sexual assault. As a result, we found that military health care providers do not have a consistent understanding of their responsibilities in caring for sexual assault victims.
We met with senior medical personnel from the command who confirmed that provisions in their medical policy conflicted with other command policy and had created confusion for health care providers regarding the extent of their responsibility to maintain the confidentiality of victims who choose to make a restricted report of sexual assault. Such inconsistencies can put DOD’s restricted reporting option at risk, undermine DOD’s efforts to address sexual assault issues, and erode servicemembers’ confidence. As a consequence, sexual assault victims who want to keep their case confidential may be reluctant to seek medical care. DOD requires that personnel designated as first responders to sexual assault incidents, whether in the United States or in deployed environments, receive initial and annual refresher training on topics that include available medical and mental health treatment options. Although DOD provides this required training, we found that first responders we met with were still unsure of the health care services available to sexual assault victims at their respective locations. This was particularly the case among first responders we met with during visits to selected locations in the United States, in part because of the greater range of medical and mental health care options available at those locations. For example, we regularly found that Sexual Assault Response Coordinators, Victim Advocates, and health care personnel differed in their understanding as to where to take a sexual assault victim for a forensic examination—a potentially problematic issue, given that the quality of forensic evidence diminishes the later it is collected following a sexual assault. The Department of Justice’s National Protocol for Sexual Assault Medical Forensic Examinations identifies 72 hours after an assault occurs as the standard cutoff time in many jurisdictions for collecting evidence (except for blood alcohol determinations, which should be done within 24 hours of ingestion of alcohol), but notes that evidence collection beyond that point is possible. Additionally, we found that not all first responders fulfill the requirement to annually complete refresher training on tasks DOD deems essential to their role in responding to incidents of sexual assault. According to DOD’s instruction, first responders are required to complete periodic refresher training on a variety of topics that include management of restricted and unrestricted reports of sexual assault and local protocols and procedures. DOD reported in its fiscal year 2011 Annual Report on Sexual Assault in the Military that while each of the military services continued to implement sexual assault prevention and response training for first responders, not all first responders had completed the required training. For example, DOD reported that for fiscal year 2011, the Army trained only about 6,000 of the more than 17,000 personnel who served as Sexual Assault Response Coordinators or Victim Advocates. Further, DOD’s report noted that only 69 percent of Department of the Navy Victim Advocates—which include Navy and Marine Corps personnel—completed the required training, and that some of the training for Air Force first responders was overdue. As women continue to assume an expanding and evolving role in the military, it is important that DOD be well positioned to meet the health care needs of deployed servicewomen and ensure their readiness.
To the department’s credit, DOD components have taken positive steps toward addressing the female-specific health care needs of deployed servicewomen, and most servicewomen we spoke with at the selected locations we visited during the course of our review indicated that they felt the medical and mental health needs of women were generally being met during deployments. DOD also has taken positive steps in making medical and mental health care services available to sexual assault victims of both sexes. However, DOD’s limited health care guidance on the restricted sexual assault reporting option and first responders’ inconsistent knowledge about available resources are factors that affect the quality and availability of that care. Left unaddressed, such factors can undermine DOD’s efforts to address the problem of sexual assault in the military by eroding servicemembers’ confidence in the department’s programs and decreasing the likelihood that victims of sexual assault will turn to the programs or seek care and treatment when needed. To help ensure that sexual assault victims have consistent access to health care services and the reporting options specified in DOD’s sexual assault prevention and response policies, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to direct the Assistant Secretary of Defense for Health Affairs to develop and implement department-level guidance on the provision of medical and mental health care to victims of sexual assault that specifies health care providers’ responsibilities to respond to and care for sexual assault victims, whether in the United States or in deployed environments. To help ensure that Sexual Assault Response Coordinators, Victim Advocates, and health care personnel have a consistent understanding of the medical and mental health resources available at their respective locations for sexual assault victims, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in collaboration with the military departments, to take steps to improve compliance regarding the completion of annual refresher training on sexual assault prevention and response. In written comments on a draft of this report, DOD stated in its cover letter that, overall, the department did not concur with the report’s findings and conclusions. However, DOD’s cover letter did not provide an explanation for this comment. In an enclosure to its letter, DOD stated that it did not concur with our first recommendation that the Assistant Secretary of Defense for Health Affairs develop and implement department-level guidance on the provision of medical and mental health care to victims of sexual assault that would specify health care providers’ responsibilities to respond to and care for sexual assault victims, whether in the United States or in deployed environments. DOD’s explanation, however, did not make clear why the department did not concur. Instead, DOD provided examples of steps it has been taking that may help to address the findings in this report. Specifically, DOD stated that, while the second version of DOD Instruction 6495.02, entitled “Sexual Assault Prevention and Response (SAPR) Program Procedures,” has been in coordination for nearly 2 years and is not yet published, the revised instruction will be comprehensive and will contain two medical enclosures.
According to DOD, the first medical enclosure will address health care provider procedures and direct the Surgeons General of the military services to carry out responsibilities related to the coordination, evaluation, and implementation of care, while the second medical enclosure will address health care providers’ responsibilities related to Sexual Assault Forensic Examination kits. During the course of this review, we met with DOD officials who had knowledge of and were involved in the instruction’s revision, but these officials did not discuss or share their draft revisions with us when we presented our findings to them. We therefore cannot verify whether the enclosures referenced in DOD’s comments will address our recommendation. However, we plan to review the instruction when DOD finalizes it to determine whether it meets the intent of our recommendation. Finally, DOD stated that the department meets its oversight responsibilities with regard to sexual assault response through training in graduate medical education and through monitoring and oversight of the process that governs credentialing and privileging of providers. However, it is not clear why this statement is applicable to our recommendation. We did not address these points in the finding that led to this recommendation, and our recommendation is focused on the need for additional guidance. DOD concurred, without comment, with our second recommendation that the Under Secretary of Defense for Personnel and Readiness, in collaboration with the military departments, take steps to improve compliance with completing annual refresher training on sexual assault prevention and response. DOD’s comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Assistant Secretary of Defense for Health Affairs, and appropriate congressional committees. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In our review of female-specific health care services provided by DOD to deployed servicewomen, our scope included each of the military services. We focused this review on the health care services available to servicewomen deployed to Afghanistan or aboard Navy vessels at sea. To determine the extent to which DOD is addressing the health care needs of deployed servicewomen, we reviewed legislative requirements and pertinent DOD and service-specific policies and guidance. We also interviewed responsible officials within the Office of the Assistant Secretary of Defense for Health Affairs, each of the services’ medical commands, and the TriService Nursing Research Program and its Women’s Health Research Interest Group. We reviewed guidelines specifically applicable to women, such as guidelines issued by the American College of Obstetricians and Gynecologists, and prior GAO reports. We also obtained information on reported patient encounters for deployed servicemembers for fiscal year 2012 collected by U.S. Central Command and the Navy’s Office of the Surgeon General.
To assess the reliability of these data, we contacted cognizant DOD officials to understand the processes used to collect these data and any known limitations of the data. We found that while the data likely underreported the total number of patient encounters, these data were sufficiently reliable for the purposes of our report—that is, to provide context for the approximate number of reported patient encounters for servicewomen during fiscal year 2012 and the frequency with which such encounters specifically concerned women’s health by summarizing the top 25 diagnoses. In addition, we conducted a total of 15 site visits where we met with health care providers, military commanders, and female servicemembers to obtain their perspectives on DOD’s efforts to address the health care needs of deployed servicewomen. In Afghanistan we visited 7 military installations, selected to enable us to visit each of the three levels of health service support across Afghanistan. Figure 4 shows the locations in Afghanistan we visited during the course of our review, which included Bagram Air Field, Camp Eggers, Camp Leatherneck/Bastion, Camp Phoenix, Camp Stone, Forward Operating Base Fenty, and Forward Operating Base Gardez. We also visited 8 Navy vessels at their home ports in the United States, selected to enable us to visit, for both the U.S. Atlantic and U.S. Pacific Fleets, different types of vessels on which women are an integrated part of the crew. The Navy vessels we visited included the USS George H.W. Bush (CVN 77), USS Boxer (LHD 4), USS Carl Vinson (CVN 70), USS Chancellorsville (CG 62), USS Georgia (SSGN 729), USS McClusky (FFG 41), USS Mesa Verde (LPD 19), and USS Truxtun (DDG 103). Table 1 provides information on the composition of the crew for each of the Navy vessels we visited. To determine the extent to which female-specific health care services are available to deployed servicewomen, we focused on the following female-specific health care services: clinical breast examination; screening mammography; diagnostic mammography; pelvic examination; Pap smear; treatment of patients having abnormal Pap smear results; treatment for disorders of the female genitals; treatment for disorders of menstruation; pregnancy test; and contraceptives, or contraceptive counseling. We also focused on female-specific behavioral health care services, to include mental health and substance abuse counseling. To determine the availability of these services, we obtained information from health care providers during our site visits regarding the services available at each location. If female-specific health care services were not available, we sought to understand how situations requiring such services would be handled during deployments. In the case of the Navy, we also obtained information from senior officials from the Navy Type Commands responsible for overseeing health care units supporting aircraft carriers, surface ships, and submarines. To obtain female servicemembers’ perspectives on women’s health and wellness issues, we conducted 92 one-on-one structured interviews with servicewomen from various pay grades and from all services during our site visits to 7 military installations in Afghanistan and 8 Navy vessels. Our objective in using this approach was to obtain female servicemembers’ perspectives on a range of women’s health and wellness issues, such as specific health issues and challenges women might face in seeking medical care while deployed.
Although the results of our discussions are not generalizable and therefore cannot be projected across DOD, a service, or any single location we visited, they provide insight into the perspectives of servicewomen regarding DOD’s efforts to address the health care needs of deployed servicewomen. Because of the sensitivity of some of the information we were seeking, we took steps to ensure a confidential environment and to encourage open discussion during these interviews. Only female GAO analysts conducted these interviews. To determine the extent to which medical and mental health care are available to servicewomen who are victims of sexual assault, we obtained and reviewed various documents, including legislative requirements and DOD’s and the military services’ policies and guidance establishing requirements for the prevention of and response to sexual assault. We also interviewed knowledgeable officials, including officials from DOD’s Sexual Assault Prevention and Response Office, and reviewed DOD’s fiscal year 2011 Annual Report on Sexual Assault in the Military to identify the department’s efforts to provide medical and mental health services to the 2,420 females who reported to DOD in fiscal year 2011 that they had been victims of sexual assault. In addition to the 7 military installations in Afghanistan and the 8 Navy vessels we visited for our review, we conducted site visits to 3 other military installations in the United States to assess the availability of medical and mental health care services for servicewomen who are victims of sexual assault in the military. To select these additional locations, we asked the military services’ respective Sexual Assault Prevention and Response offices to identify locations that met selected criteria. The locations we visited included Camp Pendleton, California; Davis-Monthan Air Force Base, Arizona; and Joint Base San Antonio, Texas. These locations were selected because they enabled us to meet with military personnel who have served as Sexual Assault Response Coordinators both while deployed and while at a military installation in the United States. During our site visits we met with Sexual Assault Response Coordinators, Victim Advocates, and health care providers. In the United States, we visited or contacted the following organizations:

- Office of the Secretary of the Air Force, Office of the Assistant Secretary of the Air Force for Manpower and Reserve Affairs, Washington, D.C.
- Office of the Air Force Surgeon General, Air Force Medical Support Agency, Falls Church, Virginia
- Office of the Surgeon General, Washington, D.C.
- Bureau of Medicine and Surgery, Office of Women’s Health, Washington, D.C.
- Bureau of Naval Personnel, Office of Women’s Policy, Washington, D.C.
- Commander Naval Air Force, U.S. Atlantic Fleet, Norfolk, Virginia
- Commander Naval Air Force, U.S. Pacific Fleet, San Diego, California
- Commander Naval Surface Force, U.S. Atlantic Fleet, Norfolk, Virginia
- Commander Naval Surface Force, U.S. Pacific Fleet, San Diego, California
- Commander Submarine Force, U.S. Atlantic Fleet, Norfolk, Virginia
- Commander Submarine Force, U.S. Pacific Fleet, Pearl Harbor, Hawaii
- Navy and Marine Corps Public Health Center, Portsmouth, Virginia
- Sexual Assault Prevention and Response Office, Washington, D.C.
- Headquarters, U.S. Marine Corps (Health Services), Arlington, Virginia
- U.S. Marine Corps Sexual Assault Prevention and Response Office

In Afghanistan, we visited or contacted the following organizations:

- International Security Assistance Force Joint Command
- Task Force Medical-Afghanistan
- U.S. Forces Afghanistan
We conducted this performance audit from April 2012 through January 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributors to this report include David E. Moser (Assistant Director), Wesley A. Johnson, Ronald La Due Lake, Kim Mayo, Amanda Miller, Sharon Reid, Cheryl A. Weissman, and K. Nicole Willems. In addition, Carole F. Coffey, Kasea Hamar, David W. Hancock, and Tamiya R. Lunsford provided assistance during site visits.
The roles for women in the military have been expanding and evolving. Servicewomen today are integral to combat, combat support, and counterinsurgency operations, and they serve in many roles they previously did not hold. Pub. L. No. 112-81, § 725 (2011) mandated that GAO conduct a review of the female-specific health care services provided by DOD to female servicemembers, including the treatment of servicewomen who are victims of sexual assault. In this report, GAO evaluates the extent to which (1) DOD is addressing the health care needs of deployed servicewomen; (2) female-specific health care services are available to deployed servicewomen; and (3) medical and mental health care are available to servicewomen who are victims of sexual assault. GAO reviewed pertinent DOD policies, guidance, and data. GAO also met with health care providers, servicewomen, and others during site visits to 18 locations where servicewomen are currently serving or deployed, including 15 locations in Afghanistan and aboard Navy vessels. The Department of Defense (DOD) is taking steps to address the health care needs of deployed servicewomen. For example, DOD has put in place policies and guidance that include female-specific aspects to help address the health care needs of servicewomen during deployment. Also, as part of pre-deployment preparations, servicewomen are screened for potentially deployment-limiting conditions, such as pregnancy, and DOD officials and health care providers with whom GAO met noted that such screening helps ensure that many female-specific health care needs are addressed prior to deployment. GAO also found that DOD components have conducted reviews of the health care needs of servicewomen during deployments and are collecting data on the medical services provided to deployed servicewomen. At the 15 selected locations GAO visited in Afghanistan and aboard Navy vessels, health care providers and most servicewomen indicated that the available health care services generally met deployed servicewomen's needs. In Afghanistan and aboard Navy vessels, health care providers said they were capable of providing a wide range of the female-specific health care services that deployed servicewomen might seek, and servicewomen GAO spoke with indicated that deployed women's needs were generally being met. Specifically, based on information provided by the 92 servicewomen GAO interviewed, 60 indicated that they felt the medical and mental health needs of women were generally being met during deployments; 8 indicated they did not feel those needs were being met; an additional 8 indicated a mixed opinion; and 16 said they did not have an opinion. For example, some servicewomen told GAO that they were satisfied with their military health care, given the operating environment. Among those who expressed dissatisfaction with their military health care, GAO heard a concern about difficulty in obtaining medications. Among those who expressed mixed views, a comment was raised that junior health care providers were limited in the types of procedures they could perform and lacked practical experience. DOD has taken steps to provide medical and mental health care to victims of sexual assault, but several factors affect the availability of care. For example, this care can vary by service and can be affected by operational factors, such as transportation and communication challenges, that are inherent to the deployed environment.
Further, military health care providers do not have a consistent understanding of their responsibilities in caring for sexual assault victims because the department has not established guidance for the treatment of injuries stemming from sexual assault—which requires that specific steps be taken while providing care to help ensure a victim's right to confidentiality. Additionally, while the services provide required annual refresher training to first responders, GAO found that some of these responders were not always aware of the health care services available to sexual assault victims because not all of them are completing the required training. Without a clearer understanding of their responsibilities, health care providers and first responders will be impeded in their ability to provide effective support for servicewomen who are victims of sexual assault. To enhance the medical and mental health care for servicewomen who are victims of sexual assault, GAO recommends that DOD (1) develop department-level guidance on the provision of care to victims of sexual assault and (2) take steps to improve first responders' compliance with the department's requirements for annual refresher training. DOD did not concur with the first recommendation but cited steps it is taking that appear consistent with the recommendation. DOD concurred with the second recommendation.
Since EPA was created in 1970, the agency has been responsible for enforcing the nation’s environmental laws. This responsibility has traditionally involved monitoring compliance by those in the regulated community (such as factories or small businesses that release pollutants into the environment or use hazardous chemicals), ensuring that violations are properly identified and reported, and ensuring that timely and appropriate enforcement actions are taken against violators when necessary. Most major federal environmental statutes, including the Clean Water Act, permit EPA to allow states under certain circumstances to implement key programs and to enforce their requirements. EPA establishes by regulation the requirements for state enforcement authority, such as the authority to seek injunctive relief and civil and criminal penalties. EPA also outlines by policy and guidance its views as to the elements of an acceptable state enforcement program, such as necessary legislative authorities and the type and timing of enforcement actions for various violations, and tracks how well states comply. Environmental statutes generally provide authority for EPA to take appropriate enforcement action against violators in states that have been delegated authority for these programs when the states fail to initiate enforcement action. The statutes also provide that EPA may withdraw approval of a state’s program if the program is not administered or enforced adequately. EPA administers its environmental enforcement responsibilities through its headquarters Office of Enforcement and Compliance Assurance (OECA). While OECA provides overall direction on enforcement policies, and sometimes takes direct enforcement action, it carries out much of its enforcement responsibilities through EPA’s 10 regional offices. These offices are responsible for taking direct enforcement action and for overseeing the enforcement programs of state agencies in those instances in which the state has been delegated such enforcement authority. EPA has established principles for its enforcement and compliance program. State guidance, providing the framework for state/EPA enforcement agreements, has been in place since 1986. According to EPA, this state guidance, together with statute-specific guidance, is the blueprint for both EPA and state enforcement and compliance programs and serves as the basis for both authorizing and reviewing state programs. OECA expects the regions to take a systematic approach to administering and overseeing the enforcement programs among delegated and nondelegated programs and, in doing so, to follow the policies and guidance issued for this purpose. While federal and state enforcement officials agree that core enforcement requirements should generally be implemented consistently, according to EPA some variation is to be expected—and, in some cases, encouraged. For example, EPA expects some variation in how regions target resources to the most significant compliance issues in different regions and states; in the level of enforcement activity, which should vary with the severity of the problem; and in the level of regional oversight of state enforcement programs, with greater oversight provided for weaker programs. As we noted in our 2000 report on the consistency of EPA’s regions in enforcing environmental requirements, some variation in environmental enforcement is necessary to take into account local conditions and local concerns.
At the same time, EPA enforcement officials readily acknowledged that core enforcement requirements must be consistently implemented and that, to ensure fairness and equitable treatment, similar violations should be met with similar enforcement responses, regardless of geographic location. However, when we reviewed EPA’s enforcement efforts, we found that variations among EPA’s regional offices had led to inconsistencies in the actions they take to enforce environmental requirements. For example, we found that

- inspection coverage by EPA and state enforcement staff varied for facilities discharging pollutants within each region,
- the number and type of enforcement actions taken by EPA’s regions varied,
- the size of the penalties assessed and the criteria used in determining penalties varied by region, and
- the regions’ overall strategies in overseeing the states within their jurisdiction varied, which may have resulted in more in-depth reviews in some regional programs than in others.

EPA headquarters officials responsible for the water program explained that such variation was fairly commonplace and had posed problems. The director of OECA’s water enforcement division, for example, said that enforcement responses to similar violations were weaker in certain regions than in others, and that such inconsistencies had increased. We identified a number of factors that contributed to variations in EPA’s enforcement, including the following:

- differences in philosophical approaches among enforcement staff about how best to achieve compliance with environmental requirements,
- differences in state laws and enforcement authorities, and in the manner in which regions respond to these differences,
- variations in the resources available to both state and regional enforcement programs,
- the flexibility afforded by EPA policies and guidance, which allow states a degree of latitude in their enforcement programs, and
- incomplete and inadequate enforcement data, which, among other things, hamper EPA’s ability to accurately characterize the extent of variations.

We also noted in our 2000 report that EPA headquarters enforcement officials were developing performance information that would allow for comparisons among both regions and states in their conduct of key enforcement responsibilities. Such assessments were expected to highlight any major program variations and would be communicated through the issuance of periodic status reports. A number of EPA regional offices were also developing and applying new audit protocols in their state reviews and encouraging more effective communication between and among regional and state enforcement staff. But we also concluded that a number of factors would continue to challenge EPA’s ability to ensure reasonably consistent enforcement across its regions. Among the most important of these factors was the absence of reliable data on how both states and regions are performing their enforcement responsibilities. In 2007, we again examined EPA’s efforts to improve oversight of state enforcement activities. At that time, we reported that EPA had improved its oversight of state enforcement programs by implementing the State Review Framework (SRF). We noted that EPA’s implementation of the SRF gave it the potential to provide for the first time a consistent approach for overseeing authorized states’ compliance and enforcement programs.
Nonetheless, we also reported that the SRF had identified several significant weaknesses in how states enforce their environmental laws in accordance with federal requirements. For example, reviews conducted under the framework found that the states were not properly documenting inspection findings or how they calculate or assess penalties, as provided by EPA’s enforcement policy and guidance; that the states were not adequately entering significant violations noted in their inspection reports into EPA databases; and that the states lacked adequate or appropriate penalty authority or policies. While we recognized the value in EPA’s identification and documentation of these findings, we also reported that EPA had not developed a plan for how it would uniformly address them in a timely manner, nor had the agency identified the root causes of the weaknesses, although some EPA and state officials attributed the weaknesses to causes such as increased workloads concomitant with budgetary reductions. We concluded that, until EPA addressed enforcement weaknesses and their causes, it faced limitations in determining whether the states are performing timely and appropriate enforcement and whether they are applying penalties to environmental violators in a fair and consistent manner within and among the states. In 2000 and 2007, we made several recommendations to EPA to address the concerns that we identified with the agency’s enforcement programs. For example, in 2000, we recommended that EPA develop a comprehensive strategy to adequately address problems with the quality of the agency’s enforcement data and issue guidance to the regions describing the required elements of audit protocols to be used in overseeing state enforcement programs. In 2007, we recommended that, to enhance EPA’s oversight of regional and state enforcement activities consistent with federal requirements, the agency (1) identify lessons learned and develop an action plan to address significant issues, (2) address resource issues such as state staffing levels and resource requirements, (3) publish the results of the SRF reviews so that the public and others will know how well state enforcement programs are working, and (4) conduct a performance assessment of regional enforcement programs similar to the SRF. EPA generally agreed with most of the recommendations we made in 2007 but did not specifically comment on the recommendations we made in 2000. Although EPA has taken steps to address the recommendations in our 2000 report, it has not yet implemented the recommendations in our 2007 report. In 2005, we reported that the scope of EPA’s responsibilities under the Clean Water Act had increased significantly since 1972, along with the workload associated with implementing and enforcing the act’s requirements. For example, EPA’s implementation of the 1987 amendments, which expanded the scope of the act by regulating storm water runoff, resulted in (1) increasing the number of regulated industrial and municipal facilities by an estimated 186,000 and (2) adding hundreds of thousands of construction projects to states’ and regions’ workloads for the storm water program. At the same time, EPA had authorized states to take on more responsibilities, shifting the agency’s workload from direct implementation to oversight.
In 2007, we reported that while overall funding to regions and authorized states for carrying out enforcement activities had increased from fiscal years 1997 through 2006, these increases had not kept pace with inflation and the growth in enforcement responsibilities. Over the 10-year period we reviewed, EPA’s enforcement funding to the regions increased from $288 million in fiscal year 1997 to $322 million in fiscal year 2006 but declined in real terms by 8 percent (the sketch below illustrates this inflation adjustment). Both EPA and state officials told us they found it difficult to respond to new requirements while carrying out their previous responsibilities. In 2007, officials in OECA and EPA’s Office of the Chief Financial Officer told us that in recent years OECA headquarters had absorbed decreases in OECA’s total enforcement funding to prevent further reductions to the regions. We determined that enforcement funding for OECA headquarters increased from $197 million in fiscal year 2002 to $200 million in fiscal year 2006—a 9 percent decline in real terms. During the same period, regional enforcement funding increased from $279 million to $322 million—a 4 percent increase in real terms. EPA also reduced the size of the regional enforcement workforce by about 5 percent between fiscal years 1997 and 2006, from 2,568 full-time equivalent (FTE) staff in fiscal year 1997 to 2,434 FTEs in fiscal year 2006. In comparison, the OECA headquarters workforce declined 1 percent, and EPA’s total workforce increased 1 percent, during the same period. However, these changes in FTEs were not uniform across the 10 regions. For example, two regions—Region 9 (San Francisco) and Region 10 (Seattle)—experienced increases in their workforce: Region 9 increased 5 percent, from 229 to 242 FTEs, and Region 10 increased 6 percent, from 161 to 170 FTEs. In contrast, two regions—Region 1 (Boston) and Region 2 (New York)—experienced the largest declines: Region 1 declined 15 percent, from 195 to 166 FTEs, and Region 2 declined 13 percent, from 291 to 254 FTEs. Although we recognized that resources had not kept pace with EPA’s responsibilities under the Clean Water Act, we also found that EPA’s process for budgeting and allocating resources did not fully consider the agency’s current workload, either for specific statutory requirements, such as those included in the Clean Water Act, or for the broader goals and objectives in the agency’s strategic plan. Instead, EPA made incremental adjustments and relied primarily on historical precedent when making resource allocations. In 2005, we concluded that changes at the margin may not be sufficient because both the nature and distribution of the Clean Water Act workload had changed, the scope of activities regulated under the act had increased, and EPA had taken on new responsibilities while shifting others to the states. While we reported in 2005 that EPA had taken some actions to improve resource planning, we also found that it faced a number of challenges that hindered comprehensive reform in this area. Specifically, we identified several efforts that EPA had initiated to improve the agency’s ability to strategically plan its workforce and other resources. While some of these efforts were not directly related to workforce planning, we found that they had the potential to give the agency some of the information it needed to support a systematic, data-driven method for budgeting and allocating resources.
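To make the “real terms” comparisons above concrete, the following minimal sketch shows the arithmetic of deflating a nominal dollar figure before computing a percentage change. The cumulative inflation factor in the sketch is an assumption, not a figure from our reports; it is simply a value consistent with the reported 8 percent real-terms decline in regional enforcement funding.

```python
# Minimal sketch of the inflation adjustment described above.
# ASSUMED_FACTOR_1997_2006 is an assumption, not a figure from the report:
# it is roughly the cumulative price growth implied by the reported
# 8 percent real-terms decline in regional enforcement funding.

def real_percent_change(nominal_start, nominal_end, inflation_factor):
    """Percentage change after deflating the end-year figure to start-year dollars."""
    real_end = nominal_end / inflation_factor  # express end-year dollars in start-year terms
    return (real_end - nominal_start) / nominal_start * 100

FY1997_FUNDING = 288  # regional enforcement funding, millions of nominal dollars (from the report)
FY2006_FUNDING = 322
ASSUMED_FACTOR_1997_2006 = 1.22  # assumed cumulative inflation, FY1997-FY2006

change = real_percent_change(FY1997_FUNDING, FY2006_FUNDING, ASSUMED_FACTOR_1997_2006)
print(f"Real change: {change:.1f}%")  # about -8.4%, consistent with the ~8 percent decline
```

The same nominal-versus-real distinction underlies the penalty trends discussed later in this statement, where we found that reporting nominal rather than inflation-adjusted penalties reduced the precision of trend analyses.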
In addition, we identified two initiatives within the Office of Water that we believed had the potential to provide relevant and useful information for a data-driven approach to budgeting and allocating resources. First, beginning in December 1998, EPA and the states collaborated on a state resource analysis for water quality management to develop an estimate of the resources that states needed to fully implement the Clean Water Act. The primary focus of the project was identifying the gap between states’ needs and available resources. To develop the estimates of the gap, EPA and the states created a detailed model of activities associated with implementing the Clean Water Act, the average time it took to complete such activities, and the costs of performing them. The National Academy of Public Administration subsequently reviewed the model and determined that the underlying methodology was sound, and recommended that EPA and the states refine the model to support data-driven grant allocation decisions. However, as we reported, the agency did not implement the recommendation, citing resource constraints and reluctance on the part of some states. Second, in 2003, the Office of Water implemented an initiative called the Permitting for Environmental Results Strategy to respond to circumstances that were making it increasingly difficult for EPA and the states to meet their responsibilities under the Clean Water Act. According to EPA, in addition to the scope and complexity of the act expanding over time, the states were also facing an increasing number of lawsuits and petitions to withdraw their authorization to administer some Clean Water Act programs. As part of its effort to identify and resolve performance problems in individual states, EPA and the states were developing profiles containing detailed data on the responsibilities, resources, and workload demands of each state and region. We concluded that this information would be useful to any comprehensive and systematic resource planning method adopted by the agency. Nonetheless, we also identified a number of larger challenges that EPA would face as it tried to adopt a more systematic process for budgeting and resource allocation. Specifically, we found that EPA would be challenged in obtaining complete and reliable data on key workload indicators, which we concluded would be the most significant obstacle to developing a systematic, data-driven approach to resource allocation. Without comprehensive and reliable data on workload, EPA cannot accurately identify where agency resources, such as staff with particular skills, are most needed. EPA officials told us that some of the key workload factors related to controlling point and nonpoint source pollution include the number of point source dischargers, the number of wet weather dischargers, and the quantity and quality of water in particular areas. However, we reported that for some of this information, the relevant databases may not have the comprehensive, accurate, and reliable information that is needed by the agency. Even with better workload data, we found in 2005 that EPA would also find it difficult to implement a systematic, data-driven approach to resource allocation without staff support for such a process. Support might not be easily forthcoming because, according to EPA officials in several offices and regions, staff were reluctant to accept a data-driven approach after their experience in using workload models during the 1980s. 
At that time, each major program office used a model to allocate resources to the agency’s regional offices. When the models were initially developed, agency officials believed they were useful because EPA’s programs were rapidly expanding as the Congress passed new environmental laws. Over time, however, the expansion of EPA’s responsibilities leveled off, and its impact on the relative workload of regions was not as significant. The change in the rate of the workload expansion, combined with increasingly constrained federal resources during the late 1980s, meant that the workload models were only being used to allocate changes at the margins. The agency stopped using the models in the early 1990s because, according to officials, staff spent an unreasonable amount of time negotiating relatively minor changes in regional resources. To address the concerns that we identified with EPA’s resource allocation and planning processes for the enforcement programs, in 2005 we made several recommendations to the agency. Specifically, we recommended that EPA identify relevant workload indicators that drive resource needs, ensure that relevant data are complete and reliable, and use the results to inform budgeting and resource allocation decisions. In responding to our recommendations, EPA voiced concerns that a bottom-up workload assessment contrasts with its approach, which links budgeting and resource allocation to performance goals and results. However, we reiterated our belief that assessing workload and how it drives resources was fully compatible with EPA’s approach. In 2008, when we again reported on EPA’s resource allocation process, we found that the process was essentially the same as we reported in 2005 and that the agency had not made progress on implementing our recommendations. In 2007, we reported that, despite the interdependence between EPA and the states in carrying out enforcement responsibilities, effective working relationships have historically been difficult to establish and maintain, based on reports by GAO, EPA’s Office of Inspector General, the National Academy of Public Administration, and others. We identified the following three key issues that have affected EPA and state relationships in the past:

- EPA’s funding allocations to the states did not fully reflect the differences among the states’ enforcement workload and their relative ability to enforce state environmental programs consistent with federal requirements. In this regard, EPA lacked information on the capacity of both the states and EPA’s regions to effectively carry out their enforcement programs, because the agency had done little to assess the overall enforcement workload of the states and regions and the number and skills of people needed to implement enforcement tasks, duties, and responsibilities. Furthermore, the states’ capacity continued to evolve as they assumed a greater role in the day-to-day management of enforcement activities, workload changes occurred as a result of new environmental legislation, new technologies were introduced, and state populations shifted.

- Problems in EPA’s enforcement planning and priority setting processes resulted in misunderstandings between OECA, regional offices, and the states regarding their respective enforcement roles, responsibilities, and priorities. States raised concerns that EPA sometimes “micromanaged” state programs without explaining its reasons for doing so and often did not adequately consult the states before making decisions affecting them.
- OECA had not established a consistent national strategy for overseeing states’ enforcement of EPA programs. Consequently, the regional offices were not consistent in how they oversaw the states. Some regional offices conducted more in-depth state reviews than others, and states in these regions raised concerns that their regulated facilities were being held to differing standards of compliance than facilities in states located in other regions.

Our 2007 report acknowledged that EPA had made substantial progress in improving priority setting and enforcement planning with states through its system for setting national enforcement priorities and the National Environmental Performance Partnership System (NEPPS), which was designed to give states demonstrating strong environmental performance greater flexibility and autonomy in planning and operating their environmental programs. We concluded that NEPPS had fostered a more cooperative relationship with the states and that EPA and the states had also made some progress in using NEPPS for joint planning and resource allocation. State participation in the partnership had grown from 6 pilot states in fiscal year 1996 to 41 states in fiscal year 2006. In 2008, we reported that EPA relies on a variety of measures to assess and report on the effectiveness of its civil and criminal enforcement programs. For example, EPA counts the penalties assessed as a result of its enforcement efforts among its long-standing measures of accomplishment. The agency uses its discretion to estimate the appropriate penalty amount based on individual case circumstances. EPA has developed penalty policies as guidance for determining appropriate penalties in civil administrative cases and referred civil judicial cases. The policies are based on environmental statutes and have an important goal of deterring potential polluters from violating environmental laws and regulations. The purpose of EPA’s penalties is to eliminate the economic benefit a violator gained from noncompliance and to reflect the gravity of the alleged harm to the environment or public health. In addition to penalties, EPA has established what it considers two major performance measures for its civil enforcement program: (1) the value of injunctive relief—the monetary value of future investments necessary for an alleged violator to come into compliance—and (2) pollution reduction—the pounds of pollution to be reduced, treated, or eliminated as a result of an enforcement action. EPA relies on these measures, among others, in pursuing its national enforcement priorities and its overall strategy of fewer, but higher impact, cases. However, unless these measures are meaningful, the Congress and the public will not be able to determine the effectiveness of the enforcement program. When we reviewed EPA’s assessed penalties data, we determined that total inflation-adjusted penalties declined from fiscal years 1998 to 2007 when major default judgments are excluded. When adjusted for inflation, total assessed penalties were approximately $240.6 million in fiscal year 1998 and $137.7 million in fiscal year 2007. Moreover, we identified three shortcomings in how EPA calculates and reports penalty information to the Congress and the public that may result in an inaccurate assessment of the program. Specifically, we reported that EPA was

- overstating the impact of its enforcement programs by reporting penalties assessed against violators rather than actual penalties received by the U.S. Treasury;
(2) reducing the precision of trend analyses by reporting nominal rather than inflation-adjusted penalties, thereby understating past accomplishments; and (3) understating the influence of its enforcement programs by excluding the portion of penalties awarded to states in federal cases. In contrast to penalties, we found that both the value of estimated injunctive relief and the amount of pollution reduction reported by EPA generally increased. The estimated value of injunctive relief increased from $4.4 billion in fiscal year 1999 to $10.9 billion in fiscal year 2007, in 2008 dollars. In addition, estimated pollution reduction commitments amounted to 714 million pounds in fiscal year 2000 and increased to 890 million pounds in fiscal year 2007. However, we identified several shortcomings in how EPA calculates and reports this information as well. We found that EPA's reports generally did not clearly disclose the following: (1) annual amounts of injunctive relief and pollution reduction have not yet been achieved; they are based on estimates of relief and reductions to be realized when violators come into compliance; (2) estimates of the value of injunctive relief are based on case-by-case analyses by EPA's technical experts, and in some cases the estimates include information provided by the alleged violator; and (3) pollution reduction estimates are understated because the agency calculates pollution reduction for only 1 year at the anticipated time of full compliance, though reductions may occur for many years into the future. In addition, we identified a number of factors that affected EPA's process for achieving annual results in terms of penalties, estimated value of injunctive relief, and amounts of pollution reduction. These factors included the following. First, the Department of Justice (DOJ), not EPA, is primarily responsible for prosecuting and settling civil judicial and criminal enforcement cases, and Executive Order 12988 directs DOJ, whenever feasible, to seek settlements before pursuing civil judicial actions against alleged violators. Second, unclear legal standards have hindered EPA's enforcement efforts, as illustrated by the 2006 Supreme Court decision Rapanos v. United States. This case generally made it more difficult for EPA to take enforcement actions because the legal standards for determining what is a "water of the United States" were not clear. In our 2008 report, we recommended that EPA take a number of actions to improve the accuracy and transparency of the information that it reports to the Congress and the public regarding penalties assessed, the value of injunctive relief, and estimates of pollution reduction. EPA generally agreed with most of our recommendations and stated that it would consider making these changes in the future. In conclusion, our work over the past 9 years has shown that the Clean Water Act has significantly increased EPA's and the states' enforcement responsibilities, that available resources have not kept pace with these increased needs, and that actions are needed to further strengthen the enforcement program. To address these concerns, we have made several recommendations to EPA; however, EPA's implementation of our recommendations has been uneven, and several of the issues that we have identified over the last decade remain unaddressed today. The agency still needs comprehensive, accurate, and reliable data that would allow it to better target limited resources to those regions and potential pollution problems of the greatest concern.
The agency still needs better processes to plan and allocate resources to ensure that the greatest risks are being addressed. Finally, the agency needs accurate and transparent measures to report on whether the Clean Water Act is being implemented consistently across the country in all regions and whether like violations are being addressed in the same manner. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or other committee Members might have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Anu Mittal at (202) 512-3841 or mittala@gao.gov. Key contributors to this testimony were Steve Elstein, Diane Raynes, Ed Kratzer, Sherry McDonald, Antoinette Capaccio, and Alison O'Neill. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
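To illustrate the inflation-adjustment point raised in the statement above, the following is a minimal Python sketch of how a nominal penalty series can be restated in constant dollars for trend reporting. The nominal amounts and index levels below are hypothetical placeholders (the statement reports only the adjusted totals), so the sketch shows only the mechanics, not EPA's actual data.

    # Restating nominal penalties in constant (base-year) dollars.
    # All figures below are hypothetical placeholders, not EPA or CPI data.
    nominal_penalties_millions = {1998: 190.0, 2002: 160.0, 2007: 137.7}
    price_index = {1998: 79.0, 2002: 88.0, 2007: 100.0}  # base year 2007 = 100
    BASE_YEAR = 2007

    def to_constant_dollars(amount, year, base_year=BASE_YEAR):
        """Convert a nominal amount to base-year dollars using the price index."""
        return amount * price_index[base_year] / price_index[year]

    for year in sorted(nominal_penalties_millions):
        nominal = nominal_penalties_millions[year]
        real = to_constant_dollars(nominal, year)
        print(f"FY{year}: ${nominal:.1f}M nominal -> ${real:.1f}M in {BASE_YEAR} dollars")

Under placeholder inputs like these, a gently declining nominal series can mask a much steeper real decline, which is why reporting nominal amounts alone reduces the precision of trend analyses.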
Congress enacted the Clean Water Act to help reduce water pollution and improve the health of the nation's waterways. The Environmental Protection Agency (EPA) administers its enforcement responsibilities under the act through its Office of Enforcement and Compliance Assurance (OECA), as well as its 10 regional offices and the states. Over the last 9 years, GAO has undertaken a number of reviews of EPA's environmental enforcement activities, including its enforcement of the Clean Water Act. For this testimony statement, GAO was asked to summarize the results of five prior reports on the effectiveness of EPA's enforcement program. Specifically, this statement includes information on the (1) factors that cause variations in enforcement activities and lead to inconsistencies across regions, (2) impact that inadequate resources and workforce planning has had on enforcement, (3) efforts EPA has taken to improve priority planning, and (4) accuracy and transparency of measures of program effectiveness. GAO's prior recommendations have included the need for EPA to collect more complete and reliable data, develop improved guidance, and establish better performance measures. Although EPA has generally agreed with these recommendations, its implementation has been uneven. GAO is not making new recommendations in this statement. In 2000, GAO found variations among EPA's regional offices in the actions they take to enforce environmental requirements. For example, the regions varied in the inspection coverage of facilities discharging pollutants, the number and type of enforcement actions taken, and the size of the penalties assessed and the criteria used in determining penalties. GAO also found that variations in the regions' strategies for overseeing state programs may have resulted in more in-depth reviews in some regional programs than in others. Several factors contributed to these variations, including differences in the philosophical approaches among enforcement staff about how best to achieve compliance with environmental requirements, differences in state laws and enforcement authorities and how the regions respond to these differences, variations in resources available to state and regional offices, the flexibility afforded by EPA policies and guidance that allow latitude in state enforcement programs, and incomplete and inadequate enforcement data that hampered EPA's ability to accurately characterize the extent of variations. In 2007, GAO reported improvements in EPA's oversight of state enforcement activities with the implementation of a state review framework. However, while this framework helped identify several weaknesses in state programs, the agency had not developed a plan for how it would uniformly address these weaknesses or identify their root causes. In 2005, GAO reported that the scope of EPA's responsibilities under the Clean Water Act, along with the workload associated with implementing and enforcing the act's requirements, had increased significantly. At the same time, EPA had authorized states to take on more responsibilities, shifting the agency's workload from direct implementation to oversight. In 2007, GAO reported that while overall funding for enforcement activities had increased from $288 million in fiscal year 1997 to $322 million in fiscal year 2006, resources had not kept pace with inflation or the increased responsibilities.
Both EPA and state officials told GAO that they found it difficult to respond to new requirements while carrying out previous responsibilities, and regional offices had reduced enforcement staff by about 5 percent. In 2005, GAO also reported that EPA's process for budgeting and allocating resources did not fully consider the agency's workload, either for specific statutory requirements such as those included in the Clean Water Act or for the broader goals and objectives in the agency's strategic plan. Any efforts made by the agency to develop a more systematic process would be hampered by the lack of comprehensive and accurate workload data. In 2007, GAO reported that EPA had made substantial progress in improving priority setting and enforcement planning with states through its system for setting national enforcement priorities, and that this had fostered a more cooperative relationship with the states. Finally, in 2008, GAO reported that EPA could improve the accuracy and transparency of some of the measures that it uses to assess and report on the effectiveness of its civil and criminal enforcement programs. GAO identified shortcomings in how EPA calculates and reports these data that may prevent the agency from providing Congress and the public with a fair assessment of the programs.
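As a back-of-the-envelope check on the finding above that funding did not keep pace with inflation, the sketch below compounds the fiscal year 1997 funding level forward at an assumed average inflation rate. The 2.5 percent rate is an illustrative assumption, not an official index value.

    # Did a nominal increase from $288M (FY1997) to $322M (FY2006) keep pace
    # with inflation? The 2.5 percent average annual rate is an assumption.
    ASSUMED_ANNUAL_INFLATION = 0.025
    funding_fy1997 = 288.0  # millions of dollars
    funding_fy2006 = 322.0
    years = 2006 - 1997

    needed_fy2006 = funding_fy1997 * (1 + ASSUMED_ANNUAL_INFLATION) ** years
    print(f"FY2006 funding needed to match FY1997 in real terms: ~${needed_fy2006:.0f}M")
    print(f"Actual FY2006 funding: ${funding_fy2006:.0f}M")
    print(f"Real-terms shortfall under this assumption: ~${needed_fy2006 - funding_fy2006:.0f}M")

Under this assumed rate, fiscal year 2006 funding would have needed to be roughly $360 million just to hold even in real terms, consistent with GAO's conclusion.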
As table 1 shows, at the end of fiscal year 2013, USPS had about $100 billion in unfunded liabilities for pension, retiree health, and workers' compensation benefits as well as outstanding debt. These unfunded liabilities have increased by 62 percent since fiscal year 2007. Since fiscal year 2007, USPS has experienced significant financial challenges. USPS's gap between expenses and revenues has grown significantly. In fiscal year 2009, we returned USPS to our high-risk list due, in part, to a projected loss of $7 billion—and an actual loss of over $8.5 billion—in fiscal year 2010. Also, USPS did not make retiree health benefit prefunding payments totaling $16.7 billion due during fiscal years 2011 through 2013. In addition, USPS's outstanding debt to the U.S. Treasury increased from $2.1 billion at fiscal year-end 2006 to its current statutory borrowing limit of $15 billion. As shown in figure 1, USPS's debt and unfunded liabilities have become a large and growing burden—increasing from 83 percent of USPS's revenues in fiscal year 2007 to 148 percent of revenues in fiscal year 2013. USPS's dire financial condition makes paying for these liabilities highly challenging. In the short term, USPS lacks liquidity to fund needed capital investments and cannot increase its liquidity through borrowing since USPS has hit its $15 billion statutory debt limit. At the end of fiscal year 2013, USPS held unrestricted cash of $2.3 billion, which it states represents approximately 9 days of average daily expenses. This level of liquidity could be insufficient to support operations in the event of another significant downturn in mail volume. In the long term, USPS will be challenged to pay for its unfunded liabilities on a smaller base of First-Class Mail, its most profitable product. First-Class Mail volume has declined 37 percent since it peaked in fiscal year 2001. In addition, USPS's five-year business plan projects this volume will continue declining by about 5 to 6 percent annually. The extent to which USPS has funded its benefit liabilities varies as a result of different statutory funding requirements specific to each benefit program as well as USPS's financial means to make funding payments. For example, prefunding of USPS's pension benefits has been required over decades, and as a result, USPS's pension liability is over 90 percent funded. Prefunding USPS's retiree health benefits began in 2007, and at a fairly aggressive pace, and the liability is about half funded at present. In contrast, under the Federal Employees' Compensation Act (FECA), USPS funds its workers' compensation benefits on a pay-as-you-go basis, pursuant to statutory requirements, so the entire FECA liability is unfunded. Also, as discussed further below, the ongoing prefunding requirements—i.e., the rules for calculating the amount that USPS must pay each year—differ among the pension, retiree health, and workers' compensation programs. For each of the four post-employment benefit programs—Civil Service Retirement System (CSRS), Federal Employees Retirement System (FERS), retiree health, and workers' compensation—table 2 illustrates, as of the end of fiscal year 2013, USPS's liability, the value of the assets that have been set aside, the funded percentage, and the unfunded liability. The funded percentages are 91 percent for CSRS, 101 percent (i.e., a slight surplus) for FERS, 49 percent for retiree health, and 0 percent for workers' compensation.
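The funded percentage and the unfunded amount are two views of the same liability-versus-assets comparison. A minimal Python sketch of that arithmetic follows, using the rounded figures cited in this testimony; the derived liability and asset totals are therefore approximations, not USPS's exact actuarial values.

    # funded = assets / liability and unfunded = liability - assets,
    # so liability = unfunded / (1 - funded). Inputs are rounded figures
    # from this statement and its summary; outputs are approximations.
    programs = {
        "retiree health":         (0.49, 48.0),  # (funded fraction, unfunded $B)
        "pensions (CSRS + FERS)": (0.94, 19.0),  # combined, per the 94% summary figure
        "workers' compensation":  (0.00, 17.0),
    }

    for name, (funded, unfunded) in programs.items():
        liability = unfunded / (1.0 - funded)
        assets = liability - unfunded
        print(f"{name}: liability ~${liability:.0f}B, assets ~${assets:.0f}B, "
              f"{funded:.0%} funded, ${unfunded:.0f}B unfunded")

Note that this formula applies only to underfunded programs; FERS alone, at 101 percent funded, is in slight surplus, which the combined 94 percent pension figure nets against the CSRS shortfall.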
The unfunded liabilities, in order of decreasing size, are $48 billion for retiree health, $19 billion for pensions, and $17 billion for workers' compensation. These sum to about $85 billion, which, together with USPS's debt to the Treasury of $15 billion, adds up to the $100 billion of total debt and unfunded liabilities cited earlier. USPS's benefit liabilities are actuarial estimates of the present value of a portion of the future benefits projected to be paid under each program based on formulas in current law. Specifically, for both the pension and retiree health programs, the liability includes two pieces: (1) the present value of all projected future benefits for current retirees and their beneficiaries, plus (2) the present value of a portion of the projected future benefits for current employees and their beneficiaries, based on employees' service to date (with each additional year of service adding to the liability, such that approximately the full liability is accrued when employees reach retirement). Contrary to statements made by some employee groups and other stakeholders, these liabilities do not include any amounts for future USPS employees not yet hired or born. The workers' compensation liability represents the present value of all projected future benefits for former employees who have sustained an injury and are eligible for benefits; it does not include a provision for projected future injuries to current employees. These liability measurements depend on a combination of economic and demographic assumptions regarding such factors as future investment returns, interest rates, inflation, salary increases, medical costs, and longevity. These liability measurements inherently contain significant degrees of uncertainty and can change from year to year, both because of actual experience differing from the assumptions and because of changes to the assumptions themselves, which can occur in response to emerging experience and changing conditions. As an example of the sensitivity of these liabilities to changes in assumptions, USPS has estimated that its $48 billion unfunded liability for retiree health benefits could have ranged from $35 billion to $64 billion, solely by varying the inflation rate by 1 percent in either direction. USPS's pension and retiree health liabilities are estimated using demographic and pay-increase assumptions developed for the federal workforce as a whole, rather than assumptions developed for the USPS workforce in particular. Some have suggested that USPS's benefit liabilities may be overstated, in that the use of USPS-specific assumptions would result in a lower liability measurement. In 2013, we testified that we support using the most accurate numbers possible. We suggested that if USPS-specific assumptions are used, the assumptions should continue to be recommended by an independent body (such as OPM's Board of Actuaries). USPS's ongoing prefunding contributions are governed by separate rules applying to the funding of its CSRS, FERS, retiree health benefit, and workers' compensation liabilities. These separate rules include variations in amortization periods, recognition of any surpluses, use of actuarially determined versus fixed payments, and actuarial assumptions. The Postal Accountability and Enhancement Act (PAEA) eliminated USPS's agency contributions for CSRS, as USPS had a CSRS surplus at that time.
The surplus of $17 billion was transferred to the new Postal Service Retiree Health Benefits Fund (PSRHBF) to begin prefunding USPS's retiree health liability. Under current law, USPS is not required to make any prefunding contributions for CSRS prior to fiscal year 2017. If USPS were to have an unfunded CSRS liability in 2017 (for example, if the current unfunded CSRS liability of $20 billion persists), USPS would have to begin making prefunding payments to eliminate the unfunded liability by September 30, 2043—i.e., over a 27-year amortization period from fiscal years 2017 to 2043. If USPS were to have a CSRS surplus as of the close of any of the fiscal years ending in 2015, 2025, 2035, or 2039, the CSRS surplus would be transferred to the PSRHBF. For FERS, USPS is annually required to contribute its share of the "normal cost" plus an amortization payment toward any existing unfunded liability. The normal cost is the annual expected growth in the liability attributable to an additional year of employees' service. The amortization payment toward any unfunded liability is determined using a 30-year amortization period. Since USPS has had a FERS surplus for a number of years, it has not had to make any amortization payments, only its normal cost payments. Current law does not provide any provision for utilization of any FERS surplus, as discussed further in the next section. USPS made FERS normal cost payments of $3.5 billion in fiscal year 2013. Unlike its pension liability, prior to 2007 USPS had been funding its retiree health liability on a pay-as-you-go basis—an approach in which USPS paid its share of premiums for existing retirees, with no prefunding for any future premiums expected to be paid on behalf of current retirees and employees. We have drawn attention to USPS's retiree health benefit liability over the past decade. In May 2003, the Comptroller General testified that USPS's accounting treatment—which reflected the pay-as-you-go nature of its funding—did not reflect the economic reality of its legal liability to pay for its retiree health benefits, and that current ratepayers were not paying for the full costs of the services they were receiving. Consequently, the pension benefits being earned by USPS employees—which were being prefunded—were being recovered through current postal rates, but the retiree health benefits of those same employees were not being recognized in rates until after they retired. The Comptroller General testified that without a change, a sharp escalation in postal rates in future years would be necessary to fund the cost of retiree health benefits on a pay-as-you-go basis. In 2006, PAEA established requirements for USPS to begin prefunding its retiree health benefits. USPS stated in its 2007 Annual Report that such prefunding was a "farsighted and responsible action that placed the Postal Service in the vanguard of both the public and private sectors in providing future security for its employees, and augured well for our long-term financial stability," but also acknowledged that the required payments would be a considerable financial challenge in the near term. PAEA required USPS to make "fixed" prefunding payments to the PSRHBF, ranging from $5.4 billion to $5.8 billion per year, due each fiscal year from 2007 through 2016. As noted above, USPS did not make the three required annual payments due during fiscal years 2011 through 2013. We have referred to these 10 years of required payments as "fixed" because the amounts are specified in statute rather than calculated based on an actuarial measurement of the liability.
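The 27-year CSRS and 30-year FERS schedules described above rest on standard level-payment amortization. The following is a minimal Python sketch; the 5 percent interest rate is an assumed placeholder, since actual valuations use assumptions recommended by OPM's Board of Actuaries.

    # Level annual payment that retires an unfunded liability over n years.
    def level_amortization_payment(unfunded, years, rate):
        """Standard annuity formula: payment = U * r / (1 - (1 + r)**-n)."""
        if rate == 0:
            return unfunded / years
        return unfunded * rate / (1.0 - (1.0 + rate) ** -years)

    # Example: the $20 billion unfunded CSRS liability cited above, amortized
    # over 27 years at an assumed 5 percent rate.
    payment = level_amortization_payment(20.0, 27, 0.05)
    print(f"Annual payment: ~${payment:.2f} billion")  # roughly $1.37 billion

The same function with years=30 mirrors the FERS treatment, and a surplus amortized this way would reduce, rather than add to, the annual contribution.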
In addition to these prefunding requirements, USPS is also required to continue paying its share of health benefit premiums for current retirees and their beneficiaries, payments that USPS has been making. USPS paid $2.9 billion for its share of retiree health benefit premiums in fiscal year 2013. USPS's $5.4 billion retiree health benefit prefunding payment due at the end of fiscal year 2009 was reduced to $1.4 billion (Pub. L. No. 111-68, § 164 (Oct. 1, 2009)). We reported on USPS's retiree health prefunding requirements in GAO-13-112, noting that the required payments were not designed to fully fund the retiree health liability over a period of just 10 years, as has sometimes been stated. However, we have reported that the required payments are significantly "frontloaded," in that the total payments required in the first 10 years (fiscal years 2007–2016) were significantly in excess of estimates of what actuarially determined amounts would be. The Federal Employees' Compensation Act (FECA) is the workers' compensation program for federal employees, including USPS. FECA is managed by the Department of Labor (DOL) and provides benefits paid out of the Employees' Compensation Fund to federal employees who sustained injuries or illnesses while performing federal duties. USPS funds its workers' compensation under a pay-as-you-go system by annually reimbursing DOL for all workers' compensation benefits paid to or on behalf of postal employees in the previous year. USPS reimbursed DOL $1.4 billion for fiscal year 2013. Without congressional action to address USPS's benefit funding issues and better align its costs and revenues, USPS faces continuing low liquidity levels, insufficient revenues to make annual prefunding payments, and increasing benefit liabilities. Deferring funding could increase costs for future ratepayers and increase the possibility that USPS may not be able to pay for these costs. USPS stated that in the short term, should circumstances leave the agency with insufficient cash, it would be required to implement contingency plans to ensure that mail delivery continues. These measures could require that USPS prioritize payments to employees and suppliers ahead of some payments to the federal government. For example, as discussed previously, near the end of fiscal year 2011, in order to maintain its liquidity USPS temporarily halted its regular FERS contribution. However, USPS has since made up those missed FERS payments. According to USPS, current projections indicate that it will be unable to make the required $5.7 billion retiree health benefit prefunding payment due in September 2014. USPS has stated that its cash position will worsen in October 2014 when it is required to make an estimated payment of $1.4 billion to DOL for its annual workers' compensation reimbursement. USPS's statements about its liquidity raise the issue of whether USPS will need additional financial help to remain operational while it restructures and, more fundamentally, whether it can remain financially self-sustainable in the long term. We have previously reported that Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS's financial viability. In previous reports, we have discussed a range of strategies and options, to both reduce costs and enhance revenues, that Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS's ability to reduce costs and improve efficiency.
We have also reported that it is important for USPS to align its expenses and revenues to avoid even greater financial losses, repay its outstanding debt, and increase capital for investments needed to sustain its national network. In addition, we have reported that Congress needs to modify USPS's retiree health prefunding payments in a fiscally responsible manner, and that USPS should prefund any unfunded retiree health liability to the maximum extent that its finances permit. Implementing strategies and options to better align costs with revenues would better enable USPS to be in a financial position to fund and pay for its debt and unfunded benefit liabilities. With any unfunded liability comes the risk of being unable to pay for it in the future. This risk can be heightened when future revenues are declining or highly uncertain, as is the case for USPS. We have reported on several rationales for prefunding pension and retiree health benefits. Some of the same reasoning could be applied to workers' compensation benefits as well. The benefits of prefunding include the following: (1) achieving an equitable allocation of cost over time by paying for retirement benefits during employees' working years, when such benefits are earned (for USPS, this is about equity between current and future postal ratepayers and is in line with helping USPS align costs with revenues; an additional consideration here is the "legacy" unfunded liability that was not paid by ratepayers in prior years); (2) protecting the future viability of the enterprise by not saddling it with bills after employees have retired; (3) providing greater benefit security to employees, retirees, and their beneficiaries, since prefunded benefits are more secure against the future risks of benefit cuts or inability to pay; and (4) providing security to any third party that might become responsible in the event of the enterprise's inability to pay for some or all of the unfunded liability. Prefunding decisions also involve trade-offs between USPS's current financial condition and its long-term prospects. While reducing unfunded liabilities protects the future viability of the organization, no prefunding approach will be viable unless USPS can make the required payments, but attempting to do so in the short term could further strain its finances. USPS currently lacks liquidity, and postal costs would need to decrease or postal revenues to increase, or both, for USPS to make required prefunding payments. To the extent prefunding payments are postponed, larger payments will be required later, when they likely would be supported by less First-Class Mail volume and revenue. In 2012, we developed projections of USPS's future levels of liability and unfunded liability for its retiree health benefits. These projections showed that current law would result in a significant reduction of USPS's future unfunded liability if USPS resumed making the required payments. However, USPS has indicated that it does not expect to make any of the remaining fixed prefunding payments through fiscal year 2016, an intention that means its unfunded liability would increase and its future payments would be greater. From the perspective of all USPS's post-employment benefit programs, any relaxation of funding requirements in the short term—for example, by suspending retiree health prefunding for a period of years—will result in a higher overall unfunded liability for these programs in total.
Nonetheless, Congress has to consider the balance between (1) providing USPS with liquidity that provides breathing room in the short term in order to restructure its operations for long-term success, and (2) protecting USPS, its employees and retirees, and other stakeholders in the long term by funding its liabilities for benefits that have already been earned or accrued. It is also important to note that unfunded liabilities can be reduced in either of two ways. An unfunded liability is the difference between the liability and its supporting assets. As such, an unfunded liability can be reduced by increasing the amount of assets (i.e., through prefunding), but it can also be reduced by decreasing the size of the liability, such as by decreasing benefit levels or USPS's share of such benefit costs, where such a reduction is deemed to be feasible, fair, and appropriate. We have reported on proposals to increase the integration of USPS's retiree health benefits with Medicare, which would have the effect of reducing USPS's liability but would also involve other policy considerations. In our prior reports, we have identified funding issues related to USPS's unfunded liabilities that remain unresolved and have identified potential methods for addressing these issues.
Actuarial assumptions: We support making the most accurate measurements possible of USPS's benefit liabilities, and support the development and use of assumptions specific to USPS's population of plan participants. We have suggested that if USPS-specific assumptions are used, the assumptions should continue to be recommended by an independent body, such as OPM's Board of Actuaries.
Fixed versus actuarially determined payments: We have reported that the retiree health prefunding schedule established under PAEA was significantly frontloaded, with total payment requirements through fiscal year 2016 that were significantly in excess of what actuarially determined amounts would be. We added that Congress needs to modify these payments in a fiscally responsible manner. We support proposals to replace the fixed payments with actuarially determined amounts.
Funding targets: We have expressed concern about proposals to reduce the ultimate funding target for USPS's retiree health liability from the current target of 100 percent down to 80 percent. Such a reduction would have the effect of carrying a permanent unfunded liability equal to roughly 20 percent of USPS's liability, which could be a significant amount. If an 80 percent funding target were implemented because of concerns about USPS's ability to achieve a 100 percent target level within a particular time frame, an additional policy option to consider could include a schedule to achieve 100 percent funding in a subsequent time period after the 80 percent level is achieved.
FERS surplus: Under current law, USPS's payments to FERS increase, appropriately, when USPS has an unfunded FERS liability, but USPS realizes no financial benefit when it has a FERS surplus. We have reported that we would support a remedy to this asymmetric treatment, but we have reported on important trade-offs to consider for different types of remedies. While the most recent estimate shows a relatively small FERS surplus for USPS—an estimated $0.5 billion—USPS has stated that it believes its FERS surplus would have been substantially larger if its FERS liability had been estimated using postal-specific demographic and pay increase assumptions.
A conservative approach to permit USPS to access any FERS surplus would be to use it to reduce USPS's annual FERS contribution by amortizing the surplus over 30 years (which would mirror the legally required treatment of an unfunded FERS liability). Another approach would be to reduce USPS's annual FERS contribution by offsetting it against the full amount of surplus each year until the surplus is used up; this would be comparable to what occurs for private-sector pension plans. We have previously suggested that any return of the entire surplus all at once should be done with care, given the inherent uncertainty of the estimated liability and the existence of USPS's other unfunded liabilities. A one-time-only return of the entire surplus should be considered as a one-time exigent action and only as part of a larger package of postal reforms and restructurings. Any provision that would return a surplus whenever one developed would likely eventually result in an unfunded liability. In conclusion, we again emphasize that deferring the funding of benefit liabilities could increase costs for future ratepayers and increase the possibility that USPS may not be able to pay for its benefit costs, and that USPS should work to reduce its unfunded liabilities to the maximum extent that its finances permit. Ultimately, however, the viability of funding promised benefits depends on the financial viability of USPS's underlying business model. We continue to recommend that Congress adopt a comprehensive package of actions that will facilitate USPS's ability to align costs with revenues based on changes in the workload and the use of mail. Chairman Farenthold, Ranking Member Lynch, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information about this statement, please contact Frank Todisco, Chief Actuary, FSA, MAAA, EA, Applied Research and Methods, at (202) 512-2834 or todiscof@gao.gov. Mr. Todisco meets the qualification standards of the American Academy of Actuaries to render the actuarial opinions contained in this testimony. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contact named above, Lorelei St. James, Director, Physical Infrastructure Issues; Teresa Anderson; Samer Abbas; Lauren Fassler; Thanh Lu; and Crystal Wesco made important contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
USPS continues to be in a serious financial crisis, with insufficient revenue to cover its expenses and financial obligations as the volume of USPS's most profitable product, First-Class Mail, continues to decline. At the end of fiscal year 2013, USPS had about $100 billion in unfunded liabilities and outstanding debt: $85 billion in unfunded liabilities for benefits, including retiree health, pension, and workers' compensation liabilities, and $15 billion in outstanding debt to the U.S. Treasury—the statutory limit. These unfunded liabilities are a large and growing financial burden, increasing from 83 percent of USPS revenues in fiscal year 2007 to 148 percent of revenues in fiscal year 2013. Unfunded benefit liabilities represent estimated future benefit payments to current and retired employees for which USPS has not set aside sufficient money to pay. This testimony discusses (1) the extent to which USPS's benefit liabilities are unfunded and (2) the potential impacts of USPS's unfunded benefit liabilities absent action by Congress to address them, and key policy issues for consideration. This testimony is based primarily on GAO's work over the past 4 years and updated USPS financial information for fiscal year 2013. GAO has previously reported that a comprehensive package of legislative actions is needed so that USPS can achieve financial viability and assure adequate benefits funding for more than 1 million postal employees and retirees. GAO has also previously identified various approaches Congress could consider to restructure the funding of USPS retiree health benefits and pensions. The extent to which the U.S. Postal Service (USPS) has funded its liabilities varies due to different statutory funding requirements specific to each benefit program and USPS's financial means to make payments. For example, USPS has been required to prefund its pension benefit liability over decades, and as shown in the table below, its pension liability is 94 percent funded. Prefunding USPS's retiree health benefits began in 2007, and the liability is about half funded. In contrast, USPS funds its workers' compensation benefits on a pay-as-you-go basis, and the entire liability is unfunded. The largest unfunded liabilities, in order of decreasing size, are $48 billion for retiree health, $19 billion for pensions, and $17 billion for workers' compensation. The rules for calculating the amount that USPS must fund each year differ among the pension and retiree health programs, including variations in amortization periods, recognition of any surpluses, use of actuarially determined versus fixed payments, and actuarial assumptions. Reasons for prefunding include fairly allocating costs between current and future ratepayers, protecting USPS's future viability, providing greater benefit security to employees and retirees, and protecting potential third parties. Prefunding decisions involve trade-offs between USPS's current financial condition and its long-term prospects. Congress needs to modify USPS's retiree health prefunding payments in a fiscally responsible manner, and USPS should prefund any unfunded retiree health benefits liability to the maximum extent that its finances permit. Lowering the retiree health funding target from 100 to 80 percent would have the effect of carrying a permanent unfunded liability. USPS liabilities are estimated using assumptions for the federal workforce as a whole, rather than USPS-specific assumptions.
GAO supports the use of the most accurate actuarial assumptions available and has suggested that, if USPS-specific assumptions are used, they be recommended by an independent body.
For decades, oil has been relatively inexpensive and plentiful, helping to spur the United States' economic growth. Despite price spikes primarily caused by instability in the Middle East and other oil-producing regions or by natural disasters, the price of oil has historically returned to low levels. However, in recent years, increasing world consumption of oil has put more upward pressure on the price of oil, making the price less likely to return to low levels. Figure 1 shows the volatility of the oil market because of political instability and natural disasters, but also illustrates an upward trend in price in recent years. In 2005, the world consumed about 84 million barrels of oil per day, and world oil production has been running at near capacity to meet the growing demand. DOE's Energy Information Administration projects that world oil consumption will continue to grow, reaching about 118 million barrels per day in 2030. In February 2007, we reported that most studies, amidst much uncertainty, estimate that oil production will peak sometime between now and 2040, which could lead to rapid increases in oil prices. We concluded that the United States—which consumes about one-quarter of the world's oil and is about 97 percent dependent on oil for transportation—would be particularly vulnerable to the projected price increases. Fuel cells convert the chemical energy in hydrogen—or a hydrogen-rich fuel—and oxygen to create electricity with low environmental impact. Although fuel cells can use a variety of fuels, hydrogen is preferred because of the ease with which it can be converted to electricity and its ability to combine with oxygen to emit only water and heat. Fuel cells look and function much like batteries. However, for a battery, all the energy available is stored within the battery, and its performance will decline as that stored energy is depleted. A fuel cell, on the other hand, continues to convert chemical energy to electricity as long as fuel is fed into the fuel cell. Like a battery, a typical fuel cell consists of an electrolyte—a conductive medium—and an anode and a cathode sandwiched between plates to generate an electrochemical reaction. (See fig. 2.) Like the respective negative and positive sides of a battery, the current flows into the anode and out of the cathode. Fuel cells typically are classified according to their type of electrolyte and fuel. Table 1 identifies the various types of fuel cells and their uses. NASA began conducting R&D on hydrogen and fuel cells in the 1960s to develop a simple alkaline fuel cell for the space program. However, alkaline fuel cells do not work well for cars, in part because of their propensity to be damaged by carbon dioxide. In response to the 1973 oil embargo, the federal government began conducting R&D to improve automobile efficiency and reduce the U.S. transportation sector's dependence on oil by developing technologies for using alternative fuels, including (1) ethanol from corn and other biomass, (2) synthetic liquids from shale oil and liquefied coal, and (3) hydrogen directly used in internal combustion engines. In 1977, DOE's Los Alamos National Laboratory began R&D on fuel cells called polymer electrolyte membrane or proton exchange membrane, which have a low operating temperature, need only hydrogen and oxygen from the air, and are very efficient. However, DOE and industry reduced R&D funding for alternative fuels during the 1980s, when crude oil prices returned to historic levels.
DOE formed an R&D partnership with the U.S. Council for Automotive Research (USCAR) to develop fuel cell technologies for vehicles, and it also began efforts to demonstrate and deploy other types of fuel cells for stationary and portable applications. In 1993, DOE and USCAR formed the Partnership for a New Generation of Vehicles to (1) improve competitiveness in vehicle manufacturing, (2) implement commercially viable innovations, and (3) develop vehicles with up to three times the fuel efficiency of comparable 1994 family sedans. DOE further focused its hydrogen R&D in response to the National Energy Policy issued in 2001, which highlighted hydrogen as one of several R&D priorities. DOE hosted several meetings and workshops, including two major workshops in 2001 and 2002 that were designed to develop an R&D agenda and involved stakeholders from industry, universities, environmental organizations, federal and state agencies, and national laboratories. These meetings and workshops laid the groundwork for identifying a common R&D vision and challenges, and each DOE program has used meetings and workshops to develop separate detailed R&D plans that set near-term and long-term targets to enable commercialization decisions by 2015. In February 2004, DOE integrated these plans into its first Hydrogen Posture Plan, a single high-level agenda. The Hydrogen Posture Plan's approach is to conduct R&D in multiple pathways within key technology areas with the intent of providing several promising options for industry to consider commercializing. For example, DOE is using a mix of fossil, renewable, and nuclear energy to develop and demonstrate technologies that can extract hydrogen from a variety of sources, including natural gas, coal, biomass, water, algae, and microbes. DOE officials state that they prioritize the most promising technologies and terminate specific efforts that show little potential. Based on its review of the posture plan, the National Academy of Engineering made 48 recommendations, most of which were incorporated by DOE, including focusing on both applied and fundamental science R&D. In addition to the R&D funded through the Hydrogen Fuel Initiative, DOE conducts R&D on various other hydrogen-related technologies. For example, the Office of Fossil Energy is working on a hydrogen-based solid oxide fuel cell, with funding provided through the Solid State Energy Conversion Alliance, for stationary applications of electricity generation. Fossil Energy's R&D plan for extracting hydrogen from coal complements a separately funded demonstration program called FutureGen. The effort is designed to construct a prototype integrated gasification combined-cycle coal power plant to be operational by 2015 that will demonstrate production of hydrogen as well as reduced emissions. Fossil Energy also funds R&D on the capture and sequestration of carbon dioxide, considered an important area of R&D if coal is to be used as a long-term source of hydrogen. The Office of Nuclear Energy's R&D plan for producing hydrogen using nuclear energy—called the Nuclear Hydrogen Initiative—complements the separately funded Next Generation Nuclear Plant program. The effort focuses on conducting R&D on a new generation of nuclear power plants capable of producing large amounts of hydrogen efficiently and economically. The first prototype is scheduled to be operational between 2018 and 2021.
DOE’s hydrogen R&D program has made important progress, but some target dates have been pushed back, and further progress in certain areas will require significant scientific advances and continued R&D beyond 2015. Specifically, during its first 4 years, the Hydrogen Fuel Initiative has achieved such targets as reducing the cost of extracting hydrogen from natural gas, but other target dates have slipped as a result of technical challenges and budget constraints. For example, DOE officials and industry representatives stated that achieving targets for hydrogen storage will require fundamental breakthroughs, while achieving targets for other technologies will require significant scientific advances and cost reductions. However, DOE has not updated its 2006 Hydrogen Posture Plan’s overall assessment of what the department reasonably expects to achieve by its technology readiness date in 2015 and its anticipated R&D funding needs to meet the 2015 target. Furthermore, full-scale deployment of hydrogen technologies will require sustained industry and federal investment, possibly for decades beyond 2015, to develop supporting infrastructure. According to DOE, key R&D targets to achieve technology readiness in 2015 focus primarily on (1) extracting hydrogen from diverse, domestic resources at a cost equivalent to about $2 to $3 per gallon of gasoline, (2) storing hydrogen on-board vehicles to enable a driving range of at least 300 miles for most light duty vehicles, (3) delivering hydrogen between two points for less than $1 per kilogram, and (4) developing proton exchange membrane fuel cells that cost about $30 per kilowatt and deliver at least 5,000 hours of service for vehicles—which compares to about 150,000 miles in conventional gasoline-powered vehicles—and at least 40,000 hours for stationary applications. As shown in table 2, DOE has made progress on meeting some of its near-term targets, in both applied and fundamental science, important stepping stones for meeting DOE’s 2015 targets. For hydrogen to compete with gasoline, DOE must be able to produce hydrogen at prices that approximate the cost of gasoline. Specifically, in the near term, DOE must extract hydrogen from natural gas at a cost of $2 to $3 per gallon of gasoline equivalent and, in the longer term, develop biomass and biomass-derived liquids at similar costs or, for large centralized production facilities, at costs less than $2 per gallon of gasoline equivalent. DOE has established targets of less than $2 per gallon of gasoline equivalent for extracting hydrogen from water using wind energy and from coal using coal energy. The latter technology must also demonstrate carbon capture and sequestration. Other technologies being explored include producing hydrogen from biological, photoelectrochemical, and nuclear processes, but are long-term efforts. Technologies for extracting hydrogen from diverse sources generally are known and usually involve heat or chemical processes to separate hydrogen from various compounds. DOE reported that it has met its target of extracting hydrogen from natural gas through a process called steam reformation, reducing cost to less than $3 per gallon of gasoline equivalent, nearly one-half of the $5 per gallon of gasoline equivalent that industry had achieved in 2003. As a result, DOE has begun to phase out R&D in steam reformation of natural gas and plans to focus its resources in higher priority areas, leaving industry to continue to refine the steam reformation process and reduce its cost. 
DOE, however, has pushed back its target dates for extracting hydrogen from biomass and water using wind energy from 2015 to 2017. Specifically, DOE is conducting research on reducing the cost of extracting hydrogen from biomass-derived liquids such as ethanol, but the cost of producing ethanol is still too high to make the technologies competitive. DOE also is developing technologies to cost-efficiently extract hydrogen from biomass using a gasification process. Gasification involves heating the biomass to a temperature high enough to separate the hydrogen, but the gasification technologies do not yet meet cost targets. DOE's Office of Fossil Energy leads the effort for extracting hydrogen from coal—also using a gasification technology—and has made progress in developing membranes that can separate hydrogen in the 500 to 900 degrees Fahrenheit gasification process. The R&D effort complements Fossil Energy's FutureGen program, which is scheduled to have a 275-megawatt demonstration plant operational by 2015. DOE's Office of Nuclear Energy leads the effort to use nuclear energy to produce hydrogen, primarily from water. These R&D efforts involve development of a new generation of nuclear reactors that are more efficient and operate at very high temperatures. The Office of Nuclear Energy reports that an engineering-scale demonstration effort for hydrogen production has been pushed back from 2017 to between 2018 and 2021. Because steam reformation of natural gas is the most mature technology, natural gas is expected to be the primary source of hydrogen through the next 20 years. However, extracting hydrogen from natural gas will simply substitute one fossil fuel for another with similar vulnerabilities to supply disruptions and adverse environmental effects. In the long term, DOE is developing technologies that rely on renewable or nuclear energy from non-carbon-producing sources. DOE officials noted that although these R&D efforts do not require fundamental advances in science, they generally acknowledged that developing the technologies will take years of applied scientific effort before costs can be reduced enough to be competitive with gasoline. One challenge, for example, is minimizing carbon or sulfur impurities when extracting hydrogen from coal. Impurities can shorten the life span of the separation membranes used in the gasification process and can also affect the life span and performance of fuel cells. Although higher-temperature stationary fuel cells—such as solid oxide fuel cells operating at temperatures exceeding 1,200 degrees Fahrenheit—are more tolerant of impurities, lower-temperature proton exchange membrane vehicle fuel cells begin to fail when impurities are present. For hydrogen fuel cell vehicles to compete with conventional gasoline vehicles, DOE must develop technologies to store enough hydrogen on board the vehicle to achieve a driving range of at least 300 miles without compromising passenger or cargo space and while meeting all consumer expectations for performance, safety, refueling ease, and cost. In addition, DOE must develop technologies to store and dispense enough hydrogen at fueling stations to meet consumer needs. None of the current technologies have attained these requirements, and none is likely to do so without fundamental scientific breakthroughs, according to DOE officials and industry representatives.
Although hydrogen has almost three times the energy content of gasoline on a weight basis, on a volume basis it contains roughly one-fourth the energy of gasoline. This means that, to obtain equivalent amounts of energy within specified space constraints, a much larger volume of hydrogen than gasoline must be stored, raising the technical challenges and the cost. Currently, hydrogen is most commonly stored as a gas, compressed under high pressure, or is super-cooled to a liquid, but neither technology is likely to meet DOE's 2015 performance and cost targets. For example, hydrogen can currently be compressed to 10,000 pounds per square inch, which is about the highest level of compression being considered because of safety and cost concerns, yet this method stores less than half the hydrogen necessary to meet DOE's 2015 performance targets and costs more than nine times DOE's 2015 cost target. Similarly, liquid hydrogen, which must be cryogenically maintained at negative 423 degrees Fahrenheit, typically requires about one-third of its energy content to liquefy the hydrogen. Storing hydrogen in its denser liquid form has a higher storage capacity than compressed hydrogen, but there are challenges related to keeping the hydrogen insulated and losing some hydrogen due to evaporation. Scientists at Los Alamos National Laboratory succeeded in developing materials that have the potential to meet DOE's 2010 technical targets for chemically storing hydrogen, although it is not clear if the materials will meet cost targets. The scientists used a liquid boron-based compound to bind the hydrogen. Boron, from which the household cleaner borax is derived, readily forms compounds with other chemicals and can be recycled for reuse. The compound binds and releases hydrogen and, in liquid form, can also be used to transport hydrogen through pipelines or in trucks. The National Renewable Energy Laboratory has also made significant progress in developing new nanostructure materials. Scientists have designed these materials with pores at the nanometer scale to resemble globes with many branches or foam structures pocked with holes to significantly increase the surface area on which to bind hydrogen. Recent efforts include manufacturing the nanostructures with boron or calcium compounds, both of which bind and release hydrogen. Likewise, scientists at Sandia National Laboratories have also made progress, improving storage of hydrogen by 50 percent between 2004 and 2006 by developing new materials that absorb hydrogen. DOE is continuing R&D in compression and liquefaction of hydrogen, in particular, because DOE contends that these technologies will be important for early market penetration. However, for commercial-scale deployment of hydrogen technologies, DOE officials and industry representatives agree that an alternative storage method must be found. DOE's R&D focus is on developing new materials that can store hydrogen without requiring high pressures or cryogenic temperatures. These efforts focus on developing new materials that can store hydrogen on the surface of a material (called "adsorption"), absorb the hydrogen into a material, or bind the hydrogen within a chemical compound. Adsorption and absorption R&D typically involve nanotechnology to develop new materials structured to increase surface area.
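The weight-versus-volume comparison at the start of this discussion can be checked with approximate, commonly published lower heating values. A minimal Python sketch follows; the constants are rounded literature figures, not DOE target values.

    # Approximate energy densities (lower heating values, rounded):
    H2_MJ_PER_KG = 120.0           # hydrogen
    GASOLINE_MJ_PER_KG = 44.0      # gasoline
    H2_LIQUID_KG_PER_L = 0.071     # liquid hydrogen density
    GASOLINE_KG_PER_L = 0.74       # gasoline density

    h2_mj_per_liter = H2_MJ_PER_KG * H2_LIQUID_KG_PER_L             # ~8.5 MJ/L
    gasoline_mj_per_liter = GASOLINE_MJ_PER_KG * GASOLINE_KG_PER_L  # ~32.6 MJ/L

    print(f"By weight: hydrogen has {H2_MJ_PER_KG / GASOLINE_MJ_PER_KG:.1f}x "
          f"the energy of gasoline")                                # ~2.7x
    print(f"By volume: gasoline has {gasoline_mj_per_liter / h2_mj_per_liter:.1f}x "
          f"the energy of even liquid hydrogen")                    # ~3.8x

Compressed gas, even at 10,000 pounds per square inch, is less dense still than liquid hydrogen, which is the volumetric gap the storage R&D described above is trying to close.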
Chemical storage of hydrogen has additional challenges, including processing centers that would be needed to bind and release hydrogen from the chemical carrier before the hydrogen can be used by consumers, raising the overall costs. In the last few years, a number of materials have been developed, but not within the energy, temperature, or cost required for commercial-scale deployment. Successful commercialization of hydrogen fuel cell technologies—particularly hydrogen fuel cell vehicles—will depend upon a hydrogen delivery infrastructure that provides the same level of safety, convenience, and functionality as the existing gasoline delivery infrastructure. The delivery infrastructure will initially need to support hydrogen production at small facilities distributed throughout the country and, eventually, larger centralized facilities. The delivery infrastructure includes operations at the refueling site itself, such as compression, storage, and dispensing, as well as the actual delivery of hydrogen. DOE developed its 2015 targets with significant input from industry. Specifically, DOE used a sophisticated model for estimating hydrogen delivery costs for a city the size of Indianapolis with 50 percent of the vehicles being hydrogen fuel cell vehicles and with central production of hydrogen located 60 miles from the city's edge. DOE determined that the cost of delivering hydrogen to fueling stations must be less than $1 per gallon of gasoline equivalent. This cost includes operations at the delivery site, such as transferring the hydrogen to storage or dispensing equipment. To put DOE's R&D requirements in perspective, the cost of delivering gasoline from a Gulf Coast refinery to a fuel pump in Dallas, Texas, has been estimated at about $0.18 per gallon. Currently, hydrogen is delivered by truck as a liquid or gas or through a modest pipeline infrastructure, but delivery costs mostly range from $4 to $9 per gallon of gasoline equivalent, so significant advances must be made to reduce costs to meet DOE's targets. Hydrogen is difficult to deliver economically using conventional methods because the hydrogen atom is small and diffuses rapidly, making it difficult to design equipment to prevent leakage. Hydrogen can also corrode the steel used in pipes and trucks, which make up the bulk of current conventional delivery systems. Trucks can carry about 10 times more liquid hydrogen than gaseous hydrogen, but since liquefying hydrogen requires so much energy, hydrogen generally is delivered in gaseous form by truck for distances less than 200 miles and in liquid form for greater distances. In addition, about 630 miles of pipelines currently deliver hydrogen, primarily located near oil refineries mostly along the Gulf Coast, where hydrogen is predominantly used. This infrastructure is modest compared to the over 1.5 million miles of pipelines that already deliver natural gas, oil, and other petroleum-related products in the country. Although these pipelines meet the specific hydrogen needs of industry, they must be operated at a constant pressure and they cost on the order of $1 million per mile. Moreover, hydrogen causes brittleness in pipelines, raising concerns about using current materials to build a larger hydrogen pipeline infrastructure, particularly where line pressures may vary. DOE's priorities in R&D focus on reducing costs for delivering hydrogen in liquid form by truck, in gas form by pipeline, and by binding the hydrogen to a chemical carrier.
Specifically, DOE is continuing its R&D on cryogenic liquefaction of hydrogen to decrease costs and encourage near-term deployment of hydrogen technologies. DOE is also conducting R&D to develop new composite materials for pipes or pipe liners to prevent leaks and pipe failures due to embrittlement. Brittleness in pipes carrying hydrogen is not well understood, and some R&D efforts focus on understanding hydrogen’s reaction with pipe materials. Once hydrogen technology deployment reaches commercial scale, pipelines would provide the lowest cost delivery option. DOE is also researching the potential for delivering hydrogen in chemical form by binding hydrogen to various chemical compounds, avoiding the need for cryogenic liquefaction and improving delivery through pipelines. The chemical compounds include liquids and solids, as well as powders that could flow through pipelines. DOE’s R&D focuses on a carrier that could substantially increase the carrying capacity of hydrogen for more economic delivery through conventional delivery systems, such as pipelines and trucks. However, no chemical carrier has yet been identified that combines high carrying capacity with low energy requirements for binding and releasing hydrogen. Additional R&D focuses on purifying hydrogen that has been transported, since impurities may reduce the life span and operating efficiency of fuel cells. Fuel cells themselves present a similar set of cost and durability challenges. To be competitive, vehicle fuel cells must have a life span and vehicle packaging requirements similar to those of gasoline-powered engines and must be able to operate in the same conditions. Specifically, vehicle fuel cells must have a life span of about 5,000 hours—equivalent to about 150,000 miles of vehicle travel. Furthermore, fuel cells must be able to operate in environments with temperatures ranging from minus 40 degrees to 104 degrees Fahrenheit and must be able to start up quickly at low temperatures with minimal energy consumption. In addition, to meet DOE’s 2015 target, the cost of commercial scale production of vehicle fuel cells must drop from the current $107 per kilowatt to $30 per kilowatt—roughly a quarter of the current cost. Stationary fuel cells must have a longer life span than those for vehicles, up to 40,000 hours, equivalent to about 4.5 years of continuous operation. In the early 1990s, DOE estimated the cost of manufacturing fuel cells at high volume to be about $3,000 per kilowatt. Since then, DOE’s focus has been on materials that can reduce costs at high volume. DOE succeeded in reducing manufacturing costs at high volume to $175 per kilowatt in 2004 and to about $107 per kilowatt in 2006. The cost reductions have been achieved primarily by reducing the amount of platinum required as a catalyst and by developing less expensive membranes. DOE is just beginning to focus R&D efforts on improving processes for commercial scale manufacture of fuel cell components. In particular, DOE has announced its intention to fund R&D for commercial scale manufacture of fuel cells for stationary applications. DOE has achieved a life span of about 1,600 hours for vehicle fuel cells but has not yet demonstrated start-up from sub-freezing temperatures. In addition, although DOE has reduced the cost of fuel cells, significant cost reductions remain to be achieved, in part because fuel cells rely on platinum catalysts.
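The arithmetic behind these targets is worth making explicit. In the sketch below, the 80-kilowatt stack size is a typical automotive figure assumed for illustration, not a DOE specification; the other numbers come from the text.

```python
# Quick arithmetic behind the fuel cell targets and progress cited above.

# Vehicle life span target: 5,000 hours equated to about 150,000 miles
# implies an assumed average speed of about 30 mph.
print(150_000 / 5_000)        # 30.0

# Stationary life span target: 40,000 hours of continuous operation.
print(40_000 / (24 * 365))    # ~4.6 years, i.e., "about 4.5 years"

# Cost targets: $30/kW is roughly a quarter of the 2006 cost of $107/kW;
# the 2006 cost is already only ~4 percent of the early-1990s estimate.
print(30 / 107)               # ~0.28
print(107 / 3_000)            # ~0.036

# For an assumed 80 kW automotive stack, the remaining cost gap is roughly:
print(80 * (107 - 30))        # $6,160 per vehicle
```

Much of that remaining gap traces to the platinum catalyst, discussed next.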
Platinum, which is in high demand primarily for use in catalytic converters for automobiles and in jewelry, is the only catalyst that can generate enough power at low operating temperatures to operate a vehicle. To reduce the cost of fuel cells, DOE’s target calls for decreasing the amount of platinum used by more than 80 percent from 2005 levels by 2015. DOE officials noted that Los Alamos National Laboratory has succeeded in reducing platinum requirements and improving the performance of fuel cells, but they also noted that reliance on the current amount of platinum—considering its rising costs—poses significant challenges to reducing costs enough to meet the 2015 targets. In addition, DOE has not yet met automobile manufacturers’ size and weight packaging requirements for fuel cells. Complex equipment, such as heat exchangers and humidifiers, must be added to the fuel cell to keep it operating at its current 140 to 176 degrees Fahrenheit in a controlled environment of 80 to 100 percent relative humidity. Furthermore, impurities in the hydrogen fuel stream, such as sulfur compounds and carbon monoxide, reduce the performance of the fuel cell, and removing or managing the impurities raises overall costs. Regarding R&D on fuel cells for stationary applications, DOE has demonstrated a life span of about 20,000 hours, about one-half the life span required to meet DOE’s targets. DOE’s fuel cell R&D focuses on reducing costs and improving durability by (1) developing alloys that contain less platinum, (2) developing substitutes for platinum, and (3) developing fuel cells that operate at slightly higher temperatures and lower relative humidity to reduce complex equipment and increase tolerance to impurities. More specifically, DOE is conducting R&D to develop new fuel cell electrodes that can be manufactured with less platinum while increasing durability. DOE is also pursuing R&D on less expensive, better performing substitutes for platinum but has not yet found a substitute that matches platinum’s performance, particularly in achieving the power needed to operate a fuel cell vehicle. In addition, DOE has recently focused R&D on developing fuel cells that operate at 248 degrees Fahrenheit and lower relative humidity to reduce or eliminate complex equipment and increase tolerance to impurities; DOE has not yet developed new materials with these characteristics. Fuel cells for stationary applications generally do not have the same weight and size restrictions as vehicle applications, nor do they face the same rapid fluctuations in power demand, but they pose similar issues of cost and durability. DOE has made important progress in many areas of R&D, but some target dates have been pushed back, primarily as a result of technical challenges and budget constraints, according to DOE officials. Although some industry representatives believe that having ambitious targets is good, they noted that the target dates for certain technologies are very ambitious, particularly given the requirements of incorporating the technology into an integrated system that can be commercially deployed in a real-world environment. For example, although DOE has demonstrated considerable progress in developing new materials for storing hydrogen, the materials currently being investigated operate at temperatures ranging from minus 300 degrees Fahrenheit to more than 700 degrees Fahrenheit.
Of these, only a few fall within DOE’s much narrower target range for operating temperatures, and none meets DOE’s cost targets. Table 3 shows that funding for the Hydrogen Fuel Initiative totaled nearly $1.2 billion for fiscal years 2004 through 2008. Some HTAC members and industry representatives believe that $1.2 billion over 5 years is insufficient to meet DOE’s ambitious technical and cost targets. Furthermore, congressionally directed projects—primarily for activities outside the initiative’s R&D scope—accounted for almost 25 percent of the Hydrogen Fuel Initiative’s budget for fiscal years 2004 through 2006. In response to both budget constraints and technical challenges, DOE has pushed back target dates for certain key technologies—for example, the target date for using wind energy to produce hydrogen was pushed back from 2015 to 2017—and reduced funding for stationary and portable applications that might, through early penetration in small markets, resolve technical issues and stimulate public acceptance of hydrogen vehicles. However, DOE’s hydrogen program manager expressed confidence that DOE remains on schedule for the higher priority targets. Nevertheless, because some target dates have been pushed back 2 or more years, what DOE currently projects for technology readiness in 2015 differs from the original expectations laid out in the 2004 Hydrogen Posture Plan. DOE has not updated its 2006 posture plan to more clearly identify for the Congress and industry what technologies will be ready for industry to consider when making commercialization decisions in 2015, nor has it projected the costs of achieving technology readiness. For example, because some target dates have slipped 2 or more years, the cost of meeting some of the technical targets may exceed DOE’s original planned estimates. However, DOE has not updated estimates of the funding needed to achieve its technology readiness target in 2015. DOE’s Office of Energy Efficiency and Renewable Energy projects that its hydrogen R&D budget will total $750 million for fiscal years 2009 through 2012. DOE officials and industry representatives told us that R&D will need to continue beyond 2015 because some interim target dates have been pushed back. Furthermore, they said that even after the initial technical targets are met, R&D will need to continue well beyond 2015 to further refine and sustain the developing hydrogen technologies. DOE officials noted that they had always planned to conduct R&D beyond the 2015 target date. The officials pointed out that DOE is still conducting R&D to improve conventional gasoline engines, even though the engines have been in use for over 100 years, and that DOE has always planned to do the same for hydrogen technologies. To compete with conventional technologies on a commercial scale—particularly gasoline vehicles—industry would have to match the convenience of the conventional infrastructure, requiring investments of tens of billions of dollars that will most likely take decades to accomplish. DOE reports that if fuel cell vehicles replaced an estimated 300 million gasoline vehicles, over 70 million tons of hydrogen would need to be extracted from various sources each year, requiring the construction of new production facilities throughout the country.
Currently, the United States has approximately 132 operating refineries and 1,300 petroleum product terminals that deliver petroleum products to more than 167,000 retail service stations, truck stops, and marinas located throughout the country. Typical gasoline stations dispense about 1,500 gallons of gasoline each day but store several times that amount on site, usually in underground tanks. DOE officials acknowledged that investments in a hydrogen infrastructure would be considerable but noted that the gasoline infrastructure also required investments of tens of billions of dollars and took decades to develop. Currently, U.S. industries produce over 9 million tons of hydrogen annually, primarily to refine petroleum, manufacture fertilizer, and process foods; most of this hydrogen is produced near its end use along the Gulf Coast and in California to avoid the high cost of delivery. Current production is about one-eighth of the projected need, and most of it is localized in specific areas. Facilities capable of extracting hydrogen economically would have to be constructed throughout the country. Some of these facilities could be co-located with existing gasoline fueling stations, but some stations have spatial limitations that would make siting such equipment difficult. Also, the current cost of delivering hydrogen does not meet cost targets and cannot compete with the gasoline infrastructure. Although pipelines offer more attractive economics for delivering hydrogen than trucks at high market penetration, they require high initial capital investments, estimated at about $1 million per mile. One industry official estimated that building new pipelines along interstate highways capable of serving about 75 percent of the U.S. population would cost approximately $14 billion, assuming there were no barriers prohibiting the effort. The development and use of chemical carriers may allow use of the existing pipeline infrastructure and may also resolve some embrittlement concerns, but such carriers raise other technical and cost challenges, such as storage and recycling of the chemical carriers. For example, existing gasoline stations—already stretched for space—could face additional challenges if equipment were needed on site to separate the hydrogen from a chemical carrier, purify the hydrogen, and store the chemical carrier so it could be returned to a central facility for recycling. Although new fueling stations could be constructed, industry estimates the cost of constructing a new fueling station at about $1 million to $2 million. In addition, other issues, such as safety codes and standards, may affect investment decisions. For example, one industry representative noted that safety concerns among local approving officials, among other things, may prevent some conventional hydrogen storage systems from being buried underground, as is common with gasoline tanks. The National Hydrogen Association also reports that industry must devote substantial effort and resources to educating local officials on codes and standards involving hydrogen-related technologies. Even when hydrogen-related technologies are approved, they often carry a cost premium. For example, typical gasoline dispensing nozzles cost about $40 to $110, but hydrogen dispensing nozzles currently cost about $4,000 each. Some of these high costs could be expected to drop with high-volume manufacturing and competition.
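The figures above give a rough sense of the required scale-up, restated below as simple arithmetic. The implied pipeline mileage and the illustrative station shares are our inferences from the cited estimates, not figures from the underlying sources.

```python
# Scale of the hydrogen infrastructure challenge, from figures in the text.

# Production: ~9 million tons/year today versus ~70 million tons/year if
# fuel cell vehicles replaced an estimated 300 million gasoline vehicles.
print(9 / 70)                        # ~0.13, about one-eighth of the need

# Pipelines: a $14 billion interstate network at about $1 million per mile
# implies on the order of 14,000 miles of new pipe (our inference).
print(14_000_000_000 / 1_000_000)    # 14,000 miles

# Fueling stations: converting even a fraction of the ~167,000 retail
# outlets at $1 million to $2 million per new station adds up quickly.
for share in (0.10, 0.25):
    low = 167_000 * share * 1        # $ millions
    high = 167_000 * share * 2
    print(f"{share:.0%} of outlets: ${low / 1000:.0f}-{high / 1000:.0f} billion")
```

Even these rough numbers help explain why the transition is described as a decades-long undertaking costing tens of billions of dollars.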
DOE officials and industry representatives also acknowledged the high degree of risk for investors, noting that there are other near-term and mid-term options for stationary and vehicle energy technologies. They speculated that the transition to hydrogen fuel cell technologies will most likely start small, in localized markets that rely on the current infrastructure to minimize risk. For example, fuel cell vehicles might start in cities such as Los Angeles or New York, but within limited areas where there is a supporting infrastructure. They agreed that broader expansion of hydrogen fuel cell technologies into the market would likely cost investors tens of billions of dollars in infrastructure costs and take decades. Several energy companies and electric utilities told us that they were unlikely to invest in the hydrogen infrastructure in the near term because of the high cost and high risk; although they expressed interest in investing in the long term, they did not have definitive plans about what investments they might make. Nonetheless, DOE officials and industry representatives stated that transitioning to hydrogen technologies will require a sustained commitment by both industry and the federal government. For example, industry representatives stated that federal tax credits for fuel cell technologies have been authorized for only a few years at a time—too short a period for industry to consider when making long-term investment decisions. To better understand real-world infrastructure challenges in transitioning to hydrogen fuel cell technologies, DOE has several ongoing demonstration projects and modeling analyses. The primary goal of the technology validation effort is to demonstrate complete, integrated systems in a real-world environment. Although individual components may meet DOE’s performance targets, the complete system may not function as intended because of integration problems or unanticipated real-world operating conditions. DOE’s Controlled Hydrogen Fleet and Infrastructure Demonstration and Validation Project, which has paired auto companies with energy companies, is testing 77 hydrogen fuel cell vehicles and 14 hydrogen refueling stations in real-world conditions around the country in 2007 to evaluate performance in different climates and usage patterns. The demonstration project is expected to grow to 130 hydrogen fuel cell vehicles and at least 18 hydrogen fueling stations in 2008. Individuals drive the hydrogen fuel cell vehicles as they would a gasoline vehicle, commuting to work or to the store, and fill up at hydrogen fueling stations. Using information from this demonstration project and from sophisticated modeling analyses, DOE officials and industry representatives reported that the initial deployment of hydrogen technologies in the market will most likely rely on technologies that do not require a new infrastructure. Specifically, they noted that natural gas—using steam reformation—will most likely remain the dominant source of hydrogen in the near to mid-term. They envisioned that small amounts of hydrogen extracted mostly from natural gas at multiple points distributed around the country would be sufficient to meet initial demand. In addition, this distributed approach requires less capital investment.
DOE officials and industry representatives noted that substantial changes to the infrastructure eventually will be needed not only to support large-scale production and delivery of hydrogen, but also to support multiple sources from which to extract hydrogen to minimize reliance on natural gas. As the demand for hydrogen grows, large centralized facilities for extracting hydrogen will be needed to take advantage of economies of scale. The centralized extraction of hydrogen will require deliveries over greater distances and, correspondingly, greater investments in the delivery infrastructure. Similarly, as the demand for hydrogen grows, there must be more stations where consumers can conveniently purchase hydrogen for vehicles or for stationary or portable applications. DOE has effectively solicited industry input and has worked to align R&D priorities, particularly for developing vehicle technologies. However, DOE has just begun to prioritize resources to develop stationary and portable technologies, which are much closer to being ready for commercial application and could play a role in laying the groundwork for vehicle technologies. Industry representatives acknowledge DOE’s efforts, but note that they are too new to evaluate. Nevertheless, industry representatives stated that DOE generally has managed and coordinated its hydrogen R&D resources well. Industry executives told us that DOE’s efforts to involve industry early in the planning stages and its ongoing efforts to solicit industry feedback on priorities have been effective in keeping the R&D agenda focused and headed in the right direction. Although industry representatives have sometimes disagreed about DOE’s priorities, they generally agreed that DOE has institutionalized processes to effectively solicit feedback from industry. Just as importantly, DOE officials noted that being a presidential initiative with congressional backing has helped Hydrogen Fuel Initiative managers to garner support from industry and within the federal government. DOE’s workshops in 2001 and 2002 involved industry and independent experts at the earliest stages of planning an R&D agenda and laid the groundwork for identifying market challenges and technical targets that could lead to the development and deployment of hydrogen and fuel cell technologies. The launch of the Hydrogen Fuel Initiative in 2004 accelerated hydrogen R&D efforts, resulting in a more detailed R&D agenda. DOE asked the National Research Council and the National Academy of Engineering to review this agenda and implemented 46 of the National Academies’ 48 recommendations. For example, DOE implemented a systems analysis and integration effort to (1) integrate R&D on hydrogen production, delivery, and storage and on fuel cells; (2) address safety codes and standards; (3) monitor progress toward technology targets; and (4) provide education on the benefits of and challenges to transitioning to hydrogen technologies. In addition, the initiative has facilitated ongoing communication with industry through annual merit reviews, workshops, technical teams, HTAC, and other coordination mechanisms. DOE’s annual merit review is a primary way to disseminate information and get feedback on the merit of its hydrogen and fuel cell R&D projects from industry, independent experts, and other DOE officials. The most recent review, held in May 2007, showcased approximately 300 projects, with the principal investigators presenting status and results.
Industry representatives stated that annual reviews are useful and have become a valuable tool for providing feedback to DOE on prioritizing the R&D agenda. DOE also has funded a number of workshops to solicit industry input on a range of topics, including fuel cells, education, and codes and standards. For example, DOE’s Office of Science conducted a workshop in May 2003 to identify the key areas where basic science R&D could contribute toward transitioning to hydrogen technologies. The workshop resulted in a report that has served the Office of Science as a guide for continued R&D efforts. In addition, in June 2007, DOE’s hydrogen storage program held a 1-day meeting to identify techniques for enhancing research on advanced hydrogen storage materials, with participants from industry, academia, and DOE’s national laboratories. Industry representatives stated that workshops are an important collaboration channel. To solicit industry feedback on the progress, priorities, and direction of the hydrogen R&D program, DOE established 11 technical teams responsible for reviewing R&D progress in specific technologies. These teams, co-chaired by industry and DOE, meet monthly and include industry representatives with the requisite expertise in hydrogen technologies. The technical teams exchange information and jointly review all projects at least once a year. For example, through one of the technical teams on fuel cells, industry provided information on optimal relative humidity when DOE began work on high-temperature fuel cells. The technical teams also provide an informal forum outside regular meetings for frequent exchanges among scientists. The National Academies noted the creation of technical teams as an important achievement, and industry representatives stated that the technical teams help transfer automakers’ requirements to the R&D portfolio. HTAC, made up of industry executives and outside experts, also provides advice to the Secretary of Energy on technical and programmatic issues related to DOE’s hydrogen R&D program. HTAC hosts periodic meetings, which DOE officials attend, to review budget status, discuss R&D plans, and propose changes. In its September 2007 report to the Secretary of Energy, HTAC recommended, among other things, that DOE elevate the role of hydrogen in the national energy portfolio. HTAC also expressed satisfaction with the DOE hydrogen R&D program’s use of best management practices, including peer review in its solicitation processes, assessment of technical progress, individual project selection and monitoring, and overall program management. DOE also obtains feedback from industry and academia through its Centers of Excellence. To facilitate storage R&D, DOE coordinated the creation of three Centers of Excellence to work on R&D in both applied and fundamental science. Each center is led by a DOE national laboratory and has about 15 industry and academic partners. In addition, a DOE program dedicated to commercialization efforts exchanges information with industry on DOE activities, including hydrogen R&D, and explores potential commercial development opportunities. Another program focused on market transformation works to build partnerships with industry and federal, state, and local governments to foster the early adoption of hydrogen and fuel cell technologies. Furthermore, DOE is active at the state and local level and participates in numerous organizations that bring together a range of groups to foster the development and deployment of hydrogen technology.
For example, DOE is involved in the California Fuel Cell Partnership, a group of auto, fuel, and fuel cell technology companies and government agencies working to deploy fuel cell vehicles on state roads. In response to industry feedback, DOE has shifted R&D priorities and expanded industry participation. For example, during the past decade, DOE funded R&D of on-board fuel processing, the concept of embedding equipment in a vehicle to generate hydrogen from a fuel source such as methanol. In 2004, DOE commissioned the National Renewable Energy Laboratory to convene an independent review panel to provide a technical go/no-go recommendation regarding on-board fuel processing R&D. The panel recommended a no-go decision, and DOE concurred. Automakers praised the decision, recognizing that on-board fuel processing R&D was too costly for a technology that did not appear to be viable by the target date. In addition, partly as a result of feedback from auto manufacturers, DOE expanded FreedomCAR in 2003 to include energy companies. The idea stemmed from the need to coordinate the development of vehicles with the fueling infrastructure, involving such major energy companies as ConocoPhillips, British Petroleum (BP), Shell, Chevron, and ExxonMobil. Through FreedomCAR, DOE, energy companies, and car companies conduct joint R&D planning and technical activities. Overall, although industry representatives reflected a wide variety of viewpoints on DOE’s priorities, they generally agreed that DOE had done a good job of soliciting input. Senior executives generally agreed that DOE’s processes for soliciting industry input and focusing R&D on key areas have been well organized. The National Hydrogen Association, an industry group, suggested that DOE’s efforts have turned out to be a good investment and praised the program’s technical goals and progress. USCAR representatives stated that DOE is placing the right emphasis on the key issues and that domestic automakers maintain a good relationship with DOE. Industry representatives note that because stationary and portable technologies may have more near-term market potential than vehicle technologies, they may be integral to resolving technical or infrastructure challenges and to developing the public acceptance necessary to deploy hydrogen nationally. According to industry representatives, stationary and portable research can benefit hydrogen technology development and maturation, particularly for fuel cell vehicles. For example, suppliers and manufacturers need near-term opportunities to remain in business and to improve manufacturing processes, which will eventually benefit fuel cell vehicles by creating a supply base and fostering innovation. An industry representative noted that parts suppliers otherwise may not survive until vehicle technologies are ready in 10 to 20 years. In addition, HTAC stated that increasing the level of R&D on portable and stationary power systems would reduce the technical and market risks associated with longer-term vehicle applications. Industry has expressed concerns that DOE has focused on developing vehicle technologies and has given less priority to stationary and portable technologies. At its May 2007 meeting, HTAC suggested that DOE has not focused enough on stationary and portable fuel cell R&D. Senior executives of companies told us they had urged DOE to focus more on demonstrating near-term stationary and portable technologies. The U.S.
Fuel Cell Council and the National Hydrogen Association also stated that stationary fuel cell research had been overlooked and underfunded. DOE noted that it had focused on vehicle R&D because of the significant energy savings in the transportation sector. Industry representatives stated that DOE has responded to industry’s input. Senior executives from industry told us that DOE’s support for stationary and portable R&D has grown substantially in the past year and that DOE has done a good job of incorporating this R&D into its program. In June 2007, to facilitate early adoption of hydrogen and fuel cell technologies, DOE sought input from industry, non-profit organizations, and local, state, and federal agencies to identify hydrogen and fuel cell applications in stationary and portable power. Such applications could include, for example, backup power installations for telecommunications providers and public schools designated as emergency shelters, warehouse lift-trucks currently employing battery or internal combustion systems, and portable fuel cells for battery-operated devices. DOE has also begun to emphasize near-term stationary and portable market applications by providing a grant opportunity for hydrogen and fuel cell systems manufacturing R&D focusing on technologies that are near commercialization. Industry representatives acknowledged DOE’s efforts but noted that these efforts are too new to evaluate because DOE had not devoted as many resources to them as it had to vehicle technologies. A representative from the National Hydrogen Association, however, stated that DOE’s recent emphasis on high-volume manufacturing is a good sign and could facilitate early market penetration of fuel cells. DOE’s interagency coordination efforts among working-level managers and scientists have been productive and useful, but coordination with senior officials at the policy level just began with the August 2007 establishment of the Interagency Task Force. At the working level, DOE has established several interagency bodies to facilitate cooperation and share knowledge—in particular, the Interagency Working Group on Hydrogen and Fuel Cells (IWG) has contributed to implementing hydrogen technology partnerships among DOE, DOT, and DOD and has created Web-based tools and joint workshops to facilitate coordination of research activities. At the policy level, however, the Interagency Task Force has not yet clearly defined its overall role and strategy, but members intend to formulate a plan by May 2008. Overall, working-level officials—program managers, analysts, engineers, and others who implement hydrogen R&D—at the federal agencies primarily involved in hydrogen-related activities generally told us they were satisfied with the level of interagency coordination. The primary coordination mechanism, the IWG, was created in 2003 and is jointly chaired by DOE and the Office of Science and Technology Policy. It provides a forum for coordinating interagency policy, programs, and activities related to safe, economical, and environmentally sound hydrogen and fuel cell technologies. The IWG meets monthly to help prioritize and coordinate the roughly $500 million portfolio of federal hydrogen and fuel cell R&D, part of which is funded by the Hydrogen Fuel Initiative.
In addition to DOE, the primary federal agencies involved in hydrogen R&D include the following: DOT’s hydrogen program, with approximately $1.4 million in annual Hydrogen Fuel Initiative funding, is focused on conducting the R&D and deployment activities necessary to safely and reliably prepare the transportation system for hydrogen technology use. Its activities include pipeline technology research aimed at developing methods to safely and efficiently transport hydrogen, codes and standards formulation to ensure an appropriate regulatory regime, and capacity planning to smooth operation of the transportation infrastructure. In addition, DOT has a separately funded $49 million bus demonstration program to facilitate the development of commercially viable fuel cell technologies in real-world environments. DOD receives no funding under the Hydrogen Fuel Initiative; however, it has several entities involved in hydrogen-related activities. For example, the Defense Logistics Agency has spent $11.7 million on a fuel-cell-powered forklift program and a solid hydrogen storage program, the Army supports a small amount of fuel cell R&D, and the Navy has deployed fuel cells at several installations and is conducting R&D in several areas, including R&D for unmanned underwater vehicles. NASA is the largest user of hydrogen in the United States, employing it as fuel for rocket launches. NASA conducts limited hydrogen-related R&D but is interested in coordinating with DOE on a proposed project to demonstrate stationary fuel cells to generate electricity at NASA’s White Sands Test Facility. The U.S. Postal Service conducted a 3-year hydrogen fuel cell demonstration program with mail delivery vehicles at test sites in Virginia and California. Plans are underway to continue the effort using the next generation of hydrogen vehicles in partnership with General Motors and DOE. In addition, the Postal Service is considering hydrogen technology as an option for the planned replacement of its fleet of about 215,000 vehicles in 2018. The Department of Commerce’s National Institute of Standards and Technology (NIST) is working with federal agencies and standards organizations on a variety of activities, including certification of hydrogen fuel dispensers, hydrogen quality standards, building safety standards, and pipeline safety standards. In partnership with DOE, NIST also is conducting manufacturing R&D and imaging research to investigate how water moves through fuel cells to better understand their operation. As the main interagency coordination vehicle, the IWG has contributed to implementing hydrogen technology partnerships among agencies and has created communication channels to coordinate R&D activities, such as ad hoc groups, joint workshops, and Web-based tools. In August 2007, DOE and NIST signed an interagency agreement to coordinate development of standards, test procedures, and test methods for hydrogen fuel purchase and delivery. DOE has also partnered with the Postal Service to field test fuel-cell-powered mail delivery trucks. In addition, recent IWG efforts to highlight near-term opportunities for federal agencies to procure commercially available hydrogen and fuel cell technologies have been successful. For example, the Defense Logistics Agency has announced plans to deploy over 70 fuel-cell-powered forklifts at three defense parts depots in the United States, an initiative that spurred additional cooperation with DOT.
Moreover, the Army is demonstrating mobile fuel-cell auxiliary power units, and the Navy has installed solid-oxide stationary fuel cells that supply power for shore facilities. Other IWG activities have resulted in the creation of ad hoc groups. As a result of a 2005 memorandum of understanding on hydrogen R&D, DOE and the Department of Agriculture established an Ad Hoc Committee on Biomass Production of Hydrogen, which meets just prior to regular IWG meetings and focuses on collaboration related to advancing hydrogen production from biomass and hydrogen-related agricultural applications. Also in 2005, as part of the IWG, DOT established an Ad Hoc Committee on a Regulatory Framework for the Hydrogen Economy that includes DOE, the Environmental Protection Agency, the U.S. Coast Guard, and the Department of Labor. The committee has developed a framework for the safe commercial application of hydrogen and fuel cell technologies. The IWG also facilitated the creation of joint workshops. In April 2005, DOE, DOD, NASA, and the National Science Foundation facilitated a session on small business innovation at the National Hydrogen Association’s annual meeting. That session featured success stories from several small business owners. DOE, NASA, and DOD held a workshop on modeling and simulating hydrogen combustion in February 2006. More recently, in August 2007, NIST and DOE participated in a conference on understanding potential impacts of delivering hydrogen through pipelines. The IWG also has created a publicly accessible Web site, which includes links to federal hydrogen-related activities, news, funding opportunities, and regulatory authorities to encourage collaboration among the public sector, private sector, academia, and the international scientific community. One tool available online is the regulatory authorities inventory, a DOT-led effort to create a single point of reference for stakeholders to view current U.S. statutes and regulations that may be applicable to hydrogen. DOE established the International Partnership for the Hydrogen Economy (IPHE) in 2003 to provide a working-level coordinating mechanism for more than a dozen partner countries to organize, coordinate, and implement international research, development, demonstration, and commercial utilization activities. IPHE also provides a forum for advancing common policies, technical codes, and standards, and it educates stakeholders on the benefits of, and challenges to, transitioning to hydrogen technologies. Although participation is voluntary, IPHE has contributed to international information exchange, facilitated engagement from senior-level officials, and influenced the creation of hydrogen technology road maps in China and other countries. In addition, DOE, DOD, and DOT are collaborating through the IPHE to standardize data collection for all hydrogen fuel vehicles and hydrogen-fueling demonstrations. While IPHE highlights its accomplishments, it also acknowledges room for improvement, for example, in better defining its role and developing performance metrics. DOT officials told us that while overall DOE has ably managed its hydrogen program, some areas of interagency coordination have been more effective than others. For example, DOT and the Defense Logistics Agency conduct joint R&D planning and information sharing, a successful relationship that grew out of the IWG.
However, DOT’s Pipeline R&D Program was not included in early discussions at DOE, hampering collaboration and communication on technology development. DOT officials acknowledged that they now are involved in these discussions but cited the importance of ensuring DOT representation at the onset of coordination efforts. To ensure appropriate authority inside each agency for making hydrogen-related budget and policy decisions, HTAC recommended in October 2006 that the IWG be elevated to require participation of an assistant secretary or higher. In response, DOE created the Interagency Task Force—a new entity composed of deputy assistant secretaries, program directors, and other senior officials—which held its inaugural meeting in August 2007. Because the organization was created recently, its membership is still in flux as the most appropriate participants are being identified. The goals of the task force are to increase understanding of available hydrogen and fuel cell technologies and how they can contribute to the agencies’ energy and environmental goals; work together to identify concrete opportunities for the federal government to provide leadership by being an early adopter; use government procurement and leadership to rapidly deploy technology and facilitate its introduction into the marketplace; and define new opportunities through interaction and exchange of ideas. Although the task force outlined this set of broad goals, it did not clearly define its responsibilities or a strategy for achieving them. Member agencies intend to develop, by May 2008, a more detailed plan that will guide efforts, identify actions that can be taken, and establish targets. The task force assigned the IWG responsibility for creating the plan and agreed to review each agency’s role, responsibilities, and stake in hydrogen technology at the IWG’s December 2007 meeting. In August 2007, HTAC criticized DOE for taking too long to respond to HTAC’s recommendation and for not securing the participation of assistant secretaries, participation that HTAC believes is necessary for making hydrogen budget and policy decisions. Similarly, DOT officials told us that the Interagency Task Force was supposed to be created specifically at the senior level so participants could influence budget and policy matters, but too many alternates were present at the first meeting, reducing its potential effectiveness. DOT officials added that if membership continues to shift or be inconsistent, the lack of continuity will hinder progress and make it difficult to achieve goals. DOE officials stated that the level of membership is adequate because deputy assistant secretaries, program directors, and other senior officials are senior enough to make decisions, influence policy, and affect the implementation of programs. Some task force members have expressed concerns about the lack of a common vision among agencies, including a shared view of timelines, milestones, and approaches, in part because of differing roles, responsibilities, and stakeholders and because no overarching authority guides all government hydrogen R&D. For example, although DOE has clearly outlined a 2015 technology readiness goal suitable for its mission, DOT may need to develop a regulatory framework earlier to address industry’s intent to begin deploying fuel cell vehicles as early as 2012. The Hydrogen Fuel Initiative has made important progress in developing hydrogen technologies in all of its technical areas, in both fundamental and applied science.
DOE and industry officials attribute this progress to DOE’s (1) planning process, which involved industry and university experts from the earliest stages; (2) use of annual merit reviews, technical teams, centers of excellence, and other coordination mechanisms to continually involve industry and university experts in reviewing the progress and direction of the program; (3) emphasis on both fundamental and applied science, as recommended by independent experts; and (4) continued focus on such high-priority areas as hydrogen storage and fuel cell cost and durability. Although DOE has made important R&D progress, its 2015 technology readiness target is very ambitious, requiring scientific breakthroughs in hydrogen storage, for example. Budget constraints and technical challenges have led DOE to push back its targets for providing certain technologies to automakers from 2015 to 2017 or later, which, according to DOE, generally still lies within the window for the automobile companies to provide hydrogen fuel cell vehicles by 2020. However, DOE has not updated its 2006 Hydrogen Posture Plan’s overall assessment of what the department reasonably expects to achieve by its technology readiness date in 2015 and how this updated assessment may differ from prior posture plans. DOE also has not identified the R&D funding needed to achieve the 2015 target. This information is important to the Congress and industry as they set priorities and make funding decisions. Furthermore, developing a nationwide commercial market for hydrogen fuel cell vehicles is expected to cost tens of billions of dollars for production facilities, fueling stations, pipelines, and other support infrastructure and to take decades to achieve, requiring a sustained investment by government and industry in R&D and the infrastructure. To accurately reflect progress made by the Hydrogen Fuel Initiative and the challenges it faces, we recommend that the Secretary of Energy update the Hydrogen Posture Plan’s overall assessment of what DOE reasonably expects to achieve by its technology readiness date in 2015, including how this updated assessment may differ from prior posture plans and a projection of anticipated R&D funding needs. We provided DOE with a draft of this report for its review and comment. In written comments, DOE agreed with our recommendation, stating that it plans to update the Hydrogen Posture Plan during 2008 to reflect the progress made and any changes to the activities, milestones, deliverables, and timeline. (See app. II.) However, DOE found the title of the draft report to be confusing, stating that R&D on hydrogen technologies would inevitably continue beyond 2015. In response, we revised the title to highlight the need for DOE to update what it expects to achieve by its 2015 target. DOE also disagreed with our statement that it has not determined what reasonably can be achieved by 2015 for use in a 2020 vehicle, citing extensive efforts to assess the R&D program’s progress. In response, we clarified that our concern is that the Hydrogen Posture Plan, which provides the Congress and other outside stakeholders with an assessment of progress, needs to be updated to identify what DOE reasonably expects to achieve by its technology readiness date in 2015. In addition, DOE provided comments to improve the draft report’s technical accuracy, which we have incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Energy, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or gaffiganm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To assess the extent to which the Department of Energy’s (DOE) Hydrogen Fuel Initiative has made progress in meeting its R&D targets, we reviewed documents and interviewed DOE program managers, national laboratory scientists, company and industry association executives, independent experts, and state government officials. More specifically, we reviewed DOE’s 2004 and 2006 Hydrogen Posture Plans and R&D project reports, attended DOE’s annual review of its projects in May 2007, and interviewed DOE hydrogen program managers and scientists at DOE’s National Renewable Energy Laboratory and Los Alamos National Laboratory. We also reviewed the R&D plans, technology roadmaps, assessments and reviews from each of DOE’s programs, including Energy Efficiency and Renewable Energy, Fossil Energy, Nuclear Energy, and Science, and from several of the technical teams that DOE established to review R&D progress in specific technologies. In addition, we spoke with members and attended meetings of the Hydrogen and Fuel Cell Technical Advisory Committee, interviewed industry representatives, and reviewed industry assessments of DOE’s progress in developing and demonstrating vehicle, stationary, and portable technologies. Furthermore, we reviewed reports of the National Academies of Sciences and Engineering on the hydrogen R&D program and spoke with cognizant officials. To determine the extent to which DOE has worked with industry to set and meet R&D targets, we reviewed pertinent documents, assessed DOE’s processes for soliciting industry input, and attended a meeting of the fuel cell technical team at Los Alamos National Laboratory. We also interviewed cognizant DOE managers and scientists and executives of car manufacturers, energy companies, utilities, hydrogen producers, fuel cell manufacturers, and suppliers of hydrogen-related components about DOE’s processes for soliciting industry input and we toured several industry facilities. To determine the extent to which DOE has worked with other federal agencies to develop and demonstrate hydrogen technologies, we reviewed pertinent documents and spoke with officials at DOE, the Department of Transportation, the Department of Defense, the Department of Commerce, the National Aeronautics and Space Administration, and the U.S. Postal Service. We also attended the Interagency Task Force’s first meeting in August 2007. We conducted this performance audit from March through December 2007 in accordance with generally accepted government auditing standards. 
These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Richard Cheston, Assistant Director; Robert Sanchez; Thomas Kingham; Marc Castellano; and Alison O’Neill made key contributions to this report. Also contributing to this report were Kevin Bray, Virginia Chanley, Patrick Gould, Anne Stevens, and Hai Tran.
The United States consumes more than 20 million barrels of oil each day, two-thirds of which is imported, leaving the nation vulnerable to rising prices. Oil combustion produces emissions linked to health problems and global warming. In January 2003, the administration announced a 5-year, $1.2 billion Hydrogen Fuel Initiative to perform research, development, and demonstration (R&D) for developing hydrogen fuel cells for use as a substitute for gasoline engines. Led by the Department of Energy (DOE), the initiative's goal is to develop the technologies by 2015 that will enable U.S. industry to make hydrogen-powered cars available to consumers by 2020. GAO examined the extent to which DOE has (1) made progress in meeting the initiative's targets, (2) worked with industry to set and meet targets, and (3) worked with other federal agencies to develop and demonstrate hydrogen technologies. GAO reviewed DOE's hydrogen R&D plans, attended DOE's annual review of each R&D project, and interviewed DOE managers, industry executives, and independent experts. DOE's hydrogen program has made important progress in all R&D areas, including both fundamental and applied science. Specifically, DOE has reduced the cost of producing hydrogen from natural gas, an important source of hydrogen through the next 20 years; developed a sophisticated model to identify and optimize major elements of a projected hydrogen delivery infrastructure; increased by 50 percent the storage capacity of hydrogen, a key element for increasing the driving range of vehicles; and reduced the cost and improved the durability of fuel cells. However, some of the most difficult technical challenges lie ahead, including finding a technology that can store enough hydrogen on board a vehicle to achieve a 300-mile driving range, reducing the cost of delivering hydrogen to consumers, and further reducing the cost and improving the durability of fuel cells. The difficulty of overcoming these technical challenges, as well as hydrogen R&D budget constraints, has led DOE to push back some of its interim target dates. However, DOE has not updated its 2006 Hydrogen Posture Plan's overall assessment of what the department reasonably expects to achieve by its technology readiness date in 2015 and how this may differ from previous posture plans. In addition, deploying the support infrastructure needed to commercialize hydrogen fuel-cell vehicles across the nation will require an investment of tens of billions of dollars over several decades after 2015. DOE has effectively involved industry in designing and reviewing its hydrogen R&D program and has worked to align its priorities with those of industry. Industry continues to review R&D progress through DOE's annual peer review of each project, technical teams co-chaired by DOE and industry, and R&D workshops. Industry representatives are satisfied with DOE's efforts, stating that DOE generally has managed its hydrogen R&D resources well. However, the industry representatives noted that DOE's emphasis on vehicle fuel cell technologies has left little funding for stationary or portable technologies that potentially could be commercialized before vehicles. In response, DOE recently increased its funding for stationary and portable R&D. DOE has worked effectively with hydrogen R&D managers and scientists in other federal agencies, but it is too early to evaluate collaboration among senior officials at the policy level. 
Agency managers are generally satisfied with the efforts of several interagency working groups to coordinate activities and facilitate scientific exchanges. At the policy level, in August 2007, DOE convened the inaugural meeting of an interagency task force, composed primarily of deputy assistant secretaries and program directors. The task force is developing plans to demonstrate and promote hydrogen technologies.
The passenger airline industry is primarily composed of network, low-cost, and regional airlines. Network airlines were in operation before the Airline Deregulation Act of 1978 and support large, complex hub-and-spoke operations with thousands of employees and hundreds of aircraft. These airlines provide service at various fare levels to a wide variety of domestic and international destinations. Although this study focuses primarily on domestic competition, network airlines also serve international destinations. By some estimates, nearly 40 percent of network airlines’ revenue is from international service, so domestic service is often aligned with their international networks. Low-cost airlines generally entered the market after deregulation and tend to operate less costly point-to-point service using fewer types of aircraft. Low-cost airlines are just beginning to serve international markets, mostly in the Caribbean and Latin America. Some airlines, like Allegiant Air and Spirit Airlines, are referred to as ultra-low-cost because they provide service, often to leisure destinations, at discount fares but with higher optional fees, such as for carry-on and checked baggage. Regional airlines operate smaller aircraft—turboprops or regional jets with up to 100 seats—and generally provide service to smaller communities under capacity purchase agreements with network airlines. Some regional airlines are owned by a network airline, while others are independent. Regional airlines operate about half of all domestic flights and carry about 22 percent of all airline passengers. We have previously found that the financial performance of the deregulated airline industry has been characterized by extremely volatile earnings. Despite periods of strong growth and earnings, some airlines have taken advantage of Chapter 11 bankruptcy protection to reorganize and address financial commitments and/or pursued mergers during times of substantial financial distress, although in some cases airlines have entered Chapter 7 bankruptcy proceedings to cease operations. Some analysts view the industry as inherently volatile due to key demand and cost characteristics that make it difficult for airlines to quickly reduce capacity in periods of declining demand. For example, airlines have high fixed costs and cannot quickly reduce either flight schedules or employment costs when demand for air travel slows—the latter due in part to commitments made within collective-bargaining agreements and other types of contracts and leases. As we have previously noted, the industry is also highly susceptible to external shocks that decrease demand, such as those caused by wars, terrorist attacks, health events such as the SARS epidemic, or fuel price volatility. The airline industry has experienced considerable merger and acquisition activity, especially following deregulation in 1978. Since 2000, economic pressures—including volatile fuel prices, the financial crisis, and the ensuing economic recession of 2007–2009—sparked a wave of consolidation across the airline industry. For instance, Delta acquired Northwest in 2008, United and Continental merged in 2010, Southwest acquired AirTran in 2011, and US Airways and American Airlines agreed to merge in 2013 and received U.S. District Court approval for the merger in April 2014. Figure 1 provides a timeline of mergers and acquisitions for the four largest surviving domestic airlines—American, Delta, Southwest, and United—based on the number of passengers served.
These four airlines accounted for approximately 85 percent of passenger traffic in the United States in 2013. A key financial benefit that airlines consider in a merger is the potential for increased revenues through additional demand (generated by more seamless travel to more destinations), increased market share, and higher fares on some routes. Airlines also consider cost reductions that may result from combining complementary assets, reducing or eliminating duplicative activities and operating costs, and reducing capacity when merging with or acquiring another airline. For example, the combined airlines may be able to reduce or eliminate duplicative service, labor, and operations costs or achieve operational efficiencies by integrating computer systems and similar airline fleets. The most recent wave of consolidation has raised new questions about the state of competition in the industry. Economic theory suggests that competition is strongest when there are many firms in a market and no firm has a substantial share of that market. By contrast, competition may be weaker when there are only a small number of firms because they may be able to exercise market power—in general terms, the ability to raise and maintain prices above those that would be set in a competitive market. However, if new firms are able to readily enter the market and effectively compete, they may mitigate the potential anti-competitive effects of a small number of incumbent firms, thus reducing the incumbent firms’ market power. The intensity of competition in a market is not solely driven by the number of firms or the ease of entry, however. In some cases, competition can be robust in a market with only a few firms, even when entry is difficult. Although recent mergers have reduced the total number of domestic airlines, consumers are less directly affected by changes at the national level than at the individual route level. Consumers purchase seats for air transportation from one city to another. As such, they are likely to be more concerned about the number of airlines serving any specific route. Thus, a “city-pair,” or traffic between two cities, is typically viewed as the basic relevant market for airline travel, including by DOJ, the agency charged with reviewing U.S. airline mergers. The relevant market in a competitive analysis is one in which the good sold by a set of firms is seen by consumers as having some degree of substitutability, such that if one firm were to raise its prices, some consumers would see the good available from other firms as a reasonable substitute and would choose to buy the good from those other firms. If a person wants to travel from Seattle to Detroit, for example, a ticket from Seattle-Tacoma International Airport to Washington Dulles International Airport would not be a substitute. When there is more than one airport in a metropolitan area for a consumer to choose from, however, the relevant market analysis could focus on an “airport-pair” instead of a city-pair. For example, there are two major airports in the Washington metropolitan area—Washington Dulles International Airport and Ronald Reagan Washington National Airport—and a third nearby airport in Baltimore. Some travelers planning to fly from Seattle to Washington, D.C., could view a ticket to Baltimore/Washington Thurgood Marshall International Airport as a reasonable substitute for a ticket to Ronald Reagan Washington National Airport. In addition, travel can occur through nonstop flights and connecting hubs.
While some travelers (mostly business travelers) may be willing to pay more for the convenience of nonstop flights and would view connecting flights as a poor substitute, others might weigh the potential extra cost of nonstop flights more heavily and choose a less expensive connecting option. A starting point for any assessment of competition in an industry is an evaluation of market structure characteristics, including market concentration and the number of effective competitors. These are relevant indicators of the potential degree of competition because, in the absence of new entry, having fewer competitors may lead to adverse competitive effects such as higher prices and reduced consumer choices. We have previously examined a number of these market structure characteristics, including:

• the average number of effective competitors in different segments of the market;
• the types of airlines, including the presence of network and low-cost airlines, in the market;
• airline market share of passengers at the route and airport level; and
• barriers to entry, including practices or conditions that may impede a firm's ability to enter a market.

A full competitive market analysis of the domestic airline industry—which we do not undertake in this report—would include a review of factors beyond solely market structure, including the likelihood that airlines would coordinate their behavior in terms of marketing or pricing, as well as the ease of entry that could negate market power. Additionally, in the case of a merger analysis, possible benefits related to the merger, such as enhanced innovation and economic efficiencies, would also be considered. Both DOJ and DOT play a role in reviewing airline mergers and acquisitions. DOJ principally uses the analytical framework established in the Horizontal Merger Guidelines to analyze whether a proposed merger or acquisition involving actual or potential competitors raises antitrust concerns—in other words, whether the proposal will likely create, enhance, or entrench market power or facilitate its exercise. As part of its analysis, DOJ uses the Herfindahl-Hirschman Index (HHI) to assess whether a merger is likely to significantly increase concentration and raise anti-competitive concerns in the markets (principally, city-pairs) in which airlines operate. Within the context of its air-carrier certification responsibilities, DOT also conducts analyses of the merits of any airline merger and acquisition and submits its views and relevant information in its possession to DOJ. DOT also provides some essential data—for example, the airlines' routes and passenger traffic—that DOJ uses in its review. Sustained airline profits since 2009 have bolstered the financial health of the U.S. passenger airline industry. Our analysis of the latest available financial data reported by airlines to DOT showed that the industry generated operating profits of approximately $21.7 billion from 2007 through 2012. Although the financial performance of individual airlines differed, network airlines as a whole generated operating profits of approximately $12 billion from 2007 through 2012, while low-cost airlines and regional airlines generated profits of approximately $6.1 billion and $3.6 billion, respectively, over the same period. This recovery follows operating losses of $5.6 billion for the U.S. passenger airline industry as a whole in 2008, due largely to the economic recession and volatility in the price of fuel. Figure 2 shows operating profits and losses for U.S. passenger airlines since 2007.
Recent efforts by certain airlines to return profits to shareholders are another indication of the industry's improved financial health since the economic recession of 2007–2009. For example, Delta Air Lines paid a quarterly dividend in 2013—its first since 2003—and plans to pay $1 billion in dividends to its shareholders over the next several years. The airline also announced a program to repurchase $500 million in shares of its stock by June 2016 and provided $506 million in profit-sharing bonuses for its employees in February 2014. Industry analysts we spoke with said that other network airlines would likely follow Delta and introduce dividends in the near term. Additionally, Southwest—previously the only airline paying a dividend—quadrupled its quarterly dividend in May 2013, increased its share buy-back program, and announced $228 million in annual profit-sharing with its employees in 2014, an increase from $121 million in 2013. We found that improved profitability has enabled airlines to strengthen their liquidity in recent years by building up their cash reserves. Liquidity levels are especially important in the airline industry because cash balances help the airlines withstand potential industry shocks, such as lower travel demand or more volatile fuel prices, as well as pay down debt and reduce the risk of bankruptcy. U.S. airlines as a whole have increased their cash reserves from approximately $8 billion in 2007 to approximately $13 billion in 2012. Network airlines have also generally reduced their long-term debt, and certain airlines have improved their credit position. Network airlines reduced long-term debt 3.7 percent (or approximately $1.2 billion) from 2007 to 2012, while low-cost airlines saw an increase in their long-term debt of 1.6 percent (or approximately $97 million) over this period. Debt reduction by network airlines has resulted in some improvement in credit profiles and credit rating upgrades for certain airlines. For example, in June 2013 Fitch Ratings Service revised its ratings outlook for Delta Air Lines from stable to positive, and in March 2014 upgraded the issuer default rating from B+ to BB-. Among low-cost airlines, Southwest Airlines remains the only airline with a credit rating that is considered investment grade, which indicates relatively low to moderate credit risk. Fitch affirmed the airline's rating at BBB in September 2013. Improved credit ratings help airlines lower the cost of capital by enabling them to obtain financing—including the refinancing of existing debt—at more advantageous terms. Credit rating analysts we spoke to emphasized, however, that the industry remains significantly leveraged with debt, which may negatively affect airlines' credit ratings. Growth in revenues has been a key driver in the U.S. airline industry's improved financial health and profitability. This growth has been aided by three factors: (1) an increase in passenger traffic; (2) capacity restraint (i.e., limiting the supply of available seats in relation to the level of demand), which has contributed to a rise in airfares; and (3) increased revenues from ancillary fees. Total operating revenues decreased by nearly $22 billion from 2008 to 2009 due largely to the recession, but have since exceeded pre-recession levels. The industry's operating revenues grew 29 percent from approximately $121 billion in 2009 to $156 billion in 2012 (see fig. 3).
During this period, network airline operating revenues increased 29 percent (from $92.5 billion to $120 billion), while operating revenues for low-cost airlines grew 43 percent (from $19 billion to $27 billion). Although airlines' operating revenues have increased in recent years, net profit margins for the industry remain lower than those for most other industries. According to an industry association, for example, operating profits for nine U.S. passenger airlines in 2013 were 4.9 percent of total operating revenues, as compared to the Standard & Poor's 500 Index industry average, which was twice that percentage. A recovery in domestic passenger traffic since 2009 has been a key factor in the growth in airline revenues and industry profitability. Total domestic airline passenger traffic, as measured by revenue passenger miles (RPMs) (i.e., one fare-paying passenger transported one mile), dropped about 8 percent from approximately 579 billion RPMs in 2007 to 532 billion RPMs in 2009, largely due to the economic recession, and recovered from 2009 through 2012 to approximately 575 billion RPMs. Restraint in airline capacity—as measured by the supply of available seat miles—has also contributed to industry profitability since 2007 by allowing increased revenues at lower costs. Until recently, it was common in the U.S. airline market for any reduction in capacity to be quickly replaced. For example, we have previously found that although one airline may reduce capacity or leave the market, capacity has tended to return relatively quickly through new airline entry or expansion by an existing airline. In fact, we found in 2008 that some U.S. airline industry recoveries stalled because airlines grew their capacity so quickly—either by adding additional flights or flying larger aircraft with more seats in an effort to gain market share—that their ability to charge profitable fares was undermined. This dynamic appears to have changed in recent years, however. Several industry experts told us that network airlines responded to high fuel prices and declining demand during the economic recession, as expected, by reducing the supply of available seats. For example, network airline domestic capacity decreased nearly 10 percent from 446 billion available seat miles in 2007 to 403 billion available seat miles in 2009. However, unlike after other industry downturns, network airlines have not responded to rising demand for air travel in the last few years by increasing capacity, as available seat miles essentially remained flat (a decline of about 1 percent) from 2009 through 2012, as shown in figure 4 below. Domestic capacity has remained flat while domestic RPMs have increased since 2009, contributing to an increase in unit revenues. Unit revenues rose for network and low-cost airlines from 2007 to 2008 and then fell from 2008 to 2009, largely due to the economic recession. From 2009 to 2012, unit revenues for both segments increased. Specifically, as shown in figure 5, over that 4-year period, network airlines' unit revenues increased 23 percent (from approximately $0.11 to $0.14 per available seat mile), while low-cost airline unit revenues rose approximately 27 percent (from approximately $0.10 to $0.13). As demand has increased, capacity restraint has resulted in higher airfares.
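The unit metrics cited above follow mechanically from the traffic and capacity totals that airlines report to DOT. A minimal sketch of how revenue per available seat mile (RASM) and load factor are derived, using rounded, hypothetical figures rather than the reported data underlying our analysis:

```python
# Illustrative only: derivation of the unit metrics used in this section.
# The revenue, ASM, and RPM figures below are hypothetical round numbers.

def load_factor(rpms: float, asms: float) -> float:
    """Share of available seat miles actually sold (RPMs divided by ASMs)."""
    return rpms / asms

def unit_revenue(operating_revenue: float, asms: float) -> float:
    """Revenue per available seat mile (RASM), in dollars."""
    return operating_revenue / asms

# Hypothetical airline-year: $120 billion in operating revenue spread over
# 850 billion available seat miles, of which 575 billion were sold (RPMs).
revenue, asms, rpms = 120e9, 850e9, 575e9

print(f"RASM:        ${unit_revenue(revenue, asms):.3f} per available seat mile")
print(f"load factor: {load_factor(rpms, asms):.1%}")
```

With ASMs essentially flat, growth in traffic (RPMs) raises load factor, and the accompanying revenue growth raises RASM, which is the pattern described above.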
Consistent with these trends, average one-way domestic fares not including taxes or other fees increased approximately 9 percent from $184.92 in 2007 to $201.00 in 2012 for network airlines, and approximately 17 percent from $117.37 to $137.00 for low-cost airlines. Network airlines have readjusted capacity to add available seats on more profitable routes, particularly those to international destinations. In 2007, approximately 63 percent of network airlines' available seat miles were domestic and 37 percent were international. In 2012, network airlines' international available seat miles represented 42 percent of their total capacity. Network airlines are shifting their focus to international routes, in part, because these routes are more profitable and in these markets they face less competition from low-cost airlines, which provide predominantly domestic service. In addition, as we found in 2008, international routes provide additional passenger flow and revenue because passengers often travel through network airlines' domestic networks to reach the departure airport for their international connection. Airline revenues have also been supplemented by the growth in ancillary fees for optional services. These include fees for services that were previously included in the price of airfare, such as checked bags, early boarding, seat selection, and meals, and for new services that were not previously available, such as Wi-Fi access and other entertainment options. In addition, Delta, United, and American have increased their ticket-change fees on nonrefundable tickets to as much as $200. According to industry experts, ancillary fees have been beneficial for airlines by enabling them to collect revenues that are related to the costs imposed by individual passengers, in contrast to the previous approach in which airlines spread the costs associated with these services equally across all travelers through fares, regardless of whether all passengers actually used the specific services. Ancillary fees comprise an increasing proportion of airline operating revenues, although the total amount is unclear because airlines are only required to report their checked bag and reservation change fees. In 2012, the U.S. airline industry generated approximately $6 billion in checked baggage and reservation change fees, up from approximately $1.4 billion in 2007. Revenues from checked baggage and reservation change fees reported by network airlines have grown from about 1 percent of total operating revenues in 2007 to approximately 4 percent in 2012. Checked baggage and reservation change fees collected by network airlines increased from approximately $1.2 billion in 2007 to $5.1 billion in 2012. Over the same period, checked baggage and reservation change fees reported by low-cost airlines increased from approximately $183 million (about 1 percent of total operating revenues) in 2007 to approximately $892 million (about 3 percent) in 2012. Ultra-low-cost airlines like Allegiant Air and Spirit Airlines that offer low fares are particularly reliant on ancillary fees. For example, revenues from checked baggage and change fees reported by Spirit Airlines grew from nearly 3 percent of total operating revenues in 2008 to almost 15 percent in 2012. Efforts by network airlines to reduce costs have also been a key factor in the improved financial performance of the U.S. airline industry. We have previously found that bankruptcy restructuring during the last decade played a key role in enabling network airlines to reduce costs.
The bankruptcy process enabled Delta Air Lines and American Airlines to cut their costs by negotiating contract and pay concessions from their labor unions and through restructuring and personnel reductions. Bankruptcy restructuring also allowed some large airlines to significantly reduce their pension expenses by terminating their pension obligations and shifting claims to the Pension Benefit Guaranty Corporation. Network airlines have also accomplished cost reductions by more efficiently managing capacity. As previously mentioned, there have been four mergers and acquisitions involving major airlines since 2007, including Delta-Northwest (2008), United-Continental (2010), Southwest-AirTran (2011), and American-US Airways (2013). These mergers and acquisitions allowed the airlines to achieve efficiencies by reducing redundant capacity and eliminating inefficient operations at hub airports. Prior to their merger, for example, Delta used Cincinnati as a hub for air traffic in the Midwest, while Northwest relied on Memphis as its hub in the Southeast. Through its merger with Northwest, however, Delta gained a more attractive hub for Midwestern traffic in Detroit to accompany its hub in Atlanta and subsequently downsized Cincinnati and Memphis as hubs in its network. Low-cost airlines have not achieved the same cost reductions since 2007 that network airlines have accomplished, and instead have experienced rising unit costs. For example, fuel costs rose for both network and low-cost airlines during the recent recession, and now comprise a greater percentage of airlines' operating costs. From 2007 through 2012, for example, fuel costs grew from 31 to 38 percent of operating costs for low-cost airlines, and from 26 to 29 percent of network airline operating costs. Much of this growth for low-cost airlines can be attributed to Southwest Airlines, the largest low-cost airline. Southwest's fuel costs grew from 30 percent of operating costs in 2007 to 37 percent in 2012. According to an industry analyst's report, the impact of higher fuel prices has been greater for low-cost airlines. This has occurred, in part, because low-cost airlines have reduced aircraft utilization, or the average number of hours that an aircraft is in flight in a 24-hour period. For example, higher fuel prices have made off-peak flying—i.e., flights that depart in the early morning or late evening carrying fewer passengers—less profitable and therefore less attractive for low-cost airlines; as these airlines cut back such flying, aircraft utilization fell and unit costs increased as a result. Non-fuel unit costs, measured as cost per available seat mile excluding fuel costs, have also steadily increased for low-cost airlines since 2007, while network airlines' non-fuel unit costs have only slightly increased. A 2008 academic study found that the non-fuel cost advantage (excluding fuel and transport expenses) low-cost airlines have had over network airlines narrowed from 2000 to 2006, and we found that this trend has continued through 2012, as shown in figure 6 below. Non-fuel unit costs for network airlines increased about 14 percent from approximately $0.08 per available seat mile in 2007 to $0.09 in 2012, while low-cost airline non-fuel unit costs rose nearly 24 percent from approximately $0.06 to $0.08.
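The cost-side metrics parallel the revenue-side ones: total operating cost per available seat mile (CASM) can be decomposed into fuel and non-fuel components. A minimal sketch with hypothetical inputs (the carrier, cost total, fuel share, and seat miles below are invented, not reported figures):

```python
# Illustrative decomposition of operating cost into fuel and non-fuel
# cost per available seat mile (CASM). All inputs are hypothetical.

def casm_split(total_cost: float, fuel_cost: float, asms: float):
    """Return (total CASM, fuel CASM, non-fuel CASM) in dollars per ASM."""
    return total_cost / asms, fuel_cost / asms, (total_cost - fuel_cost) / asms

# Hypothetical low-cost carrier: $10 billion in operating cost, of which
# 38 percent is fuel, spread over 90 billion available seat miles.
total_cost, fuel_share, asms = 10e9, 0.38, 90e9
total_casm, fuel_casm, nonfuel_casm = casm_split(total_cost, total_cost * fuel_share, asms)

print(f"fuel share of operating cost: {fuel_share:.0%}")
print(f"total CASM:    ${total_casm:.3f} per available seat mile")
print(f"fuel CASM:     ${fuel_casm:.3f}")
print(f"non-fuel CASM: ${nonfuel_casm:.3f}")
```

It is the last of these numbers, non-fuel CASM, that figure 6 tracks for network and low-cost airlines.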
An industry analyst report attributes the increase in low-cost airlines' non-fuel unit costs to the effects of the recent recession, which, by slowing low-cost airline growth, led to increased average compensation and maintenance costs for low-cost airlines as their fleets and workforce have matured. Although the gap between network and low-cost airlines' non-fuel unit costs has narrowed since 2007, some academic experts point to a structural gap in costs between network and low-cost airlines that is unlikely to narrow further, as the costs associated with the extensive networks and air-transportation service that network airlines provide are inherently greater than those for low-cost airline service. Since 2007, there has been little change in the average number of competitors in the most heavily traveled domestic markets. In addition, the markets serving the most passengers were less concentrated than the markets serving the fewest passengers. These results do not factor in market changes from the 2013 merger between American Airlines and US Airways, but they do account for some of the market changes that may have resulted from the other three mergers completed from 2008 to 2011. The effect of airline mergers on the structure of individual city-pair markets may not be immediate, as it can take years for merging airlines to fully integrate. Fewer competitors might have been expected in some markets as a result of the merger activity, and although we did find fewer competitors in some markets, in other markets we found that the number of competitors actually increased. The latter results may have occurred in part due to growth in network size and new connections created since the mergers. In addition, we found that since 2007, low-cost airlines have expanded into the largest passenger markets, adding new competitors in some markets where mergers may have reduced competition. To perform our market structure analysis, we used DOT's Origin and Destination Survey data, which is a 10-percent quarterly sample of all airline tickets sold. We assessed approximately 91,000 U.S. markets with passenger traffic each year from 2007 through 2012, a period during which several mergers were completed. We filtered the data to include only those markets with at least 520 one-way passengers or 1,040 round-trip passengers because markets with fewer passengers would be too small to ensure statistical accuracy. We also excluded markets in Alaska and Hawaii. This filter removed 6 percent of the passengers from the full dataset. We primarily used the city-pair market as our unit of analysis, meaning that travel between two metropolitan areas is the relevant market. For each of the 6 years, we then categorized the markets into quintiles based on the total number of passengers in our sample, so that for every year each quintile contained approximately 20 percent of the total passengers for that year. However, because passenger traffic is not evenly distributed across all city-pair markets, the corresponding number of city-pair markets in each quintile differs substantially (see table 1). Because certain routes carry many more passengers than others, the first quintile includes the most heavily traveled city-pair markets, while the fifth quintile includes the least-traveled city-pair markets in the sample.
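A simplified sketch of this quintile construction follows. The market names and passenger counts are invented for illustration; the actual analysis used the filtered 10-percent ticket sample described above:

```python
# Sketch of the passenger-weighted quintile construction described above:
# markets are sorted by passenger volume and grouped so that each quintile
# holds roughly 20 percent of total passengers, not 20 percent of markets.

MIN_PASSENGERS = 520  # sample filter: at least 520 one-way passengers

def passenger_quintiles(markets: dict[str, int], n_bins: int = 5):
    """markets maps a city-pair name to its annual sampled passengers."""
    kept = {m: p for m, p in markets.items() if p >= MIN_PASSENGERS}
    total = sum(kept.values())
    bins, cum, current = [[] for _ in range(n_bins)], 0, 0
    # Largest markets first, so quintile 1 holds the most heavily traveled.
    for market, pax in sorted(kept.items(), key=lambda kv: -kv[1]):
        while cum >= total * (current + 1) / n_bins and current < n_bins - 1:
            current += 1
        bins[current].append(market)
        cum += pax
    return bins

# Invented markets; with so few of them, some quintiles come out empty,
# which does not happen with the roughly 91,000 markets in the real sample.
sample = {"NYC-LAX": 3_000_000, "DCA-BOS": 2_500_000, "SLC-MEM": 40_000,
          "PIT-BGR": 900, "BOI-BZN": 600, "XYZ-ABC": 100}
for i, quintile in enumerate(passenger_quintiles(sample), start=1):
    print(f"quintile {i}: {quintile}")
```

Because a handful of very large markets absorb 20 percent of all passengers on their own, the first quintile ends up containing only a few dozen city-pairs while the fifth contains thousands, which is the pattern table 1 shows.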
In 2012, for instance, about 83 million passengers—or about 20 percent of the approximately 411 million passengers in our sample—traveled in 37 city-pair markets in the first quintile. We refer to the markets in the first quintile as the largest markets. Examples include New York to Los Angeles and Washington, D.C. to Boston. Conversely, approximately 82 million passengers in the fifth quintile spread their flying across more than 9,300 markets, meaning that these city-pair markets are some of the least-traveled domestic routes. Likewise, we refer to the markets in the fifth quintile as the smallest markets. Examples include Pittsburgh, Pennsylvania to Bangor, Maine, and Spokane, Washington to Billings, Montana. We found that there has been little change in the average number of effective competitors across the markets in our analysis from 2007 through 2012. For example, during this period, the average number of effective competitors each year ranged from 4.3 to 4.5 in the markets represented in the first quintile (see fig. 7). The average number of effective competitors in the markets represented in the second quintile increased slightly from 3.7 in 2007 to 3.9 in 2012. On the other end of the spectrum, there has been a small decrease in the average number of effective competitors serving the smallest markets represented in the fifth quintile. Specifically, the average number of competitors fell from 3.3 to 3 between 2007 and 2012 in these small markets. See appendix II for the full results of our analysis. Across all city-pair markets in our sample, we also observed a small increase in the percentage of dominated markets—in which one airline has at least 50 percent of all passenger traffic—but at the same time, a decrease in the percentage of monopoly markets, which are markets with only one provider. In 2007, approximately 72 percent of all city-pair markets were dominated markets; by 2012, about 77 percent of all markets were dominated. Consequently, while the average city-pair market in each quintile has between 3 and 4.5 effective competitors, as shown in the figure above, more than three-quarters of markets are dominated by a single airline. Although there were more dominated markets by the end of 2012, further analysis shows that the number of monopoly markets decreased from 1,712 in 2007 to 1,566 in 2012 (approximately a 9 percent decrease). Overall, we found that nearly all of the monopoly markets were the least-traveled markets, which is not surprising as markets with lower demand would be less likely to support more than one airline. We note, however, that measures of concentration may not fully reflect the competitive significance of firms in the market, or the extent to which other factors—such as entry conditions—might also influence the extent of competition in the market. We found that the markets serving the most passengers were less concentrated than the markets serving the fewest passengers. Moreover, there was a slight reduction in concentration in the most heavily traveled markets represented in the first quintile from 2007 through 2012 (see fig. 8). The notable exception to that trend is the slight increase in concentration in those markets beginning in 2011. This corresponds to the slight decrease in effective competitors in those markets during this same time period and may represent the effect of recent airline mergers, as consolidation has reduced the number of competitors overall.
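The measures used in this analysis can be computed mechanically from each market's passenger shares. A minimal sketch follows; the market data are hypothetical, and the 5-percent share threshold used here to count effective competitors is an illustrative assumption rather than the report's exact definition:

```python
# Illustrative market-structure measures for a single city-pair market:
# HHI (sum of squared percentage shares; 10,000 indicates a monopoly),
# a competitor count, and the dominated/monopoly flags used above.

DOMINANCE_THRESHOLD = 0.50   # one airline with at least 50 percent of traffic
EFFECTIVE_THRESHOLD = 0.05   # assumed cutoff for an "effective" competitor

def market_stats(passengers_by_airline: dict[str, int]):
    total = sum(passengers_by_airline.values())
    shares = [p / total for p in passengers_by_airline.values()]
    hhi = sum((100 * s) ** 2 for s in shares)
    effective = sum(1 for s in shares if s >= EFFECTIVE_THRESHOLD)
    dominated = max(shares) >= DOMINANCE_THRESHOLD
    monopoly = len(shares) == 1
    return hhi, effective, dominated, monopoly

# Hypothetical fifth-quintile market with one airline holding 80 percent.
market = {"Airline A": 8_000, "Airline B": 1_500, "Airline C": 500}
hhi, effective, dominated, monopoly = market_stats(market)
print(f"HHI: {hhi:,.0f}  effective competitors: {effective}  "
      f"dominated: {dominated}  monopoly: {monopoly}")
```

For these shares (80, 15, and 5 percent) the HHI is 6,650, well above the 2,500 level that DOJ's Horizontal Merger Guidelines treat as highly concentrated.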
In the smallest passenger markets represented in the fifth quintile, market concentration as measured by the HHI has increased from 2009 through 2012. Because our analysis of effective competitors and market concentration is an average over a large number of markets, substantial changes that have occurred in some markets since 2007 may be obscured. Several examples help illuminate some of the changes in the number of competitors in certain markets:

• New York City (JFK) to Los Angeles (LAX): The number of effective competitors offering direct or connecting service in this first-quintile market increased from three to five between 2007 and 2012, as new low-cost airlines entered this market. In 2007, this market was dominated by one airline, but by 2012 no airline had more than 28 percent of the total passenger traffic.

• Salt Lake City (SLC) to Memphis (MEM): The number of effective competitors offering direct or connecting service in this fifth-quintile market fell from six to two between 2007 and 2012, in part because of airline consolidation and changing airline business strategies, such as decisions to reduce service to former hubs. In 2007, no airline had more than 44 percent of the market share; by 2012, however, Delta Air Lines had over 80 percent of the market, reflecting a high degree of concentration.

• Boise, Idaho (BOI) to Bozeman, Montana (BZN): The number of effective competitors offering direct or connecting service in this fifth-quintile market fell from three to one from 2007 to 2012.

Despite greater consolidation in the U.S. airline industry and restraint on the part of airlines in managing capacity, two factors may help explain why many markets maintained approximately the same number of effective competitors:

• Mergers created new connections: When two airlines merge and combine their networks, the merged airline can connect consumers to more destinations within its network than previously possible. One rationale given for the mergers between Delta Air Lines and Northwest Airlines and between United Airlines and Continental Airlines was the greater scope and scale of the combined network. We found in 2010 that merging two networks expands choice by increasing the number of possible routings served by a network, as well as the number of passengers who can be served, and the ways that they can be served. For example, we found in 2010 that the combination of United and Continental created a new effective competitor in 173 markets affecting 9.5 million people. Before the merger, for instance, United provided service to Hector International Airport in Fargo, North Dakota, and Continental provided service to Rick Husband Amarillo International Airport in Amarillo, Texas, but there was no connection between these two communities on either United or Continental. Beginning in 2012, the new United Airlines began providing connecting service via Denver International Airport.

• Low-cost airlines have expanded into new markets: Based on our analysis, we found that low-cost airlines expanded most rapidly into the largest passenger markets between 2007 and 2012. For example, in 2007 there was an average of 1.7 low-cost airlines in the largest passenger markets represented in the first quintile of our analysis, but by 2012 there were 2.3 low-cost airlines on average in those markets. Low-cost airline entry, for example, provided new competitors in the New York-to-Los Angeles market.
In the smallest passenger markets, i.e., the fifth quintile, the number of low-cost airlines has essentially remained flat between 2007 and 2012. Additional changes to the structure of the market may occur after the three recent airline mergers are fully implemented and conditions for approving the fourth and most recent merger are fully met, as well as due to other economic circumstances. We have found that it can take some time for airlines to merge their operations, technologies, and labor forces. For instance, in 2013 we found that United struggled to integrate computer and reservation systems following its merger with Continental in 2010. Also, pursuant to an agreement with the states that had joined the DOJ action to enjoin the proposed merger between American and US Airways, the new American Airlines agreed to keep seven current hubs for a period of 3 years. Those hubs include Charlotte Douglas International Airport, Chicago O'Hare International Airport, Los Angeles International Airport, Miami International Airport, John F. Kennedy International Airport, Philadelphia International Airport, and Phoenix Sky Harbor International Airport. However, the airline's business strategy could change in the future. For instance, even though in 2010 the state of Ohio and United signed a similar agreement that guaranteed hub-level service at Cleveland Hopkins International Airport, United recently announced that it would no longer be using that airport as a hub. We also evaluated trends in the number of effective competitors and concentration in terms of the distance of the market. We found that longer-distance markets (greater than 1,000 miles) continue to have more competitors than shorter-distance markets (less than 250 miles) and that the average number of effective competitors from 2007 through 2012 has changed little in each distance category. For example, we found that in 2012 there was an average of 4.1 competitors in markets longer than 1,000 miles, compared to only 3.2 in markets shorter than 250 miles. Based on the HHI, we also found that longer-distance markets are generally less concentrated than shorter-distance markets. The difference exists in large part because longer-distance markets have more viable options for connecting passengers over more hubs. For example, a passenger on a flight from Richmond, Virginia to Salt Lake City, Utah—a distance of about 2,000 miles—could not fly directly, but would have multiple connecting options, including through Hartsfield-Jackson Atlanta International, Chicago O'Hare International, and Dallas/Fort Worth International Airports. By comparison, a passenger from Seattle to Portland, Oregon—a distance of just under 300 miles—has no viable connecting options, nor would connections be as attractive to passengers in short-haul markets. We also found that the number of airlines with a dominant position—carrying at least 50 percent of all domestic passenger traffic—at the largest airports in the U.S. is relatively unchanged from 2007 through 2012. For example, 13 of the 29 large-hub airports in 2012 were dominated by a single airline, up from 12 in 2007. The majority of the large-hub airports were dominated by network airlines and at some of these airports, the dominant airline increased its market share. For example, Delta Air Lines increased its proportion of passenger traffic at Hartsfield-Jackson Atlanta International Airport from about 53 percent in 2007 to nearly 62 percent in 2012.
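Connecting service underlies two of the findings above: merged networks can create new one-stop options between cities neither airline linked before, and longer-distance markets support more effective competitors because passengers can connect over more hubs. A minimal sketch of that network effect follows; the route fragments are invented for illustration and do not reflect actual pre- or post-merger schedules:

```python
# Sketch: combining two route networks can connect city pairs that neither
# airline served on its own, even counting one-stop itineraries.

def one_stop_pairs(routes: set[frozenset[str]]) -> set[frozenset[str]]:
    """All city pairs reachable nonstop or with one connection."""
    airports = {a for r in routes for a in r}
    reachable = set(routes)
    for hub in airports:
        # Endpoints reachable nonstop from this potential connecting hub.
        spokes = [next(iter(r - {hub})) for r in routes if hub in r]
        for i, a in enumerate(spokes):
            for b in spokes[i + 1:]:
                if a != b:
                    reachable.add(frozenset({a, b}))
    return reachable

# Invented route fragments loosely echoing the Fargo/Amarillo example above.
airline_a = {frozenset(p) for p in [("DEN", "FAR"), ("DEN", "ORD")]}
airline_b = {frozenset(p) for p in [("DEN", "AMA"), ("IAH", "AMA")]}

fargo_amarillo = frozenset({"FAR", "AMA"})
print(fargo_amarillo in one_stop_pairs(airline_a))              # False
print(fargo_amarillo in one_stop_pairs(airline_a | airline_b))  # True, via DEN
```

Neither invented network reaches Fargo-Amarillo on its own, but the combined network does via the shared Denver hub, which is the mechanism by which mergers added effective competitors in some markets.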
By contrast, American Airlines' dominant position at Dallas/Fort Worth International Airport declined from about 70 percent to 65 percent from 2007 through 2012, owing to multiple factors, including the entry of low-cost airlines at that airport. Low-cost airlines were dominant at two large-hub airports in 2012. For example, Southwest Airlines increased its dominant position at Chicago Midway International Airport from 74 percent of passenger traffic to 85 percent from 2007 through 2012. Airlines also increased their dominant position at medium-hub airports. In 2012, 15 of the 35 medium-hub airports were dominated by a single airline, up from 11 in 2007, and most of these airports were dominated by low-cost airlines. Nineteen of the 74 small-hub airports in 2012 were dominated by a single airline, up from 13 in 2007, and most of these airports were also dominated by low-cost airlines. Generally, the average market share of the largest airline operating at the nation's large- and medium-hub airports has not changed substantially since 2007. We found that since 2007, the average market share of the largest airline at the 29 large-hub domestic airports increased about 8.5 percent overall, from about 43 percent to just over 46 percent of passenger traffic. Similarly, on average, the largest airline at the nation's 35 medium-hub airports held about 43 percent of the passenger traffic in 2007 and just over 46 percent in 2012. The average market share of the largest airline at the 74 small-hub airports grew the most, from approximately 28 percent in 2007 to over 45 percent in 2012. Using the HHI measure of concentration, we similarly found that larger airports have on average become less concentrated, while smaller airports have become on average slightly more concentrated during the same time period. As the economy has been slowly recovering from the recent recession, demand for air travel has also been recovering. As noted previously, increased demand, along with airline capacity restraint, has contributed to higher fares. We found that consumers paid approximately 4 percent more in real terms, on average, for air travel in 2012 than they did in 2007. For instance, according to DOT, in 2007 the average one-way, inflation-adjusted domestic fare was $182.72, and in 2012 it was $190.10. A recent study found that average one-way, inflation-adjusted airfares increased the most at medium-hub airports, and to a lesser extent at large- and small-hub airports, from 2007 through 2012. Fares include only the price paid for the ticket purchase and do not include taxes or other fees, such as baggage fees. Specifically, average fares at medium-hub airports, which also experienced the greatest capacity cuts, increased nearly 12 percent, whereas they increased by 8.7 percent on average at large-hub airports and 5.7 percent at small-hub airports over the 6-year period. Fares have continued to rise since 2012. According to DOT, the average domestic airfare increased 5 percent from the third quarter of 2012 to the third quarter of 2013, the latest time period for which data were available. Two factors likely have contributed to higher average airfares from 2007 through 2012:

• Capacity restraint: As discussed above, beginning in 2007, network airlines reduced domestic capacity in response to challenging economic conditions, and since 2009, available seat miles have not rebounded despite increased demand for air travel. As a result, domestic airlines have been flying fuller flights.
According to well-established principles of supply and demand, a reduction in supply with constant or increasing demand will typically lead to higher prices. Medium-hub airports, which have lost the most service, have also seen the greatest airfare increases. In particular, several academic and research experts we spoke to said that airlines are now managing their growth carefully in an attempt to reduce costs and raise yields, which are the average fares paid per passenger mile. Additionally, according to several network airline representatives, airlines are prioritizing high-yield markets. That is, instead of operating at pre-recession levels throughout their networks, network airlines are allocating capacity across markets in order to maintain more capacity on the most profitable routes and limit capacity in markets that are less profitable. According to several academic and research experts we spoke with, the reduction in the number of network airlines as a result of consolidation has made it easier for the remaining airlines to maintain this strategy.

• Low-cost airlines are exerting less pressure on fares: While low-cost airlines continue to offer lower fares on average than network airlines, recent trends suggest that the fare-reducing effect of entry by the largest low-cost airline in certain markets may be waning. Typically, this phenomenon, which has been referred to as the "Southwest effect," occurs when a low-cost airline enters or is present in a market and offers lower fares than incumbent airlines, which in turn causes those incumbent competitors to respond by lowering prices in that market. These lower fares may also stimulate new demand and additional traffic. However, a recent Massachusetts Institute of Technology (MIT) study found that Southwest Airlines no longer seems to have the price-disciplining effect it once had. From 2007 through 2012, according to the study, fares increased the most at three airports where a significant percentage of flights were operated by Southwest: Chicago Midway International Airport, Love Field in Dallas, and William P. Hobby Airport in Houston. Since capacity changes at these airports were relatively low, the study suggested that Southwest had demonstrated a widespread pattern of fare increases, but noted that two of the airports saw large increases in average passenger itinerary distance as Southwest expanded the types of markets served from William P. Hobby Airport and Love Field from 2007 through 2012, a change that could explain the higher fares. Nevertheless, the MIT study also noted that average fares increased 23 percent at Chicago Midway International Airport despite a negligible change in passenger itinerary distance. Moreover, Southwest's strategy has evolved since 2007 as the airline has started to move into larger airports and business markets, thereby contributing to an increase in its costs and average fares.

Another trend affecting consumers is the widespread and increasing use of ancillary fees by airlines. As previously discussed, airlines have imposed a variety of ancillary fees on a range of optional services, such as checked and carry-on bags, meals, blankets, early boarding, and seat selection. Many airlines rely on these ancillary fees as a substantial portion of their operating revenues. "Unbundling" airfares through the use of ancillary fees can be advantageous from the airlines' perspective by allowing them to better differentiate their products, boost revenue, and build passenger loyalty.
Ancillary fees enable airlines to collect revenues in a manner that, in some cases, more closely matches passengers' use of airline services to the costs of providing those services. For example, providing service for checked bags is costly for airlines, but only certain customers use the service, so charging for checked bags imposes those costs only on those who choose to use the service. Ancillary fees may also be used as a means to differentiate among passengers and gain more revenues by charging for amenities that some customers may value more highly—and are more willing to pay for—than other customers, even though the cost of providing the amenity may be negligible. For example, by charging a fee to choose a more desirable seat on the aircraft, airlines are able to earn more revenue by providing an enhanced product offering to certain consumers, even though, in this case, the cost of providing the more highly valued seat is negligible. For certain consumers, the ability to pay for particular services they desire, such as Wi-Fi or in-flight entertainment, may represent a positive development. For other consumers, however, certain ancillary fees may not seem truly optional and may increase the overall cost of flying. For example, a family of five traveling on vacation may pay, in addition to the base airfare, in excess of $100 for checked bags that they could not carry on board. A recent study that investigated the impact of bag fees on airfares between 2008 and 2009 found that when airlines introduced bag fees in 2008, fares fell by about 3 percent, but the total cost of travel was higher for passengers who checked bags. According to our analysis of DOT data, the U.S. airline industry collected nearly $6 billion in baggage fees and reservation cancellation charges in 2012, but, as noted above, the total ancillary revenue collected from passengers is unknown because other ancillary fees are not reported separately to DOT. Moreover, consumers may not have full information about the true cost of air travel at the time they purchase their ticket. We previously found that information about ancillary fees is not fully disclosed through all ticket distribution channels used by consumers, making it difficult for them to compare the total cost of flights offered by different airlines. This issue is discussed further in the next section of this report. U.S. airlines, in particular network airlines, have reduced the number of flights they offer passengers in certain markets. For instance, according to our analysis of DOT data, about 1.2 million scheduled domestic flights were eliminated from 2007 through 2013 at large-, medium-, and small-hub airports and at nonhub airports. Scheduled departures at medium-hub airports decreased nearly 24 percent between 2007 and 2013, compared to a decrease of about 9 percent at large-hub airports and about 20 percent at small-hub airports over the same time period (see fig. 9). Medium-hub airports also experienced the greatest percentage reduction in air service as measured by available seats. As we discussed previously, mergers—which have allowed airlines to reduce redundant capacity and eliminate hub airports—and capacity restraint have resulted in a reduction of flights across the country. In addition, we recently found that air service to small communities has declined since 2007 due, in part, to higher fuel costs, consolidation, and reduced demand from declining populations and as a result of some passengers opting to drive to larger markets with more attractive service (i.e.,
larger airports in larger cities). A recent MIT study on domestic air service trends reported similar results and found that the prolonged economic downturn, high fuel prices, and capacity restraint contributed to a reduction in service. The study concluded that airlines have been consolidating service at the nation's largest airports, while cutting back on service to medium- and small-hub airports. We previously found that the percentage of flights that are canceled or diverted has been higher at airports in small rural communities than in large metropolitan areas. One side effect of this trend is long travel delays. According to one academic study, the overall delay time in 2010 for passengers on canceled flights was about 5 hours. This effect is further exacerbated by the increase in domestic passenger load factors from 2007 through 2012 (see fig. 10 below). Flight disruptions, including delays and cancellations, are costly for passengers, airlines, and the economy. In recent years, roughly a quarter of all commercial flights have been delayed or canceled. Given that most flights in recent years tend to have fewer empty seats available, passengers on delayed or canceled flights often have limited opportunities to rebook on other flights, amplifying the disruptions and associated costs. These disruptions may be particularly challenging for smaller communities that have infrequent service. Reduced service at certain airports can be attributed to several factors, including:

• Elimination of hubs: Merging airlines expect to rationalize their combined networks, including hub locations, over time, in order to achieve economies of scale and reduce inefficiencies. For example, in 2010 we found that the combined United and Continental Airlines would be unlikely to retain eight domestic hubs, especially given the considerable overlap between markets served by United out of Chicago and Continental out of Cleveland. On February 1, 2014, United officially announced that it was substantially reducing operations at Cleveland Hopkins International Airport, citing lower demand at that airport. Similarly, following its merger with Northwest Airlines, Delta has substantially reduced operations through Memphis International Airport, which had been a hub for Northwest and is located near Hartsfield-Jackson Atlanta International Airport, Delta's largest hub. Airline strategies that reduce or limit service to certain airports can have consequences for the local communities. For instance, losing connectivity to major domestic and international markets may reduce the vitality of the local economy. In addition, fewer flights can make it more difficult for airports to cover the costs of their infrastructure.

• Less frequent flights and "up-gauging" aircraft: As discussed above, in some instances, airlines have reduced the frequency of flights on certain routes that are less profitable. Instead of flying multiple daily flights to certain airports on smaller regional aircraft, airlines are flying less frequently but using larger aircraft (referred to as "up-gauging" service) and routing that traffic to large-hub airports. As shown above in figure 9, the percentage reduction in the number of flights exceeds the reduction of available seats from 2007 through 2012, particularly for smaller airports. In other instances, airlines may be eliminating flights altogether on some routes that used smaller planes.
According to one airline executive we spoke with, up-gauging may be less convenient for consumers who value frequent flights, but it can be beneficial if the consumer seeks to connect through major hub airports. It can also reduce congestion, leading to fewer flight delays. Our analysis of DOT data shows that the average number of seats per flight has increased slightly for all airports in the country, with the trend of up-gauging most notable at medium- and small-hub airports (see fig. 11). Additionally, reduced service at certain airports has resulted in lost connectivity to the air transportation network for some small communities. According to an MIT study, 23 airports in small communities lost all service between 2007 and 2012. The study found that network airline service at some of the smaller airports was quickly replaced by service from ultra-low-cost airlines like Allegiant Air and Spirit Airlines. According to the study, for instance, after US Airways and Northwest Airlines ended service from Arnold Palmer Regional Airport in Latrobe, Pennsylvania, Spirit Airlines entered the airport to provide periodic nonstop service primarily to leisure destinations in the Southeast. U.S. airlines are seeking to differentiate the products they offer by enhancing the travel experience and building customer loyalty to a specific airline, rather than allowing air travel to be viewed as a commodity. Airlines are increasingly competing on service by investing in technology to enhance their websites, upgrading their fleets and airport lounges, and providing the types of services and on-board amenities that consumers may value. Network airlines are marketing their ability to offer travelers access to more global destinations through expanded networks. Network and low-cost airlines are also purchasing new airplanes, as evidenced by new aircraft orders in 2013. Higher fuel prices are driving the demand for newer, more fuel-efficient aircraft, in addition to U.S. airlines' desire to replace older fleets. Passengers may benefit from these new planes because they are quieter and offer enhanced entertainment options and other in-flight amenities. Some airlines, for example, offer flat-bed seats, premium economy seats, faster Wi-Fi, and larger overhead bins. For certain passengers, some airlines are introducing premium services such as limousine pick-up at the gate. Some airlines are also waiving certain ancillary fees, such as bag fees, in an attempt to increase loyalty to their brand. Moreover, by introducing new technology, including mobile applications, airlines hope to make it easier to purchase tickets from their websites. However, according to both consumer advocacy organizations we spoke with, as network and low-cost airlines compete more on service, attempt to differentiate their brands, and take steps to increase consumer loyalty, an adverse effect is that consumers have less ability to comparison shop and airlines compete less on price. We interviewed 26 stakeholders representing different facets of the airline industry—including academic and research experts, airline representatives, industry trade associations, industry analysts from credit rating agencies and financial services firms, an airport authority, organizations representing the travel industry, and consumer advocacy organizations—to help identify challenges to competition in the airline industry (see app. I for a complete list).
Although our analysis found that since 2007 the structure of the market, with respect to the average number of effective competitors and average concentration levels, has not substantially changed in the highest-traffic city-pair markets, many stakeholders we spoke to stressed that there are competition concerns beyond the number of effective competitors and level of concentration. Stakeholders identified a number of challenges, which we grouped into four categories: (1) barriers that prevent airlines from entering the industry or specific markets; (2) the lack of transparency in airline fare and fee disclosure; (3) the effects of consolidation on competition; and (4) emerging international competition concerns. Certain stakeholders also suggested several actions the federal government could take that in their view would help address these challenges—including removing slot controls, which limit the number of takeoffs and landings per hour at four capacity-constrained airports; eliminating airline loyalty programs; and encouraging the completion of federal regulations that would provide consumers greater transparency in fares and fees. A majority of the stakeholders we interviewed cited barriers to entry as a key challenge to competition in the domestic passenger airline industry. Barriers to entry are practices or conditions that impede a firm's ability to enter either an industry or specific markets within the industry. As entry, or the threat thereof, may have a disciplining effect on incumbent firms' behavior, barriers that make entry more difficult can hamper competition and enable incumbent firms to charge higher prices without fear that doing so will attract new competitors. The last major airline to enter the U.S. market was Virgin America in 2007. We grouped the entry barriers stakeholders identified into three primary categories: barriers to airport access, diminished cost advantages and access to capital for new airlines, and advantages held by network airlines.

• Airport access: The inability to obtain access and secure a foothold at some key airports was identified as a major entry barrier by 10 of the 26 stakeholders we interviewed. These stakeholders drew attention to slot controls that are in place at four major congested airports. We have previously found that slot controls allow airports to manage congestion; however, they also limit access for new entrants to some of the busiest airports in the country. According to one industry analyst, difficulty in obtaining landing rights at these airports makes it harder for new airlines to compete for the most lucrative business travelers. As we found in September 2012, airlines that hold slots might underutilize them by, for example, using smaller aircraft instead of giving the slots up, thereby reducing access by new-entrant airlines that could use the slots to offer new service or lower fares, and also limiting passenger growth at these airports. In addition to slot controls, 5 stakeholders, including academic and research experts and travel and consumer advocacy organizations, pointed to limited access to gates and facilities at other airports as an entry barrier. According to DOT, consolidation has made it increasingly difficult for certain airports to secure financial approvals for infrastructure projects that could allow greater access for new entrants (e.g., by building new gates).
According to DOT officials, many airports are bound by majority-in-interest provisions, which in effect give the largest airlines at airports the ability to veto or delay major capital infrastructure projects. The federal government has taken steps to begin to address both slot and airport-access challenges. For example, DOJ's settlement approving the American-US Airways merger required the merging airlines to divest slots and open up gates and other facilities to facilitate competition from low-cost airlines at seven key airports around the country. Specifically, American Airlines and US Airways surrendered 104 slots at Ronald Reagan Washington National Airport, which were divested to the low-cost airlines Southwest, JetBlue, and Virgin America. The airlines also divested 34 slots at New York's LaGuardia Airport to Southwest and Virgin America. The settlement is intended to mitigate any anticompetitive effects of the merger by allowing low-cost airlines to expand into new markets and provide the opportunity for more competition to the remaining major network airlines. The Federal Aviation Administration (FAA) is also developing a new rulemaking to replace the current temporary orders limiting scheduled operations at John F. Kennedy International Airport, LaGuardia Airport, and Newark Liberty International Airport and address congestion and delay issues at each of these airports. The draft notice of proposed rulemaking is currently under review at the Office of Management and Budget. Additionally, DOT's Office of the Secretary of Transportation and FAA attempt to advance airline competition at larger commercial service airports through their review of airport competition plans. Large- or medium-hub airports at which one or two airlines control 50 percent or more of the passenger boardings are required to submit competition plans to demonstrate how their leasing and financing practices will provide competitive access to airlines attempting to initiate service at those airports. DOT officials reported that the agency reviews approximately 40 airport competition plans or plan updates annually, and since 2011, seven airports have become newly subject to competition plan requirements. One stakeholder we spoke with, however, highlighted concerns with the efficacy of competition plans at slot-controlled airports.

• Diminished cost advantages and access to capital for new entrants: Eight stakeholders, including academic and research experts, industry analysts, and several airline representatives, identified cost challenges—including limited available capital and the cost of jet fuel—as significant obstacles for new airlines seeking to enter the market. Previously, new airlines were often able to compete with incumbents by exploiting certain cost advantages, such as lower operating costs. However, any cost advantage that a new entrant might have had relative to larger airlines has been muted by the price of fuel, which grew to approximately 30 percent of U.S. airlines' operating costs in 2012. While new entrants in the market have relied in the past on purchasing older, cheaper aircraft to establish their fleets, the rising cost of fuel has made these less fuel-efficient aircraft cost-prohibitive. Further, Boeing and Airbus have a backlog of aircraft orders, which makes it more difficult for a new airline to obtain new aircraft.
Representatives from two airlines and several industry analysts told us that another factor limiting entry has been the difficulty new airlines have faced in securing the capital needed to expand their fleets since the most recent recession.

• Network airline advantages: Eleven stakeholders, including academic and research experts, representatives from two airlines, a consumer advocacy organization, and two travel industry organizations, emphasized that the advantages the three consolidated network airlines maintain relative to smaller airlines are significant obstacles that make entry into the industry and individual new markets challenging. Specifically, according to an industry analyst and representatives from one airline, new entrants face a mature market with few domestic routes that are considered underserved. Further, American, Delta, and United have national networks that provide service to most domestic markets and many international destinations. A new airline that does not provide the same level of service in terms of destinations and frequency may not be able to compete with these airlines. Airline loyalty programs and corporate discounts, according to seven stakeholders, also create entry barriers. Representatives from one airline and a travel industry organization said that the corporate account agreements that network airlines create with Fortune 500 companies, which provide these companies discounts in exchange for a percentage of their corporate travel, can place smaller airlines that cannot provide such discounts at a significant disadvantage. Several stakeholders agreed, and academic research supports the idea, that airline loyalty programs, such as frequent flyer programs, can incentivize consumers to concentrate their flying with one airline to accumulate miles and rewards, even though other airlines' fares may be more competitively priced. Six stakeholders, including representatives from a low-cost airline, academic and research experts, and an industry trade association, also drew attention to an incumbent network airline's ability to respond to entry, or the threat thereof, in a particular market by dramatically increasing capacity, thereby lowering fares and hindering a new airline's ability to profitably serve the route.

Another key challenge cited by six stakeholders—including an academic and research expert, two consumer advocacy organizations, and three travel industry organizations—is the incomplete information about the total cost of air travel (e.g., taxes, ancillary fees, and surcharges) available to consumers at the time they purchase their ticket. These stakeholders emphasized that competition between airlines is undermined when consumers have limited ability to shop comparatively and make decisions about their air travel purchases without full fare and fee information. In 2011, DOT issued a final rule requiring that an airline's most prominently advertised airfare must be the full cost of the ticket, with government taxes, mandatory fees, and optional surcharges included. DOT officials also told us that there has been an increase in complaints regarding ancillary fees since airlines first imposed fees for checked baggage. We previously found, for example, that information about ancillary fees is not fully disclosed through all ticket distribution channels (e.g., online travel agencies like Expedia.com and Travelocity), making it difficult for consumers to compare the total cost of flights offered by different airlines.
We recommended in 2010 that DOT improve the disclosure of baggage fees and policies to passengers by requiring airlines to disclose fees consistently across all ticket distribution channels used by airlines. In May 2014, DOT issued a notice of proposed rulemaking to, among other things, make airline pricing of ancillary fees more transparent. Another rulemaking would require more detailed reporting of ancillary fees to DOT. The airline industry has generally opposed this effort, arguing that expanded reporting is too complex to be economically justified and could be used to impose new taxes. Six stakeholders we spoke with also raised concerns about the International Air Transport Association's (IATA) Resolution 787. Subject to approval by DOT, Resolution 787 proposes a technical standard for the pricing and sale of airline tickets using Extensible Markup Language (XML). Airlines believe the XML template will make it easier for airlines to offer consumers products in a “shopping basket” approach that includes the base fare as well as fees for features such as checked bags, preferred seats, in-flight Wi-Fi, and airport lounge access. However, an academic and research expert, two consumer advocates, and three travel industry organizations we spoke to raised concerns about the extent to which personal data provided by consumers will determine what travel options an airline may offer. Recently, a coalition of approximately 400 travel industry and consumer groups, including several of the stakeholders we spoke with, withdrew their objection to Resolution 787 and reached a negotiated agreement with IATA that limits Resolution 787 to a technical standard that would, if ultimately developed, be transparent and voluntary for the industry. In May 2014, DOT tentatively approved Resolution 787, finding that, subject to certain conditions, approval would be in the public interest, and directed interested parties to show cause why it should not approve the resolution. DOT's tentative conditions of approval include adding several safeguards to ensure that consumers shopping for air travel could not be required to disclose personal information and specifying that airlines and ticket agents would be obligated to follow their published privacy policies on the sharing and storing of personal information.

Industry stakeholders were divided with regard to the effect increasing consolidation in the airline industry has had on competition, specifically in light of the merger between American Airlines and US Airways. Seven stakeholders, including several network and low-cost airlines and consumer advocacy organizations, maintain that the settlement allowing the American and US Airways merger to go forward was not in the public interest. Specifically, while the settlement provides for slot or gate divestitures at seven major airports around the country, several consumer advocacy organizations maintain that the divestitures will not adequately protect against higher fares and fees and reduced service to smaller communities that may result from the merger. Two airlines—network and low-cost—also criticized DOJ for narrowly focusing on divesting slots at several airports to low-cost airlines, while another stakeholder criticized the settlement's focus on slot divestitures without the same attention to gate availability.
However, five other stakeholders, including two industry trade associations and three industry analysts, strongly supported the US Airways and American merger—along with consolidation in general—as a means to enhance the financial viability of the airlines. For example, one analyst told us that recent mergers are a market response to the financial challenges airlines experienced during the recent recession, and an industry trade association emphasized that the opportunity for airlines to combine operations has been critical to the industry's recent success.

Although our analysis focused on domestic airlines and markets, several stakeholders raised concerns about potential international challenges to competition. Two consumer advocacy organizations and two travel industry organizations highlighted the growth of immunized international alliances, whereby an airline may market seats on partners' flights, as a global development that has implications for domestic competition. DOT has exercised its statutory authority to grant certain groups of airlines within these alliances immunity from U.S. antitrust laws affecting international transportation, thereby permitting participants, for example, to coordinate on prices, scheduling, and marketing. Grants of immunity are made by the Secretary of Transportation on a discretionary basis. Several stakeholders we spoke with raised concerns that the antitrust immunity airlines in these alliances have been granted to cooperate in international markets may lead to cooperative behavior in domestic markets. For example, on trans-Atlantic routes where airlines would otherwise offer competing non-stop flights, competition may be limited and consumers adversely affected if the airlines are partners in an immunized alliance. A study by the Transportation Research Board reported that U.S. airlines that are less capable of providing international service could become weaker competitors, as they may be less likely to emerge or survive as challengers to network airlines that are part of international alliances. According to DOT, its policy on airline alliances, dating from its approval of the first immunized alliance between Northwest and KLM in 1993, recognizes that, although the industry is among the most inherently global of all network industries, it is still subject to regulations that limit how airlines can adapt to market conditions. Unlike many other global industries, according to DOT, the airline industry cannot pursue mergers among airlines based in different countries due to strict ownership and control laws maintained by many countries around the world. According to DOT, antitrust immunity is a method for allowing cooperative agreements between U.S. and foreign airlines to achieve public benefits that would otherwise not be possible.

Other stakeholders focused on international competition with regard to the ability of U.S. airlines to compete with foreign airlines. Specifically, representatives from an industry trade association and representatives from a network airline told us that U.S. airlines may be at a competitive disadvantage in relation to several airlines in China and the Middle East (e.g., China Airlines, Etihad Airways, and Emirates) that they assert receive government support. Representatives from one network airline told us that by losing traffic to these airlines abroad, domestic network service could be affected, as well as the financial health of the domestic airline industry.
Additionally, the Future of Aviation Advisory Committee report to the Secretary of Transportation noted that U.S. airlines are facing restrictive aviation agreements in growing markets in Asia and South America and face entry barriers—such as slot restrictions, air space limitations, and local ground-handling rules—that increase their operating costs and stifle competition.

Stakeholders offered contrasting perspectives regarding the role of the federal government in addressing the competition challenges they identified. Actions recommended by stakeholders who supported a federal role in addressing competition challenges were in most cases directed at narrow issues within the industry, as federal action is inherently limited in a deregulated industry. Further, because the structure of the airline industry is evolving, the full competitive effects of industry consolidation are unknown. Certain stakeholders we spoke with, including an industry analyst and airline representatives, were opposed to any federal actions to further enhance competition in the market. For example, according to representatives from one network airline, concerns about a competitive environment dominated by four large airlines do not mean that the federal government should interfere with the mechanics of the market. Conversely, seven stakeholders were supportive of a federal role but prioritized different actions to address concerns about competition. A majority of stakeholders did not identify any of the potential actions as the most critical for the federal government to take.

Reducing barriers to entry: Several industry stakeholders drew attention to reducing barriers to entry. For example, one airport authority said that slot controls should be removed to maximize capacity and encourage competition at the New York and Washington, D.C., airports. We have also recommended that FAA improve its administration of the slot control rules to enhance competition through greater transparency and airline access to slots. Additionally, two stakeholders supported either eliminating airline loyalty programs or taxing their benefits as a means to increase competition among airlines. The Internal Revenue Service announced in 2002 that it does not plan to pursue a tax enforcement program regarding promotional benefits such as frequent flyer miles. As a result, employees are currently able to keep mileage earned from flights that are paid for by their employer without being taxed for the value. Taxing benefits from airline reward programs, according to these stakeholders, would enhance competition by enabling airlines to compete route-to-route without regard to the extra benefit of frequent flyer miles.

Increasing fare transparency: The three travel industry organizations and two consumer advocacy organizations we interviewed supported a stronger federal role in increasing transparency and competition within the industry. Specifically, they encouraged DOT to finalize its proposed rulemaking on reporting ancillary revenue to help ensure that passengers are aware of the full cost of travel—including ancillary fees—at the time they purchase a ticket.

Mitigating anticompetitive effects of consolidation: To address any adverse effects industry consolidation has had on competition, two consumer advocacy and travel industry organizations argued that Congress should repeal a preemption provision in the Airline Deregulation Act of 1978, as amended.
This provision prohibits states or their political subdivisions from enacting or enforcing any law, regulation, rule, or other provision having the force and effect of law related to the price, route, or service of an airline. Since the industry was deregulated in 1978, air transportation has been almost exclusively under federal oversight, and the U.S. Supreme Court has interpreted the Airline Deregulation Act's preemption provision to preempt regulation of airline fare advertising under state consumer protection laws. The consumer advocacy and travel industry organizations argued that repealing this provision and allowing state attorneys general to sue airlines would increase discipline on the industry, enhance consumer protections, and benefit consumers. Additionally, one academic and research expert and a travel industry organization we spoke with recommended that DOJ conduct post-merger analyses to determine whether mergers have delivered the benefits, including efficiencies and cost savings, airlines have promised in advance.

Addressing global competition challenges: Several consumer advocacy and travel industry organizations recommended that the federal government place more scrutiny on international alliances by conducting regular reviews to evaluate the effects of antitrust immunity. Two stakeholders supported a federal role in helping U.S. airlines compete in the global market because, they assert, government support and minimal regulatory burdens in some foreign countries give airlines from Persian Gulf states, such as Etihad Airways, Emirates, and Qatar Airways, a competitive advantage over U.S. airlines. One industry trade association and representatives from one network airline we spoke with were supportive of policies that would enable U.S. airlines to more effectively compete with international airlines. These stakeholders advocated ensuring that U.S. Open Skies agreements, which the U.S. signs with other countries to allow airlines access to international markets, contain provisions that support high labor standards and protect U.S. aviation jobs, and rejecting any new or increased taxes or fees on the airline industry.

We provided a draft of this report to DOJ and DOT for review and comment. Both DOJ and DOT provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Attorney General of the United States, the Secretary of Transportation, and the appropriate congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

The objectives of this report were to examine (1) how the financial health of the U.S. airline industry has changed since 2007; (2) changes to the structure of the market since 2007; (3) how consumers have been affected by changes in the financial health and market structure of the U.S. airline industry; and (4) what stakeholders believe are the key challenges to airline competition and actions the federal government could take to address these challenges.
To examine changes to the financial health of the domestic airline industry since 2007, we analyzed airline financial and operational data, reviewed relevant studies, and interviewed industry experts. We divided the airline industry into network, low-cost, and regional airlines. While there is variation in the size and financial condition of the airlines within these groups, there are more similarities than differences. The eight network airlines have adopted hub-and-spoke network models, which can be more expensive to operate than point-to-point service. Low-cost airlines are typically smaller and generally employ a less costly point-to-point service model. The eight low-cost airlines (AirTran Airways, Allegiant Air, America West Airlines, Frontier Airlines, JetBlue, Southwest Airlines, Spirit Airlines, and Virgin America) had consistently lower unit costs than the eight network airlines (Alaska Airlines, American Airlines, Continental Airlines, Delta Air Lines, Hawaiian Airlines, Northwest Airlines, United Airlines, and US Airways). We also included the 30 regional airlines that accounted for 99 percent of passengers on regional airlines in 2012. These airlines operate smaller aircraft and provide service to smaller communities. We utilized Department of Transportation (DOT) Form 41 financial and operational data submitted to DOT by U.S. passenger airlines for the years 2007 through 2012, as these were the most recent and complete annual data. All dollar figures in this report are nominal unless otherwise noted. We analyzed these data using various metrics for airline financial performance identified from our previous work. We obtained these data from Diio, a private contractor that provides online access to U.S. airline financial, operational, and passenger data with a query-based interface. To assess the reliability of these data, we reviewed the quality control procedures used by Diio and DOT, interviewed DOT officials responsible for data collection efforts, and subsequently determined that the data were sufficiently reliable for our purposes. We also reviewed government and expert data analyses, research, and studies, as well as our own previous studies. The expert research and studies, where applicable, were reviewed by a GAO economist or were corroborated with additional sources to determine that they were sufficiently reliable for our purposes. Finally, we conducted interviews with airline representatives, industry trade associations, industry analysts at credit rating agencies and financial services firms, and other industry stakeholders (see table 4 below). The analysts and experts were identified and selected based on a literature review, prior GAO work, and recommendations from within the industry. See below for a description of our method for selecting these stakeholders.

To examine how the airline industry's market structure has changed since 2007, we analyzed data from DOT's Origin and Destination Survey, which includes fare and itinerary information on every 10th airline ticket sold; reviewed academic studies assessing competition; and interviewed DOT officials, airline representatives, and aviation industry stakeholders. The data sample comprises approximately 91,000 airport-pair markets for each calendar year 2007 through 2012. We excluded tickets with international, Alaskan, or Hawaiian destinations.
We eliminated Alaskan and Hawaiian destinations because cost and competitive conditions involving these destinations are likely to be considerably different from those on routes within the continental U.S., and therefore it was not appropriate to include these types of routes in our analysis. Since only the airline issuing the ticket is identified, regional airline traffic is counted under the network parent or partner airline. To assess the reliability of these data, we reviewed the quality control procedures used by Diio, our data provider, and DOT, interviewed DOT officials responsible for data collection efforts, and subsequently determined that the data were sufficiently reliable for our purposes. To analyze changes in the number of effective competitors and market concentration, we performed a number of steps to aggregate and filter the data. First, since the ticket data contain one-way-direction ticket information, we combined data on one-way trips traveling in either direction for a given market defined by two cities (or airports). For example, we combined the traffic going from Lehigh Valley International Airport (ABE) to Abilene Regional Airport (ABI) with traffic traveling from ABI to ABE to obtain a total passenger count of all traffic between the two airports. Second, we filtered the data to include only those airport-pair markets with at least 520 passengers in one direction or 1,040 passengers for round-trip traffic, because markets with fewer passengers would be too small to ensure statistical accuracy. This filter removed 6 percent of the passengers from the full dataset. Next, we defined an effective competitor as an airline with at least 5 percent of total traffic. These are the same minimum passenger and market share filters that we have previously used to assess whether an airline has sufficient presence in a market to affect competition. Finally, we created separate market-level data sets based on two different market definitions: (1) airport-pair and (2) city-pair, as defined by DOT. The most straightforward definition of a market is the airport-pair, or travel between two airports. However, the largest cities often contain several commercial airports that compete for passengers and are in some cases treated as a single destination. This analysis focused on domestic city-pair markets, which represent air transportation between two cities. City-pair markets are typically viewed as the basic, relevant market for airline travel in the U.S. For each version of the data, we calculated (1) the proportion of total passengers carried by each airline in the market; (2) the weighted and unweighted average number of effective competitors (defined as having at least 5 percent of total passenger traffic in the market); and (3) the average Herfindahl-Hirschman Index (HHI), which is a measure of the level of concentration in a market and provides an indication of changes in the level of competition. HHI is calculated by squaring the market share of each airline competing in the market and then summing the results. For example, a market consisting of four firms—two of which have market shares of 30 percent and two of which have market shares of 20 percent—has an HHI of 2,600 (30² + 30² + 20² + 20² = 2,600). To analyze changes in the average number of effective competitors and concentration based on the size of the passenger markets, we divided markets into quintiles based on the total passengers across all markets.
This means that each quintile had roughly 20 percent of the total passengers, but the number of markets in each quintile varied. These numbers also varied each year. For example, in 2012, we analyzed 10,434 city-pair markets representing about 411 million passengers (see table 2). In addition, we assessed the number of markets dominated by a single airline and the number of non-dominated markets for each quintile. We also divided markets into quintiles based on the total number of markets to gain additional understanding of changes in the smallest markets. This means that we assigned markets into quintiles in such a way that there were equal numbers of markets in each quintile, but varying numbers of passengers, as shown in table 3 below. In addition, to analyze the data by distance, we grouped the markets into five distance categories: 0-250 miles; 251-500 miles; 501-750 miles; 751-1,000 miles; and 1,001 miles and over. To determine changes in the structure of the market at the airport level, we analyzed DOT T-100 enplanement data for 2007 through 2012 to examine changes in passenger traffic among the airlines at each airport. The T-100 database includes traffic data (passenger and cargo) and operational data for U.S. and foreign airlines traveling to and from the United States. These data represent a 100 percent census of all traffic. To assess the reliability of these data, we reviewed the quality control procedures used by DOT, interviewed DOT officials responsible for data collection efforts, and subsequently determined that the data were sufficiently reliable for our purposes. We also evaluated the airlines' shares of total airport passengers and calculated an airport-level HHI. To determine how consumers have been affected by changes to the airline industry, we also assessed DOT T-100 enplanement data for 2007 through 2012 on service levels to large-, medium-, and small-hub airports and nonhub airports, reviewed academic studies and expert research, and conducted interviews with DOT and DOJ officials, six academic and research experts, representatives from five airlines, five travel and consumer advocacy organizations, four industry trade associations, and one airport authority (see table 4 below). Finally, to identify what stakeholders believe are the key challenges to competition and what actions the federal government could take to address these challenges, we interviewed six academic and research experts, representatives from five airlines, five travel and consumer advocacy organizations, five industry analysts, four industry trade associations, and one airport authority. Although the focus of our report is the domestic airline industry, we have included international issues raised by some stakeholders because they viewed these issues as having implications for competition in the domestic airline industry. We identified and selected these stakeholders based on prior GAO work, a review of relevant academic literature, and expertise in their field. The expert research and academic studies, where applicable, were either reviewed by a GAO economist or corroborated with additional sources to determine that they were sufficiently reliable for our purposes. The views of the 26 stakeholders should not be used to make generalizations about the views of all airline competition stakeholders, but do provide a range of perspectives on issues affecting the industry. In addition, we reviewed relevant studies and documentation from these stakeholders, and prior GAO and other government reports.
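To make the aggregation steps described above concrete, the following minimal sketch shows how the directional combination, the minimum-passenger filter, the effective-competitor count, and the HHI calculation fit together. It is an illustration only, not GAO's actual analysis code: the record layout, the field names, and the market_metrics helper are hypothetical, and the example data simply reproduce the four-firm HHI example from the text.

```python
from collections import defaultdict

MIN_ROUND_TRIP_PASSENGERS = 1040  # market-size filter described above
EFFECTIVE_SHARE = 0.05            # an effective competitor holds at least 5 percent of traffic

def market_key(origin, dest):
    # Treat both directions of travel as one airport-pair market.
    return tuple(sorted((origin, dest)))

def market_metrics(records):
    """records: iterable of (origin, dest, airline, passengers) tuples.

    Returns {market: (effective_competitors, hhi)} for markets that pass
    the minimum-passenger filter.
    """
    traffic = defaultdict(lambda: defaultdict(int))
    for origin, dest, airline, passengers in records:
        traffic[market_key(origin, dest)][airline] += passengers

    results = {}
    for market, by_airline in traffic.items():
        total = sum(by_airline.values())
        if total < MIN_ROUND_TRIP_PASSENGERS:
            continue  # too small to ensure statistical accuracy
        shares = [p / total for p in by_airline.values()]
        effective = sum(1 for s in shares if s >= EFFECTIVE_SHARE)
        # HHI: square each market share (expressed in percent) and sum the results.
        hhi = sum((100 * s) ** 2 for s in shares)
        results[market] = (effective, hhi)
    return results

# Four airlines with 30/30/20/20 percent shares, as in the example in the text.
example = market_metrics([
    ("ABE", "ABI", "A1", 600), ("ABI", "ABE", "A2", 600),
    ("ABE", "ABI", "A3", 400), ("ABI", "ABE", "A4", 400),
])
print(example)  # {('ABE', 'ABI'): (4, 2600.0)}
```

Run against the filtered ticket data, a routine of this kind would yield the per-market effective-competitor counts and HHI values that are then averaged within each quintile.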
We conducted this performance audit from May 2013 through June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following individuals made important contributions to this report: Paul Aussendorf, Assistant Director; Amy Abramowitz; Sara Arnett; Jon Carver; Leia Dickerson; Geoff Hamilton; Delwen Jones; Mitch Karpman; SaraAnn Moessbauer; Josh Ormond; Dae Park; and Justin Reed.
Since 2007, there have been four major airline mergers. As a result of this consolidation, about 85 percent of passengers in the U.S. flew on one of four domestic airlines in 2013. Certain industry observers have raised concerns that consolidation could have adverse effects on airline competition, such as higher airfares and reduced service. Others argue that consumers stand to benefit from recent changes in the industry as profitable airlines reinvest in new planes and expand their networks. To assist Congress in overseeing changes in the airline industry, GAO was asked to examine the state of competition in the domestic passenger airline industry. This report addresses (1) changes to the financial health of the U.S. airline industry since 2007; (2) changes to the structure of the market since 2007; (3) how consumers have been affected by these changes; and (4) views of stakeholders on the key challenges to airline competition and actions the federal government could take to address these challenges. GAO analyzed airline financial data reported to DOT, as well as DOT passenger itinerary data from 2007 through 2012, the latest year available. GAO interviewed DOT and DOJ officials and 26 stakeholders, selected based on prior work and their expertise in the field, from organizations in sectors such as academia, airlines, consumer advocacy, and finance. Their views are not generalizable, but provide perspectives on a range of competition issues. Both DOJ and DOT provided technical comments on a draft of this report, which were incorporated as appropriate.

The U.S. passenger airline industry has returned to profitability following the recent economic recession. From 2007 through 2012, the industry generated approximately $21.7 billion in operating profits despite losing about $5.6 billion in 2008. U.S. airlines maintained approximately $13 billion in cash reserves in 2012. Growth in revenue has driven industry profits, aided by increased passenger traffic, “capacity restraint” (i.e., limiting the supply of available seats in relation to the level of demand), and revenue from ancillary fees for checking bags and other services. For example, baggage and reservation change fees collected by U.S. airlines increased from about $1.4 billion in 2007 to $6 billion in 2012. Additionally, unlike prior recoveries when airline capacity growth undermined the ability to charge profitable fares, airlines since 2009 have restrained capacity growth even though demand for air travel has risen with the economic recovery.

In recent years, the average number of competitors has not substantially changed in markets traveled by the majority of passengers, despite several major airline mergers. From 2007 through 2012, the average number of effective competitors (defined as airlines with at least a 5 percent market share) ranged from 4.3 to 4.5 in the markets with the most passengers. During this period, the average number of effective competitors in markets with the fewest passengers decreased slightly from 3.3 to 3 airlines. While these results reflect market changes that have occurred since several airlines merged, the American-US Airways merger occurred after GAO's analysis. The mergers created larger networks and new connections in some markets. Also, low-cost airlines have expanded since 2007, thereby adding new competitors into some larger markets. The structure of the market will continue to evolve as economic conditions change and the recent airline mergers are fully implemented.
In recent years, consumers have experienced higher airfares, additional fees, and fewer flights in certain markets, but also new services and expanded networks. Consumers paid about 4 percent more in real terms, on average, for air travel in 2012 than in 2007, without considering additional fees. The airline industry has reduced flights, especially to smaller airports, and consolidated service at large airports. Airlines have also invested in new aircraft and introduced new services, such as early boarding and entertainment options, in an attempt to differentiate products and increase revenue. Most airline stakeholders cited barriers to market entry, especially restrictions on takeoff and landing slots at four U.S. airports—Washington, D.C.'s Reagan National and three New York City area airports—as a major challenge to airline competition. Barriers that make airline entry more difficult can hamper competition and enable incumbent firms to charge and maintain higher prices. In addition, access to capital and the size advantages of major airlines present a formidable challenge for any new airline. Stakeholders suggested addressing challenges to competition by increasing capacity at congested airports, enhancing fare transparency, and allowing states a greater role in consumer regulation of airlines. However, stakeholders differed regarding the role of the federal government in addressing competition challenges, in part because changes to the airline industry due to consolidation are ongoing.
According to the Institute of Medicine, the federal government has a central role in shaping nearly all aspects of the health care industry as a regulator, purchaser, health care provider, and sponsor of research, education, and training. According to HHS, federal agencies fund more than a third of the nation's total health care costs. Given the level of the federal government's participation in providing health care, it has been urged to take a leadership role in driving change to improve the quality and effectiveness of medical care in the United States, including expanded adoption of IT. In April 2004, President Bush called for the widespread adoption of interoperable electronic health records within 10 years and issued an executive order that established the position of the National Coordinator for Health Information Technology within HHS as the government official responsible for the development and execution of a strategic plan to guide the nationwide implementation of interoperable health IT in both the public and private sectors. In July 2004, HHS released The Decade of Health Information Technology: Delivering Consumer-centric and Information-rich Health Care—Framework for Strategic Action. This framework described goals for achieving nationwide interoperability of health IT and actions to be taken by both the public and private sectors in implementing a strategy. HHS's Office of the National Coordinator for Health IT updated the framework's goals in June 2006 and included an objective for protecting consumer privacy. It identified two specific strategies for meeting this objective—(1) support the development and implementation of appropriate privacy and security policies, practices, and standards for electronic health information exchange and (2) develop and support policies to protect against discrimination based on personal health information, such as denial of medical insurance or employment. In July 2004, we testified on the benefits that effective implementation of IT can bring to the health care industry and the need for HHS to provide continued leadership, clear direction, and mechanisms to monitor progress in order to bring about measurable improvements. Since then, we have reported or testified on several occasions on HHS's efforts to define its national strategy for health IT. We have recommended that HHS develop the detailed plans and milestones needed to ensure that its goals are met; HHS agreed with our recommendation and has taken some steps to define more detailed plans. In our report and testimonies, we have described a number of actions that HHS, through the Office of the National Coordinator for Health IT, has taken toward accelerating the use of IT to transform the health care industry, including the development of its framework for strategic action. We have also described the Office of the National Coordinator's continuing efforts to work with other federal agencies to revise and refine the goals and strategies identified in its initial framework. The current draft framework—The Office of the National Coordinator: Goals, Objectives, and Strategies—identifies objectives for accomplishing each of four goals, along with 32 high-level strategies for meeting the objectives, including the two strategies for protecting consumer privacy. Federal health care reform initiatives of the early- to mid-1990s were inspired in part by public concern about the privacy of personal medical information as the use of health IT increased.
Recognizing both that benefits and efficiencies could be gained by the use of information technology in health care and that comprehensive federal medical privacy protections were needed, Congress passed HIPAA. This law provided for the Secretary of HHS to establish the first broadly applicable federal privacy and security measures designed to protect individual health care information. HIPAA required the Secretary of HHS to promulgate regulatory standards to protect certain personal health information held by covered entities, which are certain health plans, health care providers, and health care clearinghouses. It also required the Secretary of HHS to adopt security standards for covered entities that maintain or transmit health information to ensure that such information is reasonably and appropriately safeguarded. The law requires that covered entities take certain measures to ensure the confidentiality and integrity of the information and to protect it against reasonably anticipated unauthorized use or disclosure and threats or hazards to its security. HIPAA provides authority to the Secretary to enforce these standards. The Secretary has delegated administration and enforcement of the privacy standards to the department's Office for Civil Rights and enforcement of the security standards to the department's Centers for Medicare and Medicaid Services. Most states have statutes that in varying degrees protect the privacy of personal health information. HIPAA recognizes this and specifically provides that its implementing regulations do not preempt contrary provisions of state law if the state laws impose more stringent requirements, standards, or specifications than the federal privacy rule. In this way, the law and its implementing rules establish a baseline of mandatory minimum privacy protections and define basic principles for protecting personal health information. The Secretary of HHS first issued HIPAA's Privacy Rule in December 2000, following public notice and comment, but later modified the rule in August 2002. Subsequent to the issuance of the Privacy Rule, the Secretary issued the Security Rule in February 2003 to safeguard electronic protected health information and help ensure that covered entities have proper security controls in place to provide assurance that the information is protected from unwarranted or unintentional disclosure. The Privacy Rule reflects basic privacy principles for ensuring the protection of personal health information. Table 1 summarizes these principles. HHS and its Office of the National Coordinator for Health IT have initiated actions to identify solutions for protecting health information. Specifically, HHS awarded several health IT contracts that include requirements for developing solutions that comply with federal privacy and security requirements, consulted with the National Committee on Vital and Health Statistics (NCVHS) to develop recommendations regarding privacy and confidentiality in the Nationwide Health Information Network, and formed the American Health Information Community (AHIC) Confidentiality, Privacy, and Security Workgroup to frame privacy and security policy issues and identify viable options or processes to address these issues.
The Office of the National Coordinator for Health IT intends to use the results of these activities to identify technology and policy solutions for protecting personal health information as part of its continuing efforts to complete a national strategy to guide the nationwide implementation of health IT. However, HHS is in the early stages of identifying solutions for protecting personal health information and has not yet defined an overall approach for integrating its various privacy-related initiatives and for addressing key privacy principles. HHS awarded four major health IT contracts in 2005 intended to advance the nationwide exchange of health information—Privacy and Security Solutions for Interoperable Health Information Exchange, Standards Harmonization Process for Health IT, Nationwide Health Information Network Prototypes, and Compliance Certification Process for Health IT. These contracts include requirements for developing solutions that comply with federal privacy requirements. The contract for privacy and security solutions is intended to specifically address privacy and security policies and practices that affect nationwide health information exchange. HHS's contract for privacy and security solutions is intended to provide a nationwide synthesis of information to inform privacy and security policymaking at federal, state, and local levels and the Nationwide Health Information Network prototype solutions for supporting health information exchange across the nation. In summer 2006, the privacy and security solutions contractor selected 34 states and territories as locations in which to perform assessments of organization-level privacy- and security-related policies and practices that affect interoperable electronic health information exchange and their bases, including laws and regulations. The contractor is supporting the states and territories as they (1) assess variations in organization-level business policies and state laws that affect health information exchange, (2) identify and propose solutions while preserving the privacy and security requirements of applicable federal and state laws, and (3) develop detailed plans to implement solutions. The privacy and security solutions contractor is to develop a nationwide report that synthesizes and summarizes the variations identified, the proposed solutions, and the steps that states and territories are taking to implement their solutions. It is also to address policies and practices followed in nine domains of interest: (1) user and entity authentication, (2) authorization and access controls, (3) patient and provider identification to match identities, (4) information transmission security or exchange protocols (encryption, etc.), (5) information protections to prevent improper modification of records, (6) information audits that record and monitor the activity of health information systems, (7) administrative or physical security safeguards required to implement a comprehensive security platform for health IT, (8) state law restrictions about information types and classes and the solutions by which electronic personal health information can be viewed and exchanged, and (9) information use and disclosure policies that arise as health care entities share clinical health information electronically. These domains of interest address the use and disclosure principle and the security principle.
In June 2006, NCVHS, a key national health information advisory committee, presented to the Secretary of HHS a report recommending actions regarding privacy and confidentiality in the Nationwide Health Information Network. The recommendations cover topics that are, according to the committee, central to challenges for protecting health information privacy in a national health information exchange environment. The recommendations address aspects of key privacy principles, including (1) the role of individuals in making decisions about the use of their personal health information, (2) policies for controlling disclosures across a nationwide health information network, (3) regulatory issues such as jurisdiction and enforcement, (4) use of information by non-health care entities, and (5) establishing and maintaining the public trust that is needed to ensure the success of a nationwide health information network. The recommendations are being evaluated by the AHIC work groups, the Certification Commission for Health IT, the Health Information Technology Standards Panel, and other HHS partners. In October 2006, the committee recommended that HIPAA privacy protections be extended beyond the current definition of covered entities to include other entities that handle personal health information. It also called on HHS to create policies and procedures to accurately match patients with their health records and to require functionality that allows patient or physician privacy preferences to follow records regardless of location. The committee intends to continue to update and refine its recommendations as the architecture and requirements of the network advance. AHIC, a commission that provides input and recommendations to HHS on nationwide health IT, formed the Confidentiality, Privacy, and Security Workgroup in July 2006 to frame privacy and security policy issues and to solicit broad public input to identify viable options or processes to address these issues. The recommendations to be developed by this work group are intended to establish an initial policy framework and address issues including methods of patient identification, methods of authentication, mechanisms to ensure data integrity, methods for controlling access to personal health information, policies for breaches of personal health information confidentiality, guidelines and processes to determine appropriate secondary uses of data, and a scope of work for a long-term independent advisory body on privacy and security policies. The work group has defined two initial work areas—identity proofing and user authentication—as initial steps necessary to protect confidentiality and security. These two work areas address the security principle. In January 2007, the work group presented recommendations on performing patient identity proofing to AHIC. The recommendations were approved by AHIC and submitted to HHS. The work group intends to address other key privacy principles, including, but not limited to, maintaining data integrity and controlling access. It plans to address policies for breaches of confidentiality and guidelines and processes for determining appropriate secondary uses of health information, an aspect of the use and disclosure privacy principle. HHS has taken steps intended to address aspects of key privacy principles through its contracts and with advice and recommendations from its two key health IT advisory committees.
For example, the privacy and security solutions contract is intended to address all the key privacy principles in HIPAA. Additionally, the uses and disclosures principle is to be further addressed through the advisory committees' recommendations and guidance. The security principle is to be addressed through the definition of functional requirements for a nationwide health information network, the definition of security criteria for certifying electronic health record products, the identification of information exchange standards, and recommendations from the advisory committees regarding, among other things, methods to establish and confirm a person's identity. The committees have also made recommendations for addressing authorization for uses and disclosure of health information and intend to develop guidelines for determining appropriate secondary uses of data. HHS has made some progress toward protecting personal health information through its various privacy-related initiatives. For example, during the past 2 years, HHS has defined initial criteria and procedures for certifying electronic health records, resulting in the certification of over 80 IT vendor products. In January 2007, HHS contractors presented four initial prototypes of a Nationwide Health Information Network (NHIN). However, the other contracts have not yet produced final results. For example, the privacy and security solutions contractor has not yet reported its nationwide assessment of state and organizational policy variations. Additionally, HHS has not accepted or agreed to implement the recommendations made in June 2006 by the NCVHS, and the AHIC Confidentiality, Privacy, and Security Workgroup is in the very early stages of efforts that are intended to result in privacy policies for nationwide health information exchange. HHS is in the early phases of identifying solutions for safeguarding personal health information exchanged through a nationwide health information network and has not yet defined an approach for integrating its various efforts or for fully addressing key privacy principles. For example, milestones for integrating the results of its various privacy-related initiatives and resolving differences and inconsistencies have not been defined, and it has not been determined which entity participating in HHS's privacy-related activities is responsible for integrating these various initiatives and the extent to which their results will address key privacy principles. Until HHS defines an integration approach and milestones for completing these steps, its overall approach for ensuring the privacy and protection of personal health information exchanged throughout a nationwide network will remain unclear. The increased use of information technology to exchange electronic health information introduces challenges to protecting individuals' personal health information. In our report, we identify and summarize key challenges described by health information exchange organizations: understanding and resolving legal and policy issues, particularly those resulting from varying state laws and policies; ensuring appropriate disclosures of the minimum amount of health information needed; ensuring individuals' rights to request access to and amendments of health information to ensure it is correct; and implementing adequate security measures for protecting health information. Table 2 summarizes these challenges.
Understanding and Resolving Legal and Policy Issues

Health information exchange organizations bring together multiple and diverse health care providers, including physicians, pharmacies, hospitals, and clinics that may be subject to varying legal and policy requirements for protecting health information. As health information exchange expands across state lines, organizations are challenged with understanding and resolving data-sharing issues introduced by varying state privacy laws. HHS recognized that sharing health information among entities in states with varying laws introduces challenges and intends to identify variations in state laws that affect privacy and security practices through the privacy and security solutions contract that it awarded in 2005.

Ensuring Appropriate Disclosures of Health Information

Several organizations described issues associated with ensuring appropriate disclosure, such as determining the minimum data necessary that can be disclosed in order for requesters to accomplish the intended purposes for the use of the health information. For example, dieticians and health claims processors do not need access to complete health records, whereas treating physicians generally do. Organizations also described issues with obtaining individuals' authorization and consent for uses and disclosures of personal health information and difficulties with determining the best way to allow individuals to participate in and consent to electronic health information exchange. In June 2006, NCVHS recommended to the Secretary of HHS that the department monitor the development of different approaches and continue an open, transparent, and public process to evaluate whether a national policy on this issue would be appropriate.

Ensuring Individuals' Rights to Request Access and Amendments to Health Information to Ensure It Is Correct

As the exchange of personal health information expands to include multiple providers and as individuals' health records include increasing amounts of information from many sources, keeping track of the origin of specific data and ensuring that incorrect information is corrected and removed from future health information exchange could become increasingly difficult. Additionally, as health information is amended, HIPAA rules require that covered entities make reasonable efforts to notify certain providers and other persons that previously received the individuals' information. The challenges associated with meeting this requirement are expected to become more prevalent as the number of organizations exchanging health information increases.

Implementing Adequate Security Measures for Protecting Health Information

Adequate implementation of security measures is another challenge that health information exchange providers must overcome to ensure that health information is adequately protected as health information exchange expands. For example, user authentication will become more difficult when multiple organizations that employ different techniques exchange information. The AHIC Confidentiality, Privacy, and Security Workgroup recognized this difficulty and identified user authentication as one of its initial work areas for protecting confidentiality and security.

To increase the likelihood that HHS will meet its strategic goal to protect personal health information, we recommended in our report that the Secretary of Health and Human Services define and implement an overall approach for protecting health information as part of the strategic plan called for by the President. This approach should:
1. Identify milestones and the entity responsible for integrating the outcomes of its privacy-related initiatives, including the results of its four health IT contracts and recommendations from the NCVHS and AHIC advisory committees.

2. Ensure that key privacy principles in HIPAA are fully addressed.

3. Address key challenges associated with legal and policy issues, disclosure of personal health information, individuals' rights to request access and amendments to health information, and security measures for protecting health information within a nationwide exchange of health information.

In commenting on a draft of our report, HHS disagreed with our recommendation and referred to “the department's comprehensive and integrated approach for ensuring the privacy and security of health information within nationwide health information exchange.” However, in recent discussions with GAO, the National Coordinator for Health IT agreed with the need for an overall approach to protect health information and stated that the department was initiating steps to address our recommendation. Further, since our report was issued, HHS has reported that it has undertaken additional activities to address privacy and security concerns. For example:

● NCVHS's subcommittee on privacy and confidentiality is drafting additional recommendations for the Secretary of HHS regarding the expansion of the HIPAA Privacy Rule coverage to entities that are not currently covered. The recommendations are expected to be presented to the NCVHS at its meeting later this month.

● The privacy and security solutions contractor is in the process of analyzing and summarizing 34 states' final assessments of organization-level business practices and summaries of critical observations and key issues. Its initial assessment identified challenges that closely parallel those identified in our report. HHS plans to finalize the findings and final reports from the contractor after the contract ends at the end of this month.

● HHS awarded another contract, the State Alliance for e-Health, which is intended to address state-level health IT issues, including privacy and security challenges and solutions. In January 2007, the alliance identified the protection of health information as a guiding principle for its work. The alliance plans to identify privacy practices and policies to help ensure the protection of personal health information exchanged within a nationwide health information network.

In summary, concerns about the protection of personal health information exchanged electronically within a nationwide health information network have increased as the use of health IT and the exchange of electronic health information has also increased. HHS and its Office of the National Coordinator for Health IT have initiated activities that, collectively, are intended to protect health information and address aspects of key privacy principles. While progress continues to be made through the various initiatives, it remains highly important that HHS define a comprehensive approach and milestones for integrating its efforts, resolve differences and inconsistencies among them, fully address key privacy principles, ensure that recommendations from its advisory committees are effectively implemented, and sequence the implementation of key activities appropriately.
If implemented properly, HHS’s planned actions could help improve efforts to address key privacy principles and the related challenges, and ensure that the department meets its goal to safeguard personal health information as part of its national strategy for health IT. Mr. Chairman and members of the subcommittee, this concludes our statement. We would be happy to respond to any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact Linda D. Koontz at (202) 512-6240 or Valerie C. Melvin at (202) 512-6304 or by e-mail at koontzl@gao.gov or melvinv@gao.gov. Other key contributors to this testimony include Amanda C. Gill, Nancy E. Glover, M. Saad Khan, David F. Plocher, and Teresa F. Tucker. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In April 2004, President Bush called for the Department of Health and Human Services (HHS) to develop and implement a strategic plan to guide the nationwide implementation of health information technology (IT). The plan is to recommend methods to ensure the privacy of electronic health information. GAO was asked to summarize its January 2007 report. The report describes the steps HHS is taking to ensure privacy protection as part of its national health IT strategy and identifies challenges associated with protecting electronic health information exchanged within a nationwide health information network. HHS and its Office of the National Coordinator for Health IT have initiated actions to identify solutions for protecting personal health information through several contracts and with two health information advisory committees. For example, in late 2005, HHS awarded several health IT contracts that include requirements for addressing the privacy of personal health information exchanged within a nationwide health information exchange network. HHS's privacy and security solutions contractor is to assess the organization-level privacy- and security-related policies, practices, laws, and regulations that affect interoperable health information exchange. In June 2006, the National Committee on Vital and Health Statistics made recommendations to the Secretary of HHS on protecting the privacy of personal health information within a nationwide health information network and, in August 2006, the American Health Information Community convened a work group to address privacy and security policy issues for nationwide health information exchange. While its activities are intended to address aspects of key principles for protecting the privacy of health information, HHS is in the early stages of its efforts and has therefore not yet defined an overall approach for integrating its various privacy-related initiatives and addressing key privacy principles, nor has it defined milestones for integrating the results of these activities. GAO identified key challenges associated with protecting electronic personal health information in four areas.
The 340B program was created in 1992 following the enactment of the Medicaid Drug Rebate Program and gives certain safety net providers discounts on outpatient drugs comparable to those made available to state Medicaid agencies. HRSA, through its Office of Pharmacy Affairs, is responsible for administering and overseeing the 340B program, which, according to federal standards, includes designing and implementing the policies and procedures necessary to enforce agency objectives and assess program risk. These policies and procedures include internal controls that provide reasonable assurance that an agency has effective and efficient operations and that program participants are in compliance with applicable laws and regulations. Eligibility for the 340B program is defined in the PHSA. Entities generally become eligible by receiving one of 10 federal grants or by being one of six hospital types. (See appendix II for a complete list of covered entity types and their eligibility requirements.) To participate in the 340B program, eligible entities must register with HRSA and be approved. Entity participation in the 340B program has grown over time to include over 16,500 covered entity sites (see fig. 1). Federal grantees are eligible for the 340B program by virtue of receiving certain federal grants administered by different agencies within HHS. Eligible grantees include clinics that offer primary and preventive care services, such as FQHCs, family planning clinics, and clinics that target specific conditions or diseases that raise public health concerns or are expensive to treat, such as hemophilia treatment centers. Participating clinics may offer eligible services at one or multiple sites. They also include state-operated ADAPs, which serve as a “payer of last resort” to cover the cost of providing HIV-related medications to certain low-income individuals. Hospitals eligible for the 340B program include certain DSH hospitals, children’s hospitals, freestanding cancer hospitals, rural referral centers, sole community hospitals, and critical access hospitals. While DSH hospitals have been eligible for the program since its inception, children’s hospitals became eligible in 2006, and the remaining hospital types became eligible through PPACA. Hospital eligibility requirements for the 340B program are more extensive than those for federal grantees because, unlike federal grantees, hospitals do not qualify for the program based on receipt of a federal grant. Rather, they must meet certain requirements intended to ensure that they perform a government function to provide care to the medically underserved. First, hospitals generally must meet specified DSH adjustment percentages to qualify; however, critical access hospitals are exempt from this requirement. Additionally, all hospitals must be (1) owned or operated by a state or local government, (2) a public or private, nonprofit corporation that is formally delegated governmental powers by a unit of state or local government, or (3) a private, nonprofit hospital under contract with a state or local government to provide health care services to low-income individuals who are not eligible for Medicaid or Medicare. Clinics and other sites affiliated with a hospital, but not located in the main hospital building, are eligible to participate in the 340B program if they are an integral part of the hospital, which HRSA has defined as reimbursable sites on the hospital’s most recently filed Medicare cost report.
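The hospital eligibility elements just described reduce to a simple decision rule. The short Python sketch below encodes that rule as this report summarizes it; the 11.75 DSH adjustment threshold comes from the statutory requirements listed in appendix II, the critical access hospital exemption is noted above, and all function and field names are hypothetical illustrations rather than anything drawn from HRSA's systems.

    # Illustrative sketch of the 340B hospital eligibility elements described
    # in this report. All names are hypothetical; the 11.75 threshold and the
    # critical access hospital exemption are taken from the report.

    DSH_THRESHOLD = 11.75  # DSH adjustment percentage most hospital types must exceed

    OWNERSHIP_BASES = {
        "government_owned_or_operated",   # owned or operated by a state or local government
        "delegated_governmental_powers",  # nonprofit formally delegated governmental powers
        "low_income_care_contract",       # private nonprofit contracted to serve certain low-income patients
    }

    def hospital_340b_eligible(hospital_type, dsh_adjustment_pct, ownership_basis):
        """Sketch of the PHSA hospital eligibility elements; not a legal test."""
        # Critical access hospitals are exempt from the DSH percentage requirement.
        if hospital_type != "critical_access" and dsh_adjustment_pct <= DSH_THRESHOLD:
            return False
        # Every hospital must also satisfy one of the three ownership or contract conditions.
        return ownership_basis in OWNERSHIP_BASES

    print(hospital_340b_eligible("dsh", 14.2, "government_owned_or_operated"))  # True
    print(hospital_340b_eligible("dsh", 9.0, "low_income_care_contract"))       # False

The sketch deliberately omits the separate question of affiliated sites, which, as described above, turns on whether a site appears as reimbursable on the hospital's most recently filed Medicare cost report.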
All drug manufacturers that supply outpatient drugs are eligible to participate in the 340B program and must participate if they want their drugs covered by Medicaid. To participate, manufacturers are required to sign a pharmaceutical pricing agreement with HHS in which both parties agree to certain terms and conditions and submit this agreement to HRSA. Covered entities typically purchase and dispense 340B drugs through pharmacies and can structure their programs in different ways. Entities can have (1) an in-house pharmacy model, in which the pharmacy is housed within the covered entity, (2) a contract pharmacy model, in which the entity contracts with an outside pharmacy to dispense drugs on its behalf, or (3) both. Historically, only covered entities that did not have an in-house pharmacy were allowed to contract with a single outside pharmacy to provide services. In March 2010, however, HRSA issued guidance allowing all covered entities—including those that have an in-house pharmacy—to contract with multiple outside pharmacies. Some covered entities use HRSA’s Pharmacy Services Support Center (PSSC) or private companies that provide technical assistance, information technology, and other services to help develop, implement, and manage their 340B pharmacy programs. The 340B price for a drug—often referred to as the 340B ceiling price—is based on a statutory formula and represents the highest price a drug manufacturer may charge covered entities; however, the provision establishing the 340B pricing formula indicates that manufacturers may sell a drug at a price that is lower than the ceiling price. As such, covered entities may negotiate prices below the ceiling price. Manufacturers are responsible for calculating the 340B price on a quarterly basis. Occasionally the formula results in a negative price for a 340B drug. In these cases, HRSA has instructed manufacturers to set the price for that drug at a penny for that quarter—referred to as HRSA’s penny pricing policy. Covered entities must follow certain program requirements as a condition of participating in the 340B program. For example, covered entities are prohibited from diverting any drug purchased at a 340B price to an individual who does not meet HRSA’s current definition of a patient. This definition was issued in 1996 and outlines three criteria, which generally state that diversion occurs when 340B-discounted drugs are given to individuals who are not receiving health care services from covered entities or are only receiving non-covered services, such as inpatient hospital services, from covered entities. (See table 1 for more information on HRSA’s definition of a 340B patient.) Covered entities are permitted to use drugs purchased at the 340B price for all individuals who meet the definition of a patient, whether or not they are low income, uninsured, or underinsured. Covered entities also are prohibited from subjecting manufacturers to duplicate discounts whereby drugs prescribed to Medicaid patients are subject to both the 340B price and a rebate through the Medicaid Drug Rebate Program. To avoid duplicate discounts, covered entities can either purchase drugs for Medicaid patients outside the 340B program, in which case the state Medicaid agency may claim the rebate, or they can use drugs purchased at 340B prices, in which case the agency may not claim the rebate.
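The quarterly price calculation and the penny pricing policy can be made concrete with a short sketch. The report describes the ceiling price only as the product of a statutory formula, so the average-manufacturer-price-minus-unit-rebate form used below is an assumption, and the dollar figures are invented.

    # Sketch of a quarterly 340B ceiling price calculation with HRSA's penny
    # pricing policy applied. The AMP-minus-unit-rebate form is an assumption;
    # the report states only that the price comes from a statutory formula.

    PENNY = 0.01

    def ceiling_price(avg_manufacturer_price, unit_rebate_amount):
        price = avg_manufacturer_price - unit_rebate_amount
        # When the formula goes negative, HRSA instructs manufacturers to
        # charge a penny for that quarter (the penny pricing policy).
        return round(price, 2) if price > 0 else PENNY

    print(ceiling_price(10.00, 3.50))  # 6.5
    print(ceiling_price(2.00, 2.75))   # 0.01 -- penny pricing in effect

Because the statute sets only a ceiling, an entity's actual purchase price may be lower still if it negotiates below this value.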
Covered entities that decide to use 340B drugs for Medicaid patients must notify HRSA so that it can coordinate with state Medicaid agencies for billing purposes. Further, certain covered entities—DSH hospitals, children’s hospitals, and freestanding cancer hospitals—are prohibited from purchasing outpatient drugs through any group purchasing organization (GPO). However, they may purchase drugs through the specified HRSA contractor, the Prime Vendor Program (PVP). Rural referral centers, sole community hospitals, and critical access hospitals participating in the 340B program are allowed to purchase outpatient drugs through any GPO. Drug manufacturers also must follow certain 340B program requirements. Specifically, they must sell outpatient drugs to covered entities at or below the statutorily determined price. In addition, HRSA’s nondiscrimination guidance prohibits manufacturers from distributing drugs in ways that discriminate against covered entities compared to other providers. This includes ensuring that drugs are made available to covered entities through the same avenue that they are made available to non-340B providers, and not conditioning the sale of drugs to covered entities on restrictive conditions that would have the effect of discouraging participation in the 340B program. About half of the covered entities we interviewed reported that they generated 340B program revenue that exceeded drug-related costs—the costs of purchasing and dispensing a drug—and revenue generation depended on several factors. Regardless of the amount of 340B revenue generated or the savings realized through 340B discounts, covered entities generally reported using the 340B program to support or expand access to services. Thirteen of the 29 covered entities we interviewed reported that they generated revenue through the 340B program that exceeded drug-related costs. Of the 16 remaining, 10 did not generate enough 340B revenue to cover all drug-related costs, and 6 either could not or did not report enough information for us to determine the extent to which they generated 340B revenue, due in part to their inability to track 340B-specific financial information. In general, 340B revenue—whether exceeding drug-related costs or not—was generated through reimbursement received for drugs dispensed by 340B in-house or contract pharmacies, though several factors affected the extent to which the covered entities we interviewed generated revenue through the program:

● Third-party reimbursement rates: Eighteen of the 29 covered entities we interviewed generated 340B revenue by receiving reimbursement from third-party payers and tracked revenue by payer source. Of the 18, most reported that they generated more 340B revenue from patients with private insurance and Medicare compared to other payers. However, a few of these covered entities reported that their ability to generate 340B revenue from private insurers, including Medicare Part D plans, was decreasing because some insurers were reducing contracted reimbursement rates for drugs based on the entity’s status as a 340B provider. Of the 18 covered entities, most of those that used 340B drugs for Medicaid patients reported that state-determined Medicaid reimbursement rates for these drugs were generally lower than those of private insurers and Medicare.
For example, most reported that Medicaid reimbursement for a 340B drug was set at the price paid for the drug—the 340B price or any lower price—plus a dispensing fee, the latter of which generally did not cover the costs of dispensing the drug. This is typically referred to as reimbursement at actual acquisition cost, which reduces a covered entity’s ability to generate revenue because the state, rather than the entity, benefits from any savings from purchasing drugs at the 340B price (a worked example of this margin arithmetic appears below). However, a few covered entities generated more 340B revenue through Medicaid than others because they had contractual agreements with their states to share 340B-related savings. Covered entities in two of the five states included in our selection had such agreements. Finally, a majority of the 18 covered entities reported that revenue generated from uninsured patients was lower than that from all other payers.

● ADAP status: Factors that affected 340B revenue generation for the five ADAPs we interviewed differed from those for other entity types, because unlike other covered entity types, ADAPs do not receive third-party reimbursement for drugs. Rather, ADAPs serve as a “payer of last resort” to cover the cost of providing HIV-related medications to certain low-income individuals who, for example, are uninsured and cannot afford to pay for drugs or who cannot afford their health insurance coverage for drugs. ADAPs can choose to cover costs of drugs by either paying for the drugs directly or by assisting patients with the costs associated with health insurance, including payments for premiums and co-payments or deductibles. When ADAPs purchase drugs directly, they realize 340B savings on drugs—either at the point of purchase or after the fact through manufacturer rebates—but do not generate revenue through the program. When ADAPs assist with patients’ health insurance by paying for co-payments or deductibles on a drug, they sometimes generate revenue by collecting the rebates representing the full 340B discount on a drug for which they may have paid only a portion of the price. Three of the five ADAPs we interviewed reported generating revenue this way.

● Ability to leverage resources to access the lowest drug prices: Some of the 29 covered entities we interviewed reported leveraging resources, such as through their larger parent organizations or the PVP, to access drugs at prices below the 340B ceiling price, potentially increasing the difference between the price paid for the drug and the reimbursement received. In addition, some covered entities said they had access to sophisticated information technology—for example, by contracting with private companies—or had more staff to help ensure that they were obtaining the lowest priced drugs.

As more people gain insurance coverage under PPACA, covered entities may serve more patients with private insurance and Medicaid, which may affect the extent to which they generate 340B revenue. One covered entity located in Massachusetts reported that, after the state implemented universal health care, it received more revenue from reimbursement for low-income patients who gained private insurance, but these patients often could not afford the associated co-payments or deductibles, which the entity covered. In addition, according to one ADAP we interviewed, as more individuals gain private insurance, the ADAP may increasingly choose to pay for health insurance for patients rather than paying for patients’ drugs directly.
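To make the margin arithmetic behind these reimbursement patterns concrete, the sketch below compares a hypothetical private-insurer reimbursement with Medicaid reimbursement at actual acquisition cost. All dollar amounts are invented for illustration.

    # Margin is reimbursement minus drug-related costs (purchase price plus
    # dispensing cost). Dollar amounts are invented.

    def margin(reimbursement, purchase_price, dispensing_cost):
        return reimbursement - (purchase_price + dispensing_cost)

    price_340b = 60.00   # hypothetical 340B purchase price
    dispensing = 12.00   # hypothetical cost to dispense

    # Private insurer: a contracted rate unrelated to the 340B price.
    print(margin(110.00, price_340b, dispensing))             # 38.0 -> revenue exceeds costs

    # Medicaid at actual acquisition cost: the 340B price plus a small
    # dispensing fee that does not cover the cost of dispensing.
    print(margin(price_340b + 5.00, price_340b, dispensing))  # -7.0 -> a loss

The second case shows why, under actual acquisition cost reimbursement, the 340B savings accrue to the state rather than to the covered entity.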
Choosing to pay for patients’ health insurance rather than for their drugs directly may enable an ADAP to generate revenue through the 340B program if it can claim more rebates for drugs for the newly insured patients. According to some covered entities, the impact of serving more Medicaid patients may depend on the Medicaid reimbursement rate that entities receive. For example, patients who gain Medicaid coverage may begin to seek services from covered entities, and for those entities that lose money on Medicaid patients, this may decrease their ability to generate 340B revenue. Conversely, for covered entities that have contractual agreements to share 340B-related savings with their states, the increased Medicaid population may increase their ability to generate 340B revenue. Regardless of the amount of revenue generated through the program, all of the 29 covered entities we interviewed reported that the 340B program, including the up-front savings they realized on the cost of drugs, allowed them to support their missions by maintaining services and lowering medication costs for patients, which is consistent with the purpose of the program. For example, some covered entities reported that they used the 340B revenue generated by certain patients to offset losses incurred from other patients, which helped support the financial stability of the organization and allowed them to maintain services. Further, one covered entity reported that without 340B revenue or the savings on drugs through its participation in the program, it would be unable to offer all the services it provides—both pharmaceutical and clinical—and another reported that it would have to close its outpatient pharmacy without the program. In addition to maintaining services, some covered entities passed 340B savings on to patients by providing lower-cost drugs to uninsured patients. For example, many covered entities determined the amount that a patient is required to pay based on the lower cost of 340B-priced drugs. In addition, the 13 covered entities that generated 340B revenue that exceeded drug-related costs were able to use this revenue to serve more patients and to provide services that they might not have otherwise provided, including additional service locations, patient education programs, and case management, which is also consistent with the purpose of the program. One covered entity, for example, reported that it used the revenue generated through the 340B program to provide additional service delivery sites in other parts of the state, which eliminated the need for some patients to travel more than 60 miles to receive services. A few covered entities reported using 340B revenue to support patient and family education programs, such as those where pharmacists provide education on drug interactions. Additionally, one covered entity reported using 340B program revenue to fund a case management program that did not generate any revenue on its own; some services provided through this program included arranging transportation for patients to receive clinical services, coordinating necessary specialty care, and providing translation services. Even though the uses of revenue generated through the 340B program were for similar purposes, some covered entities relied on the program more than others. For example, one FQHC reported that 340B revenue accounted for approximately 5 percent of its total budget and was used to provide additional services within the organization.
However, one hemophilia treatment center reported that 340B revenue accounted for about 97 percent of its total budget and was used to support all of its program operations. According to stakeholders we interviewed, manufacturers’ distribution of drugs at 340B prices generally did not affect providers’ access to drugs. For example, 36 of the 61 program stakeholders we interviewed did not report any effect on covered entities’ or non-340B providers’ access to drugs related to manufacturers’ distribution of drugs at 340B prices. These stakeholders represented a wide range of perspectives on the 340B program, including those representing manufacturers, covered entities, and non-340B providers. The remaining 25 program stakeholders—also representing a wide range of perspectives on the 340B program—reported that manufacturers’ distribution of drugs at 340B prices affected providers’ access to drugs primarily in two situations: (1) for intravenous immune globulin (IVIG), a lifesaving drug used to treat immune deficiencies, the supply of which is inherently limited; and (2) when there was a significant drop in the 340B price of a drug, which may result in increased demand for the drug by covered entities. Both situations relate to the restricted distribution of drugs, which may occur during shortages or when shortages are anticipated. Manufacturers restrict the distribution of IVIG on an ongoing basis because it is susceptible to shortages. Stakeholders, including five of the seven DSH hospitals we interviewed, reported that because of the restricted distribution of IVIG at 340B prices, 340B hospitals often must purchase some IVIG at higher, non-340B prices to meet their patients’ needs. For example, DSH hospitals reported that when they were unable to access IVIG at 340B prices, additional IVIG was available for purchase at higher, non-340B prices directly from manufacturers, from specialty pharmacies, or from GPOs. Moreover, one DSH hospital reported that it had to purchase about one-third of the IVIG it needed at non-340B prices—paying about $20,000 to $25,000 more per month than it would have paid at 340B prices. Although manufacturers’ distribution of IVIG at 340B prices may not meet 340B hospitals’ demand, some stakeholders, such as drug manufacturers, reported that changes in the amount of IVIG allocated for sale at 340B prices could negatively affect non-340B providers’ access to these drugs. For example, one IVIG manufacturer reported that it restricted its distribution of IVIG by allocating its supply based on the amount of the drug purchased by providers in 2004—allocating 95 percent of its projected monthly sales to non-340B providers and the remaining 5 percent to covered entities at the 340B price. This manufacturer stated that its distribution was fair, and that changing its distribution plan to increase the amount of IVIG available at 340B prices could negatively affect non-340B providers’ access to the drugs. However, HRSA officials told us that the allocation of IVIG in this way is neither sufficient nor fair. Nearly a third of the nation’s hospitals currently participate in the 340B program, and one large GPO we interviewed reported that 340B hospitals tended to be the larger hospitals in the company’s membership base.
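The allocation approach this manufacturer described can be sketched numerically. The 95/5 split is taken from the interview summarized above; the function, the monthly supply figure, and the rounding behavior are illustrative assumptions.

    # Sketch of a restricted-distribution allocation: a fixed share of
    # projected monthly sales is reserved for covered entities at the 340B
    # price, mirroring the 95/5 split one IVIG manufacturer described.

    def allocate_monthly_supply(projected_units, share_340b=0.05):
        units_340b = int(projected_units * share_340b)
        return {
            "340B": units_340b,                        # covered entities, at the 340B price
            "non_340B": projected_units - units_340b,  # all other providers
        }

    print(allocate_monthly_supply(100_000))  # {'340B': 5000, 'non_340B': 95000}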
Thus, if other manufacturers similarly restrict the distribution of IVIG at 340B prices, it is unlikely that covered entities’ demands will be met at the 340B price. Stakeholders reported that manufacturers’ distribution of drugs at 340B prices also affected providers’ access to drugs when the 340B prices dropped significantly. In certain cases, when the 340B price of a drug dropped, some covered entities stockpiled the drug, which resulted in shortages in the supply for other providers, including other covered entities. For example, two covered entities we interviewed reported challenges accessing drugs when their 340B prices dropped, because other entities purchased large amounts of these drugs. In other cases when the 340B prices dropped, manufacturers restricted the distribution of those drugs at 340B prices to ensure that all providers had equitable access. For example, one manufacturer reported that after the price of an oral contraceptive dropped to a penny as a result of HRSA’s penny pricing policy, it received an order from a covered entity that exceeded the manufacturer’s current national supply by 50 percent. In response, this manufacturer consulted with HRSA to ensure compliance with the agency’s nondiscrimination guidance and restricted the distribution of drugs at 340B prices by allocating its supply based on the projected demand in the market and providers’ past purchasing patterns. HRSA’s oversight of the 340B program is inadequate because it primarily relies on participants’ self-policing to ensure compliance. Changes in the settings where the program is used may heighten concerns about the inadequacy of HRSA’s oversight, and HRSA’s plans for improving oversight are uncertain. HRSA’s oversight of the 340B program is inadequate because it primarily relies on covered entities’ and manufacturers’ self-policing—that is, participants ensuring their own compliance with program requirements. Upon enrollment, HRSA requires both covered entities and manufacturers to certify that they will comply with applicable 340B program requirements and any accompanying agency guidance. As part of this certification, agency officials told us that they expect participants to develop the procedures necessary to ensure compliance, maintain auditable records that demonstrate compliance, and inform HRSA if violations occur. For example, covered entities must develop adequate safeguards to prevent drugs purchased at 340B prices from being diverted to non-eligible patients, such as inventory tracking systems through which 340B drugs are purchased and dispensed separately, and manufacturers must ensure that they properly calculate the 340B price of their drugs. In both cases, program participants must keep auditable records that can show that they have complied with program requirements and produce that documentation if requested by HRSA. HRSA officials told us that covered entities and manufacturers can also monitor each other’s compliance with program requirements, but in practice, participants may face limitations in doing so. For example, two covered entities we interviewed reported that it is difficult to determine whether they have been charged correctly for drugs because manufacturers’ calculations of 340B prices are not transparent—namely, there is no centralized list of 340B prices. An organization representing covered entities also told us that its members had reported this difficulty.
Similarly, three drug manufacturers we interviewed reported that, although they sometimes have suspected covered entities of diverting 340B drugs, it is difficult to prove that diversion took place. An organization representing some manufacturers explained that, although manufacturers have the authority to audit covered entities, they have conducted audits only in egregious circumstances, because agency requirements for these audits—such as a requirement to hire an independent third party to conduct them—are costly and administratively burdensome. HRSA’s guidance on key program requirements often lacks the necessary level of specificity to provide clear direction, making it difficult for participants to self-police or monitor others’ compliance and raising concerns that the guidance may be interpreted in ways that are inconsistent with its intent. For example, HRSA’s current guidance on the definition of a 340B patient is sometimes not specific enough to define the situations under which an individual is considered a patient of a covered entity for the purposes of 340B, and thus covered entities could interpret it either too broadly or too narrowly. Stakeholders we interviewed, including those representing covered entities and drug manufacturers, raised concerns that the guidance will be interpreted too broadly, leading to cases of unintended diversion—that is, using 340B drugs for individuals whom HRSA did not intend as eligible patients, but who may not be clearly prohibited in the guidance. However, one of these stakeholders representing covered entities also noted that, in order to ensure compliance, some entities may adhere to a narrow interpretation of the guidance and thus limit the benefit of the program for their organization. The agency itself has recognized the need to further specify the definition of a 340B patient to ensure that it is interpreted correctly. For example, HRSA officials told us that the definition currently includes individuals receiving health care services from providers affiliated with covered entities through “other arrangements,” as long as the responsibility for care provided remains with the entity. However, HRSA does not define “other arrangements,” and officials told us that what is meant by responsibility for care also needs to be clarified. As a result of the lack of specificity in the guidance, the agency has become concerned that some covered entities may be broadly interpreting the definition to include individuals such as those seen by providers who are only loosely affiliated with a covered entity and thus, for whom the entity is serving an administrative function and does not actually have the responsibility for care. In addition, HRSA has not issued guidance specifying the criteria under which hospitals that are not publicly owned or operated can qualify for the 340B program. Rather, the agency bases eligibility for these hospitals on the application of broad statutory requirements that they are either formally delegated governmental powers by a unit of a state or local government or have a contract with a state or local government to provide services to low-income individuals who are not eligible for Medicaid or Medicare. HRSA has stated that the determination of whether hospitals meet the first requirement is evaluated by the agency on a case-by-case basis.
For the second requirement, HRSA requires a state or local government official and a hospital executive to certify that a contract exists to meet the requirement, but does not require hospitals to submit their contracts for review or outline any criteria that must be included in the contracts, including the amount of care a hospital must provide to these low-income individuals. Therefore, hospitals with contracts that provide a small amount of care to low-income individuals not eligible for Medicaid or Medicare could claim 340B discounts, which may not be what the agency intended. Moreover, HRSA’s nondiscrimination guidance does not specify the practices that manufacturers should follow to ensure that drugs are equitably distributed to covered entities and non-340B providers when distribution is restricted. Some stakeholders we interviewed, such as covered entities, have raised concerns about the way IVIG manufacturers have interpreted and complied with the guidance in these cases, because covered entities have sometimes had to purchase IVIG at higher, non-340B prices. Additionally, one stakeholder reported that, under the current guidance, manufacturers can offer a certain amount of drugs at 340B prices and contend that they are complying with the guidance even when the distribution is not equitable. Although PPACA included a provision prohibiting manufacturers from discriminating against covered entities in the sale of 340B drugs, officials told us they do not have plans to provide any additional specificity to the nondiscrimination guidance. Finally, in the case of HRSA’s penny pricing policy, agency officials told us that it is well understood by 340B stakeholders, and the manufacturers we interviewed were generally aware of the policy. However, the agency has never formalized the guidance in writing, and there have been documented cases of manufacturers charging covered entities more than a penny for drugs when the policy should have been in effect. Beyond relying on participants’ self-policing, HRSA engages in few activities to oversee the 340B program and ensure its integrity, which agency officials said was primarily due to funding constraints. For example, HRSA officials told us that the agency verifies eligibility for the 340B program at enrollment, but does not periodically recertify eligibility for all covered entity types. As a result, there is the potential for ineligible entities to remain enrolled in the program. In addition, HRSA officials told us that they do not require a review of the procedures participants put in place to ensure compliance, and, although the agency has the authority to conduct audits of program participants to determine whether violations have occurred, it has never done so. For example, officials said that they do not verify whether covered entities have systems in place to prevent diversion. Also, while HRSA encourages manufacturers to work with the agency to develop processes for restricting the distribution of drugs that are equitable to covered entities and non-340B providers, the agency only reviews manufacturers’ plans to restrict access to drugs at 340B prices if a manufacturer contacts HRSA or concerns with a plan are brought to the agency’s attention. Similarly, although HRSA calculates 340B prices separately from manufacturers, officials told us that, at this time, the agency does not use these calculations to verify the price that manufacturers charge covered entities, unless an entity reports a specific pricing concern.
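The verification step described above, comparing what a manufacturer charged against HRSA's independently calculated ceiling price, could look like the following sketch. The drug codes, prices, and rounding tolerance are hypothetical.

    # Sketch of the price verification the report says HRSA performs only when
    # an entity raises a specific pricing concern: flag drugs for which the
    # charged price exceeds the independently calculated ceiling price.

    TOLERANCE = 0.005  # allow sub-penny rounding differences

    def flag_overcharges(charged_prices, hrsa_ceiling_prices):
        """Return the drug codes where the charged price exceeded the ceiling."""
        return [code for code, charged in charged_prices.items()
                if charged > hrsa_ceiling_prices.get(code, float("inf")) + TOLERANCE]

    charged = {"0001-0001": 6.55, "0001-0002": 0.01}
    ceilings = {"0001-0001": 6.50, "0001-0002": 0.01}
    print(flag_overcharges(charged, ceilings))  # ['0001-0001'] -- charged above the ceiling

Running such a comparison routinely, rather than only on request, is one way the agency's existing price calculations could support the broader oversight the report finds lacking.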
HRSA’s oversight activities are further limited because the agency lacks effective mechanisms to resolve suspected violations and enforce program requirements when situations of non-compliance occur. If covered entities and manufacturers are not able to resolve conflicts on their own, HRSA has had an informal dispute resolution process in place since 1996 through which program participants can request that HRSA review evidence of a suspected violation and the agency then decides whether to initiate the process. However, despite reports by program participants about suspected violations they were unable to resolve on their own, HRSA officials told us that they have only initiated the dispute resolution process twice since its inception. Additionally, HRSA has not issued regulations implementing monetary penalties for non-compliance established by PPACA, and HRSA has rarely utilized the sanctions that existed prior to PPACA. For example, participants found to be in violation of 340B program requirements face termination from the program. Yet according to HRSA officials, since the program’s inception, only two covered entities have been terminated from the program due to findings of program violations and no manufacturer has ever been terminated for this reason. Covered entities also are expected to pay back manufacturers for discounts received while out of compliance, and manufacturers are expected to pay back covered entities for overcharges. However, HRSA has not enforced these expectations and officials were unable to tell us the extent to which repayments have occurred. Because of HRSA’s reliance on self-policing to oversee the 340B program as well as its nonspecific guidance, the agency cannot provide reasonable assurance that covered entities and drug manufacturers are in compliance with program requirements and is not able to adequately assess program risk. As a result, covered entities may be inappropriately claiming 340B discounts from drug manufacturers or qualifying for the program when they should not be, potentially increasing the likelihood that manufacturers will offset providing lower prices to covered entities with higher prices for others in the health care system. Additionally, manufacturers may be charging covered entities more than the 340B price for drugs, which would limit the benefit of the program for these entities. Over time, the settings where the 340B program is used have shifted to more contract pharmacies and hospitals than in the past. According to HRSA officials, the number of covered entities using contract pharmacies has grown rapidly since its new multiple contract pharmacy guidance was issued in March 2010—as of July 2011, there were over 7,000 contract pharmacy arrangements in the program. Hospitals’ participation in the 340B program has also grown markedly in recent years. In 2011, the number of hospitals participating in the program was nearly three times what it was in 2005, and the number of these organizations, including their affiliated sites, was close to four times what it was in 2005 (see fig. 2). Further, although participation in the 340B program has increased among other covered entity types over time, hospitals’ participation in the 340B program has grown faster than that of federal grantees. In 2005, hospitals represented 10 percent of program participants, and as of July 2011, they represented 27 percent. 
Increased use of the 340B program by contract pharmacies and hospitals may result in a greater risk of drug diversion, further heightening concerns about HRSA’s reliance on participants’ self-policing to oversee the program. Operating the 340B program in contract pharmacies creates more opportunities for drug diversion compared to in-house pharmacies. For example, contract pharmacies are more likely to serve both patients of covered entities and others in the community; in these cases more sophisticated inventory tracking systems must be in place to ensure that 340B drugs are not diverted—intentionally or unintentionally—to non-340B patients. Also, for a number of reasons, operating the 340B program in the hospital environment creates more opportunities for drug diversion compared to other covered entity types. First, hospitals operate 340B pharmacies in settings where both inpatient and outpatient drugs are dispensed and must ensure that inpatients do not get 340B drugs. Second, hospitals tend to have more complex contracting arrangements and organizational structures than other entity types—340B drugs can be dispensed in multiple locations, including emergency rooms, on-site clinics, and off-site clinics. In light of this and given HRSA’s nonspecific guidance on the definition of a 340B patient, broad interpretations of the guidance may be more likely in the hospital setting and diversion harder to detect. Third, hospitals dispense a comparatively larger volume of drugs than other entity types—while representing 27 percent of participating covered entities, according to HRSA, DSH hospitals alone represent about 75 percent of all 340B drug purchases. The increasing number of hospitals participating in the 340B program has raised other concerns for some stakeholders we interviewed, such as drug manufacturers, including whether all of these hospitals are in need of a discount drug program. Nearly a third of all hospitals in the U.S. currently participate in the 340B program, and HRSA estimates that more may be eligible. The number of hospitals eligible to participate may increase due to PPACA’s Medicaid expansion, because the number of Medicaid patients served by a hospital affects its DSH adjustment percentage—one factor that determines hospital eligibility. Further, one organization we interviewed questioned whether the DSH adjustment percentage is the best measure to determine hospitals’ eligibility for the 340B program, because of research indicating that it may not be an adequate proxy for the amount of uncompensated care a hospital provides. The DSH hospitals we interviewed reported a wide range of payer mixes, with the percentage of Medicaid and uninsured patients ranging from about 15 percent of total patient volume for one hospital to about 85 percent for another. However, payer mix may not be the only factor to consider when identifying hospitals that provide care to the medically underserved and are part of the health care safety net. There is no established definition of a safety net hospital, and some researchers have argued that the definition should include factors other than payer mix, such as the disproportionate provision of critical services (for example, emergency room or trauma care) that are too expensive or unprofitable for other hospitals to provide. While PPACA’s 340B program integrity provisions address many of the deficiencies in HRSA’s current approach to oversight, the agency has taken few steps to implement these provisions.
PPACA requires HRSA to increase oversight of both covered entities and manufacturers, and outlines specific steps for HRSA to take in accomplishing this goal. (See table 2 for the 340B program integrity provisions included in PPACA.) However, according to officials, the agency does not have adequate funding to implement the integrity provisions. Officials also noted that once funding is secured, it could take several years to develop the systems and regulatory structure necessary to implement them. Independent of the provisions in PPACA, HRSA has also recently developed guidance to further specify the definition of a 340B patient. While the Office of Management and Budget completed its review of this definition in April 2011, as of August 2011, HRSA had not yet released it for stakeholder comment. HRSA also proposed updating this guidance in 2007, but that proposal was never finalized. Even if HRSA implements PPACA’s provisions and updates its definition of a patient, these steps may not be sufficient to address all areas of concern. For example, PPACA specifically requires HRSA to conduct selective audits of manufacturers, but it did not establish the same requirement for audits of covered entities. As such, the effectiveness of HRSA’s oversight of covered entities will, in part, depend on what additional steps the agency takes to ensure program integrity. Similarly, if HRSA implements PPACA’s provision prohibiting manufacturers from discriminating against covered entities in the sale of 340B drugs without adding specificity to the existing nondiscrimination guidance, that guidance may be inadequate to ensure that all providers are able to access drugs equitably, particularly when manufacturers restrict the distribution of drugs at 340B prices. Also, as part of its 2007 proposed guidance on the definition of a patient, HRSA requested stakeholder comment on the elements that should be required in private, nonprofit hospitals’ contracts with state or local governments, as well as the different situations in which hospitals that are not publicly owned or operated should be formally granted government powers. However, HRSA officials told us that they have not issued additional guidance on these issues, and that they are not addressed in the clarifying guidance on the definition of a patient currently awaiting agency approval. The 340B program allows certain providers within the U.S. health care safety net to stretch federal resources to reach more eligible patients and provide more comprehensive services, and we found that the covered entities we interviewed reported using it for these purposes. However, HRSA’s current approach to oversight does not ensure 340B program integrity, and it raises concerns that may be exacerbated by changes within the program. According to HRSA, the agency largely relies on participants’ self-policing to ensure compliance with program requirements, and it has never conducted an audit of covered entities or drug manufacturers. As a result, HRSA may not know when participants are engaging in practices that are not in compliance. Furthermore, we found that HRSA has not always provided covered entities and drug manufacturers with guidance that includes the necessary specificity on how to comply with program requirements. There also is evidence to suggest that participants may be interpreting guidance in ways that are inconsistent with the agency’s intent.
Finally, participants have little incentive to comply with program requirements, because few have faced sanctions for non-compliance. With the program’s expansion, program integrity issues may take on even greater significance unless effective mechanisms to monitor and address program violations, as well as more specific guidance, are put in place. For covered entities, this may be particularly true in settings where there is heightened concern about the opportunities for the diversion of 340B drugs. PPACA outlined a number of provisions that, if implemented, will help improve many of the 340B program integrity issues we identified. For example, PPACA requires HRSA to recertify eligibility for all covered entity types on an annual basis, which would help ensure that entities that lose eligibility for the program do not remain enrolled. Additionally, PPACA requires HRSA to develop a formal dispute resolution process, including procedures for covered entities to obtain information from manufacturers, and to maintain a centralized list of 340B prices—provisions that would help ensure covered entities and manufacturers are better able to identify and resolve suspected violations. PPACA also requires HRSA to institute monetary penalties for covered entities and manufacturers, which gives program participants more incentive to comply with program requirements. Finally, PPACA requires HRSA to conduct more direct oversight of manufacturers, including conducting selective audits to ensure that they are charging covered entities the correct 340B price. However, we identified other program integrity issues that HRSA should also address. For example, the law does not require HRSA to audit covered entities or further specify the agency’s definition of a 340B patient. While HRSA has developed new proposed guidance on this definition, it is uncertain when, or if, the guidance will be finalized. Because the discounts on 340B drugs can be substantial, it is important for HRSA to ensure that covered entities only purchase them for eligible patients, both by issuing more specific guidance and by conducting audits of covered entities to prevent diversion. Additionally, while PPACA included a provision prohibiting manufacturers from discriminating against covered entities in the sale of 340B drugs, HRSA does not plan to make any changes to or further specify its related nondiscrimination guidance. Absent additional oversight by the agency, including more specific guidance, the access challenges covered entities have faced when manufacturers have restricted distribution of IVIG at 340B prices may continue, and similar challenges could arise for other drugs in the future. Also, current HRSA guidance may allow some entities to be eligible for the program that should not be. Hospitals qualify for the 340B program in part based on their DSH adjustment percentage. Even though the PHSA establishes additional eligibility requirements for hospitals that are not publicly owned or operated, these requirements are broad, and HRSA has not issued more specific guidance to implement them. We found that nearly a third of all hospitals in the U.S. are participating in the 340B program, more are currently eligible and not participating, and more may become eligible as Medicaid is expanded through PPACA. As the number of covered entities enrolled in the 340B program increases and more drugs are purchased at 340B prices, there is the potential for unintended consequences, such as cost-shifting to other parts of the health care system.
As such, it is important that HRSA take additional action to ensure that eligibility for the 340B program is appropriately targeted. While HRSA officials reported that the agency does not have the resources to implement the PPACA provisions or otherwise increase oversight of the 340B program, limited resources could be prioritized to address areas of greatest risk to the program. PPACA contained several important program integrity provisions for the 340B program, and additional steps can also ensure appropriate use of the program. Therefore, we recommend that the Secretary of HHS instruct the administrator of HRSA to take the following four actions to strengthen oversight: conduct selective audits of 340B covered entities to deter potential diversion; finalize new, more specific guidance on the definition of a 340B patient; further specify its 340B nondiscrimination guidance for cases in which distribution of drugs is restricted and require reviews of manufacturers’ plans to restrict distribution of drugs at 340B prices; and issue guidance to further specify the criteria that hospitals that are not publicly owned or operated must meet to be eligible for the 340B program. In commenting on a draft of this report, HHS stated that it agreed with our recommendations. HHS also had additional comments on several content areas of the report, and we made changes as appropriate to address these comments. (HHS’ comments are reprinted in appendix III.) Finally, HHS provided technical comments, which we incorporated as appropriate. HHS stated that HRSA would continue to work on 340B program integrity efforts and prioritize these efforts based on available funding. HHS also outlined steps that HRSA plans to take in response to each of our recommendations. While we appreciate HHS’ commitment to improving oversight of the 340B program, we are concerned that the steps are not sufficient to ensure adequate oversight. With regard to our first recommendation that HRSA conduct selective audits of covered entities to deter potential diversion, HHS stated that HRSA will continue working with manufacturers to identify and address potential diversion and implement a plan to better educate covered entities about diversion. However, HHS did not state that HRSA will conduct its own audits of covered entities and we reiterate the importance of the agency doing so as part of its ongoing oversight responsibilities. With regard to our second recommendation that HRSA finalize new, more specific guidance on the definition of a 340B patient, HHS stated that HRSA will review the draft of proposed guidance to update the definition and revise this guidance in light of changes in PPACA. While we agree that it may be important for HRSA to consider the impact of PPACA on the definition, given that PPACA became law more than a year ago, and the potential for broad interpretations of current guidance, we encourage HRSA to complete its review in a timely fashion. 
With regard to our third recommendation, that HRSA further specify its nondiscrimination guidance for cases in which distribution of drugs is restricted and require reviews of manufacturers’ plans to restrict distribution of drugs at 340B prices, HHS stated that HRSA will: implement a plan to specify existing policy regarding 340B nondiscrimination and drug distribution; provide clearer guidance to manufacturers for working with HRSA and develop specific allocation plans where needed; and continue to work with the Department of Justice when fair, voluntary allocation plans are not developed. However, we are concerned that these steps do not require reviews of manufacturers’ plans to restrict distribution of drugs at 340B prices. Without taking this step, HRSA may not know when manufacturers are inequitably distributing drugs to covered entities and non-340B providers. With regard to our fourth recommendation that HRSA issue guidance to further specify the criteria that hospitals that are not publicly owned or operated must meet to be eligible for the 340B program, HHS stated that HRSA will implement a plan to better educate covered entities on existing criteria for hospital participation in the program and initiate a phased approach to recertifying eligibility for all participating covered entities. Here, we are concerned that these steps do not include further specification of eligibility criteria for hospitals that are not publicly owned or operated, because we determined that additional specification of statutory requirements was needed to ensure that the 340B program is appropriately targeted. We are sending copies of this report to the Secretary of HHS and appropriate congressional committees. In addition, the report is available at no charge on the GAO web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Of the 29 covered entities we interviewed, 27 were selected to take into account certain criteria:

● Entity type: We selected five types of covered entities and specifically interviewed 7 federally qualified health centers (FQHC), 5 disproportionate share hospitals (DSH hospitals), 5 hemophilia treatment centers, 5 family planning clinics, and 5 AIDS Drug Assistance Programs (ADAP). (See appendix II for a list of all entities eligible to participate in the program.) We picked these types based on variation in operational structure, variation in services and drugs provided, high levels of 340B participation, experience with the program, and potential difficulty accessing drugs at 340B prices.

● Location: We selected entities in five states: Illinois, Massachusetts, Tennessee, Texas, and Utah. States were selected based on variation in a number of factors, including geography, percent of uninsured individuals, and Medicaid reimbursement policies. We included Massachusetts to gain a better understanding of the potential effect of the Patient Protection and Affordable Care Act (PPACA) health insurance reforms on the 340B program. We used information provided by trade organizations representing covered entities to help select individual covered entities to interview.

The 2 additional DSH hospitals were selected based on concerns raised in stakeholder interviews about how these entities were using the program.
● Drug manufacturers: 6, selected based on market share and on whether they produce drugs with reported challenges related to their distribution at 340B prices.

● Organizations representing manufacturers and other supply chain participants: 4 manufacturer trade organizations, 1 distributor, and 1 pharmacy benefits manager.

● Organizations representing providers, including covered entities and non-340B providers: 9 organizations that represent covered entities, including 6 trade organizations and 3 private companies that provide services and information technology to help covered entities establish and manage their 340B programs; 2 organizations representing non-340B providers, including 1 trade organization and 1 non-340B provider; and 5 organizations that represent both covered entities and non-340B providers, including 3 trade organizations and 2 group purchasing organizations (GPO).

● Government and program administration: 4 interviews with HRSA, the contractors that help administer the 340B program, and the Centers for Medicare & Medicaid Services.

The original appendix presented covered entity types in a table showing, for each type, its eligibility basis, a description, the number of sites enrolled as of July 1, 2011, and the administering agency within the Department of Health and Human Services (HHS); only the DSH hospital site count (3,061) is recoverable here.

Federal grantees (administering agency within HHS: the Health Resources and Services Administration (HRSA)):

● Federally qualified health centers: urban or rural health centers that provide comprehensive community-based primary and preventive care services to medically underserved populations.

● Urban Indian organizations: receive funds under title V of the Indian Health Care Improvement Act (25 U.S.C. §§ 1651 et seq.); provide a variety of health programs to eligible individuals.

● Family planning clinics: receive a grant or contract under section 1001 of the PHSA (42 U.S.C. § 300); provide comprehensive family planning services.

● Sexually transmitted disease clinics: provide screening and treatment for sexually transmitted diseases.

● Tuberculosis clinics: provide treatment for tuberculosis.

● Native Hawaiian health centers: receive funds under the Native Hawaiian Health Care Act of 1988 (42 U.S.C. §§ 11701 et seq.); provide comprehensive health promotion and disease prevention services to Native Hawaiians.

● AIDS Drug Assistance Programs: receive financial assistance under title XXVI of the PHSA (42 U.S.C. §§ 300ff-11 et seq.); serve as a “payer of last resort” to cover the cost of providing HIV-related medications to low-income individuals who are uninsured or underinsured and cannot afford to pay for drugs or who cannot afford their health insurance coverage for drugs.

● Other title XXVI (Ryan White) grantees: provide primary care and support services to individuals with HIV or AIDS.

● Hemophilia treatment centers: receive a grant under section 501(a)(2) of the Social Security Act (42 U.S.C. § 701(a)(2)); provide medical care to individuals with hemophilia.

● Black lung clinics: receive funds under section 427(a) of the Black Lung Benefits Act (30 U.S.C. § 937(a)); provide medical treatment to individuals disabled from pneumoconiosis (black lung) as a result of their employment at U.S. coal mines.

Hospitals:

● Disproportionate share hospitals (3,061 sites; administering agency: the Centers for Medicare & Medicaid Services (CMS)): general acute care hospitals paid under the Medicare inpatient prospective payment system; DSH as defined under section 1886(d)(1)(B) of the Social Security Act (42 U.S.C. § 1395ww(d)(1)(B)) with a DSH adjustment percentage greater than 11.75.

● Children’s hospitals: described under section 1886(d)(1)(B)(iii) of the Social Security Act, with a DSH adjustment percentage greater than 11.75; primarily provide services to individuals under 18 years of age.

● Critical access hospitals: determined under section 1820(c)(2) of the Social Security Act (42 U.S.C. § 1395i-4(c)(2)), with no DSH requirement; located in rural areas, provide 24-hour emergency care services, and have no more than 25 inpatient beds.

● Sole community hospitals: isolated from other hospitals by distance, weather, or travel conditions.

● Rural referral centers: large rural hospitals that provide services for patients from a wide geographic area.

● Freestanding cancer hospitals: not a unit of another hospital; have a primary purpose of treating or conducting research on cancer.

Notes: Not all FQHCs receive federal grants. Providers that meet all of the requirements for the FQHC program but do not receive federal grants are referred to as FQHC look-alikes and are eligible to participate in the 340B program. This category includes: FQHC look-alikes; Consolidated Health Centers; Migrant Health Centers; Health Care for the Homeless; Healthy Schools/Healthy Communities; Health Centers for Residents of Public Housing; and Tribal Organizations created under the Indian Self Determination Act (Pub. L. No. 93-638) and administered by the Indian Health Service. Section 1905(l)(2)(B) of the Social Security Act includes outpatient health programs or facilities operated by an urban Indian organization receiving funds under title V of the Indian Health Care Improvement Act for the provision of primary health services in the definition of FQHCs.

In addition to the contact named above, Gerardine Brennan, Assistant Director; Jennie Apter; Kristin Ekelund; Kelli Jones; Dawn Nelson; Rachel Svoboda; and Jennifer Whitworth made key contributions to this report.
The Health Resources and Services Administration (HRSA), within the Department of Health and Human Services (HHS), oversees the 340B Drug Pricing Program, through which participating drug manufacturers give certain entities within the health care safety net--known as covered entities--access to discounted prices on outpatient drugs. Covered entities include specified federal grantees and hospitals. The number of covered entity sites has nearly doubled in the past 10 years to over 16,500. The Patient Protection and Affordable Care Act (PPACA) mandated that GAO address questions related to the 340B program. GAO examined: (1) the extent to which covered entities generate 340B revenue, factors that affect revenue generation, and how they use the program; (2) how manufacturers' distribution of drugs at 340B prices affects covered entities' or non-340B providers' access to drugs; and (3) HRSA's oversight of the 340B program. GAO reviewed key laws and guidance, analyzed relevant data, and conducted interviews with 61 340B program stakeholders selected to represent a range of perspectives, including HRSA, 29 covered entities, 10 manufacturers and representatives, and 21 others. Selection of stakeholders was judgmental, and thus responses are not generalizable. Thirteen of the 29 covered entities we interviewed reported that they generated 340B program revenue that exceeded drug-related costs, which include the costs of purchasing and dispensing drugs. Of those remaining, 10 did not generate enough revenue to exceed drug-related costs, and 6 did not report enough information for us to determine the extent to which revenue was generated. Several factors affected 340B revenue generation, including drug reimbursement rates. Regardless of the amount of revenue generated, all covered entities reported using the program in ways consistent with its purpose. For example, all covered entities reported that program participation allowed them to maintain services and lower medication costs for patients. Entities generating 340B program revenue that exceeded drug-related costs were also able to serve more patients and to provide additional services. According to the 61 340B program stakeholders we interviewed, manufacturers' distribution of drugs at 340B prices generally did not affect providers' access to drugs. Specifically, 36 stakeholders, including those representing manufacturers, covered entities, and non-340B providers, did not report any effect on covered entities' or non-340B providers' access. The remaining 25, also representing a wide range of perspectives on the 340B program, reported that it affected access primarily in two situations: (1) for intravenous immune globulin (IVIG), a lifesaving drug in inherently limited supply; and (2) when there was a significant drop in the 340B price for a drug resulting in increased 340B demand. In both situations, manufacturers may restrict distribution of drugs at 340B prices because of actual or anticipated shortages. Stakeholders reported that restricted distribution of IVIG resulted in 340B hospitals having to purchase some IVIG at higher, non-340B prices. They also reported that restricted distribution when the 340B price of a drug dropped significantly helped maintain equitable access for all providers.
HRSA's oversight of the 340B program is inadequate to provide reasonable assurance that covered entities and drug manufacturers are in compliance with program requirements--such as entities' transfer of drugs purchased at 340B prices only to eligible patients, and manufacturers' sale of drugs to covered entities at or below the 340B price. HRSA primarily relies on participant self-policing to ensure program compliance. However, its guidance on program requirements often lacks the necessary level of specificity to provide clear direction, making it difficult for participants to self-police and raising concerns that the guidance may be interpreted in ways inconsistent with the agency's intent. Other than relying on self-policing, HRSA engages in few activities to oversee the 340B program. For example, the agency does not periodically confirm eligibility for all covered entity types and has never conducted an audit to determine whether program violations have occurred. Moreover, the 340B program has increasingly been used in settings, such as hospitals, where the risk of improper purchase of 340B drugs is greater, in part because they serve both 340B and non-340B eligible patients. This further heightens concerns about HRSA's current approach to oversight. With the number of hospitals in the 340B program increasing significantly in recent years--from 591 in 2005 to 1,673 in 2011--and nearly a third of all hospitals in the U.S. currently participating, some stakeholders, such as drug manufacturers, have questioned whether all of these hospitals are in need of a discount drug program. To ensure appropriate use of the 340B program, GAO recommends that HRSA take steps to strengthen oversight regarding program participation and compliance with program requirements. HHS agreed with GAO's recommendations.
To enable DOD to close unneeded bases and realign others, Congress enacted BRAC legislation that instituted base closure rounds in 1988, 1991, 1993, and 1995. For the 1991, 1993, and 1995 rounds, special BRAC Commissions were established to recommend specific base realignments and closures to the President, who in turn sent the Commissions' recommendations and his approval to Congress. A special Commission established for the 1988 round made recommendations to the Senate and House Committees on Armed Services. The four commissions generated 499 recommendations—97 major closures and hundreds of smaller base realignments and closures. For the 1988 round, the legislation required DOD to complete its realignment and closure actions by September 30, 1995. For the 1991, 1993, and 1995 rounds, the 1990 BRAC act required DOD to complete all closures and realignments within 6 years from the date the President forwarded the recommended actions to Congress. However, property disposal and environmental cleanup actions may continue beyond the 6-year period. The economic impact on communities near base realignments and closures has been a long-standing source of public anxiety. Because of this concern, DOD included economic impact as one of eight criteria that it used for making BRAC recommendations in the last three rounds. While economic impact did not play as large a role in initial BRAC deliberations as did other criteria and was not a key decision factor, its importance was such that DOD components were required to estimate the economic impact of their recommendations. Generally, BRAC property no longer needed by DOD is offered first to other federal agencies. Any property remaining is then disposed of through a variety of means, initially including transfers to states and local governments for public benefit purposes and thereafter negotiated or public sales. Under public benefit conveyances, local redevelopment authorities can obtain property for such purposes as schools, parks, and airports at little or no cost. In 1993, the BRAC act was amended to allow local redevelopment authorities to obtain BRAC property by sale or lease at or below fair market value, or without cost in rural communities, to promote the economic recovery of areas affected by closures. Later, these provisions were replaced with others that also allowed the transfer of real property at no cost to local redevelopment authorities for job generation purposes or for lease back to the federal government. Consequently, local redevelopment authorities usually first sought to obtain property at no cost since, failing that, property could still be obtained through negotiated sales. Figure 1 shows the general process used to screen real property under BRAC. Many BRAC properties require environmental cleanup. The 1990 BRAC act requires compliance with a provision of the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, in transferring contaminated federal property. Under this provision, DOD has a continuing responsibility for cleanup but may, by way of so-called "early transfers," transfer BRAC property before all cleanups on the property have been completed. Under the early transfer process, either the receiving communities or DOD perform environmental cleanup. In both cases, DOD funds the costs of cleanup. While the loss of jobs for DOD civilians and other adverse effects are inescapable short-term byproducts of base closures, such effects can continue for some time.
However, our prior studies and the studies of others indicate that over time many communities have absorbed the economic losses. Several factors affect the economic recovery of communities near base realignments and closures. Local officials have cited the strong national or regional economy as one explanation for why their communities have avoided economic harm and found new areas for growth. In addition, federal programs are available to assist communities in adjusting to base closures. Economic data related to unemployment rates and average annual real per capita income growth suggest that the majority of communities surrounding closed bases are faring well economically in relation to the U.S. rates and show some improvement since base realignments and closures began with the 1988 BRAC round. In addition, while two communities we recently revisited have progressed in recovering economically, they still face problems. Figure 2 shows several factors that play a role in determining the fate of communities affected by base realignments and closures. Officials from BRAC communities have stressed the importance of having a strong national economy and local industries that could soften the impact of job losses from a base closure. From the end of the 1991 recession until the recent slowdown, the economic performance of the United States was robust. In a January 1998 report, we examined defense-related spending trends in New Mexico and the relationship between those trends and New Mexico's economy. We reported that while defense-related spending had declined in the state, the state's gross product and total per capita income had increased and that this economic growth might have been due to efforts to diversify the economy to counter the loss of defense jobs. Officials also pointed to regional economic trends at the time of a closure, during the transition period, and at present. For example, officials from the communities surrounding Fort Devens, Massachusetts, said that at the time of the closure, the area was suffering from the downsizing and restructuring of the computer industry. Those same communities are now benefiting from the economic growth in the larger Boston metropolitan area. Beeville, Texas, where Chase Field Naval Air Station closed, has a long history of farming and ranching but has recently benefited from an expanding state prison industry. An area's natural resources also can help economic recovery. In Blytheville, Arkansas, for example, where Eaker Air Force Base closed, the steel industry found a foothold in the late 1980s before the announcement of the base closure and has been a growing presence ever since. The Blytheville area is attractive to the steel companies because of its access to the Mississippi River and a major interstate as well as an available labor pool. Officials from communities surrounding closed bases said that publicizing redevelopment goals and efforts for former bases is key to attracting industry and helping the community gain confidence. Leadership and teamwork among participants at the federal, state, and local levels are essential to reaching agreement on key issues such as property transfer, base reuse, and environmental cleanup. To help communities successfully transform closing bases into new opportunities, federal agencies have provided over $1.2 billion in direct financial assistance to areas affected by base closures.
This assistance was in numerous forms—planning assistance to help communities determine how they could best develop the property, training grants to provide the workforce with new skills, and grants to improve the infrastructure on bases. Finally, the redevelopment of base property is widely viewed as a key component of economic recovery for communities experiencing economic dislocation due to jobs lost from a base closure. The closure of a base makes buildings and land available for uses that can generate new economic activity in the local community. Our analysis of selected indicators shows that the economies of many BRAC-affected communities compare favorably to the overall U.S. economy. We used unemployment rates and real per capita income growth rates as broad indicators of the economic health of those communities where base closures occurred during the BRAC rounds. We identified 62 communities surrounding base realignments and closures from all four BRAC rounds, each with estimated government and contractor civilian job losses of 300 or more. Our analysis of calendar year 2000 unemployment rates indicates that the rates for the 62 BRAC-affected communities compare favorably with the U.S. rate. Forty-three (or 69 percent) of the 62 communities affected by the recent base closures had unemployment rates at or below the U.S. rate of 4 percent (see fig. 3). Attachment II compares the 2000 unemployment rate for each of the BRAC-affected locations, grouped by east and west of the Mississippi River for ease of presentation, to the U.S. rate. The unemployment situation is about the same as we reported in 1998. At that time, 42 (or 68 percent) of the 62 communities had unemployment rates at or below the then U.S. rate of 5.1 percent. For example, the 2000 unemployment rate for the Salinas area surrounding the former Fort Ord, California, dropped to 9.7 percent from 10.3 percent in 1997. Similarly, the rate for the communities near the former Naval Station and Shipyard, Charleston, South Carolina, decreased to 3 percent from 4 percent in 1997. Of the BRAC-affected communities we examined whose 2000 unemployment rates were above the U.S. rate, only two—the Merced area surrounding the former Castle Air Force Base, California, and the Blytheville area surrounding the former Eaker Air Force Base, Arkansas—had double-digit unemployment rates: 14.1 percent and 10.1 percent, respectively. The Merced area also had double-digit unemployment when we reported on this issue in December 1998. Local officials told us that these locations have historically had high unemployment rates, partly because of the large seasonal employment associated with local agriculture. In a 1996 RAND National Defense Research Institute report on the effects of military base closures on three local communities, RAND concluded that “while some of the communities did indeed suffer, the effects were not catastrophic (and) not nearly as severe as forecasted.” RAND's analysis showed that the burden of defense cutbacks such as base closures tended to fall more on individuals and companies than on the community. For example, a base with a large civilian employment might displace many workers, but the overall employment rate of the community might remain relatively stable.
Finally, RAND demonstrated that the economies of all types of communities can also be affected by longer term patterns of population and economic growth; the redirection of military retirees' retail and medical expenditures from the base to the local community; and the withdrawal of working spouses from the local labor market, which frees up jobs for other local citizens. In a 2000 Massachusetts Institute of Technology report for the Department of Commerce, the Institute noted that military-base employment losses did not necessarily translate into employment losses in counties where bases were closed. In its analysis of 51 counties containing 52 closed bases, the Institute found that 21 counties (or 41 percent) had 1997 post-closure job growth rates greater than the national average and that in 6 of those counties the job growth was more than twice the national average. In the remaining 30 counties, job growth was lower than the national average, and 7 of those counties had job losses. The Institute concluded that redevelopment of closed bases will take 20 years or more and that time is needed to identify promising companies, persuade them to locate on the closed base, find a suitable site, negotiate an acceptable lease or sale, recruit qualified workers, and find jobs that match worker skills and expectations. As with unemployment rates, our analysis indicates that average annual real per capita income growth rates for the 62 BRAC-affected communities compare favorably with the U.S. average rate. During 1996-99, 33 communities (or 53 percent) had average annual per capita income growth rates that were at or above the U.S. average rate of 3.03 percent (see fig. 4). Another seven communities (or 11 percent) had average annual per capita income growth rates close to the U.S. average rate of 3.03 percent. Attachment III compares the 1996-99 average annual real per capita income growth rate for each of the BRAC-affected locations, grouped by east and west of the Mississippi River for ease of presentation, to the U.S. average rate. During the same period, per capita income for the communities near the former Fort Ord, California, increased 6.4 percent, from $27,620 in 1997 to $29,393. In addition, per capita income for the communities near the former Naval Station and Shipyard, Charleston, South Carolina, increased 9 percent, from $21,092 in 1997 to $22,944. All of the 29 communities below the U.S. average rate nonetheless had positive average annual per capita income growth rates. In an analysis of 51 counties containing 52 closed bases, the Massachusetts Institute of Technology reported that 31 counties (or 61 percent) had per capita income in 1997 that was higher, relative to the national average, than it was at the time of the BRAC closure announcement. However, the counties containing the four closed naval shipyards—Mare Island and Long Beach Naval Shipyards, California; Philadelphia Naval Shipyard, Pennsylvania; and Charleston Naval Shipyard, South Carolina—did not fare well. In addition, 10 of the 20 counties that lost income relative to the national rate were in California, and most of the other counties that lost income were rural, such as Aroostook County, Maine; Clinton County, New York; Bee County, Texas; and Tooele County, Utah. In our 1998 report, we augmented our use of broad economic indicators with visits to selected communities to learn firsthand how they had fared economically after base closures.
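Before turning to those community visits, the growth-rate arithmetic above can be made concrete with a short Python sketch. It reproduces the two calculations involved: the simple percent change in per capita income between two years, using the Fort Ord and Charleston dollar figures cited above, and the average annual (compound) growth rate over a multiyear period, using hypothetical income levels. The function names are ours, chosen for illustration; only the Fort Ord and Charleston figures come from the text.

    # Percent change between two per capita income levels.
    def percent_change(start, end):
        return (end - start) / start * 100

    # Average annual (compound) growth rate over a span of years; this is
    # one standard way to express an "average annual growth rate."
    def average_annual_growth(start, end, years):
        return ((end / start) ** (1.0 / years) - 1) * 100

    # Figures from the text: Fort Ord, $27,620 to $29,393 (about 6.4 percent),
    # and Charleston, $21,092 to $22,944 (about 9 percent).
    print(round(percent_change(27_620, 29_393), 1))   # -> 6.4
    print(round(percent_change(21_092, 22_944), 1))   # -> 8.8, i.e., about 9

    # A hypothetical community whose income rose from $20,000 to $22,000 over
    # the 3-year 1996-99 window averaged about 3.2 percent a year, just above
    # the 3.03 percent U.S. average rate.
    print(round(average_annual_growth(20_000, 22_000, 3), 2))  # -> 3.23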
We reported that, in general, the communities surrounding the six major base closure sites we visited suffered initial economic disruption, including decreased retail sales; declining residential real estate values; and social losses felt in local schools, churches, and organizations. However, we also reported that these initial losses were followed by recovery. We are currently updating this information and plan to visit several of the communities we visited previously, as well as additional communities, to obtain more in-depth information on their economic recovery. We recently revisited communities surrounding two of the major base closures—Beeville, Texas, near the former Chase Field Naval Air Station, and Merced and Atwater, California, near the former Castle Air Force Base—that we reported on in 1998. As attachment IV discusses in more detail, we found that each community has continued its economic recovery from the base closures, but some problems still exist. As of August 20, 2001, DOD reported that it had essentially implemented all of the BRAC Commissions' 451 recommendations. Despite timely completion of actions on the recommendations, transfer of unneeded base property is only partially complete. DOD has decided how to dispose of about 99 percent of the 518,300 acres that the military services and components reported they do not need. DOD data as of June 2001 indicate that 229,800 acres (or 44 percent) will be retained by the federal government, 285,900 acres (or 55 percent) of the unneeded BRAC property will be transferred to nonfederal entities, and the disposition of 2,600 acres (less than 1 percent) has not yet been determined. About 206,800 acres (or 90 percent) of the federally retained property are being transferred to the Departments of the Interior and Justice for uses such as wildlife habitats and detention centers. DOD intends to retain about 14,500 acres (or 6 percent) for, among other things, administrative space for the Defense Finance and Accounting Service. DOD is actually retaining more property than this because, in many cases, during the BRAC process the property of an active military base was turned over to a reserve component without being declared excess. In our 1998 report, we noted that DOD data indicated that over 330,000 acres of BRAC property were being retained for use by the reserve components. While DOD has plans to transfer most of its unneeded property, fewer actual transfers than planned have taken place. In our December 1998 report, we noted that progress in transferring the title of BRAC properties to users had been affected by many factors. These factors included the iterative process of preparing site-specific reuse plans, preparing conveyance documentation, and completing environmental cleanups. As of June 2001, DOD data indicate that title to 212,400 acres (or 41 percent) of the 518,300 acres of unneeded property had been transferred to federal and nonfederal entities. Specifically, title to about 106,600 acres had been transferred to federal agencies and title to about 105,800 acres had been transferred to nonfederal entities. According to DOD officials, the transfer of the remainder of the property to federal agencies and nonfederal entities will be completed by 2007 and 2029, respectively. As discussed previously, the disposition of 2,600 acres has not yet been determined. While awaiting property transfers, communities and others can sometimes begin using base property through leasing.
Of the 305,900 acres for which title has not been transferred, about 48,200 acres (or 16 percent) have been leased. According to community representatives, leasing is a useful interim measure to promote reuse and job creation. As noted earlier, Congress authorized the transfer of property prior to the completion of environmental cleanup, but the authority has been used in only a limited number of instances, and its implementation is still evolving. Program officials believe this approach is a powerful tool to help local communities obtain early ownership and control of property, thereby allowing for earlier reuse than otherwise possible. At the end of fiscal year 2000, DOD had transferred 10 properties at 8 BRAC-affected installations using the early transfer authority. The properties range from 12 acres to about 1,800 acres. In most of the transfers, DOD has continued the cleanup activities, but in some cases the new property owner is cleaning up the property. The advantage to the recipient in performing the cleanup is the ability to integrate cleanup and redevelopment activities, thus saving time and costs and gaining greater control over both activities. While DOD has made progress and has established numerous initiatives to expedite environmental cleanups, many cleanup activities remain. As of September 30, 2000, 99 of 204 BRAC installations requiring cleanup had cleanups under way or completed. DOD estimates that 80 additional installations will have cleanups under way or completed by fiscal year 2003, and the remaining 25 installations will have cleanups under way or completed during fiscal years 2004 through 2015. However, DOD projects that long-term monitoring will be required at some sites well after 2015 to ensure that cleanup actions are effective. Several factors have affected the progress of DOD's environmental cleanup activities. According to DOD officials, changes in the anticipated use of an installation have occasionally created stricter cleanup requirements that have increased the cost and time needed to put remedies in place. For example, a site on Fort Ord, California, which was originally planned to have limited reuse, is now slated to become a residential area, necessitating more extensive environmental and unexploded ordnance inspection and cleanup. DOD also continues to complete investigations and conduct long-term monitoring at contaminated sites, which can reveal additional previously unknown contamination. For example, at a site on McClellan Air Force Base, California, the Air Force discovered traces of plutonium mixed in with radium-contaminated rags and brushes. The intensive procedures needed to deal with plutonium have increased the estimated cost from less than $10 million to $54 million and extended the scheduled completion date to 2034. Of the $22 billion estimated cost for implementing the BRAC program through fiscal year 2001, about $7 billion, or 32 percent, is associated with base closure environmental activities. Furthermore, DOD estimates that $3.4 billion will be required after fiscal year 2001 for environmental activities (see fig. 5). This is a $1 billion increase over the $2.4 billion environmental cost estimate DOD reported in fiscal year 1999. DOD officials attributed this increase primarily to the inclusion of cleanup costs for unexploded ordnance, delays in the program, the refinement of cleanup requirements and DOD's cost estimates, and the use of more stringent cleanup standards due to changes in how closed installations will be used.
As noted in our July 2001 report, DOD has reported that the vast majority of its BRAC environmental cleanup costs would have been incurred whether or not an installation was affected by BRAC. DOD acknowledges, however, that environmental costs under the BRAC process may have been accelerated in the shorter term. Others suggest that in some instances BRAC-related environmental cleanups may be done more stringently than would have been the case had the installation remained open. However, the marginal difference is not easily quantified and depends largely on the final use of the closed installation. The Air Force's base closure environmental activities account for 52 percent of the total estimated costs after fiscal year 2001. About $417 million of the Air Force's estimated costs of about $1.8 billion is for the cleanup of the former McClellan Air Force Base. Navy officials indicated that they were revising the $808 million cost estimate for base closure environmental activities and believe that the estimate could increase by $142 million. Continuing negotiations with federal and state regulators are the major cost driver, as regulators have asked the Navy to apply more stringent cleanup standards than originally planned. For example, during the closure of Dallas Naval Air Station, Texas, state and local regulators asked the Navy to clean former industrial sites to residential levels, which required more extensive cleanup and increased costs. Army officials are also revising their $796 million cost estimate for base closure environmental activities on the basis of better estimates for the restoration of land with unexploded ordnance. They estimate that the removal of unexploded ordnance may account for $308 million of the Army's revised estimate, of which an estimated $254 million is for removing unexploded ordnance at two locations—the former Fort Ord, California, and the former Camp Bonneville, Washington. Still, Army officials said that their cost estimates for base closure environmental activities beyond fiscal year 2001 could change based on the proposed land use. For example, the Army estimates that it will cost about $77 million to remove unexploded ordnance from the former Camp Bonneville so that it can be used as a park. However, officials said that if two-thirds of the land, which is heavily wooded, became a conservation area with institutional controls that limit public access, cleanup costs could be reduced significantly. DOD has implemented a Fast-Track Cleanup Program to speed the recovery of communities affected by the BRAC program. A key element of the cleanup program is the cooperative relationship among state and federal regulators and the installation environmental program manager. This team approach is intended to reduce the time needed to establish and execute cleanup plans. The program also seeks better integration of cleanup efforts with the community's plan for using the properties, and it may also help to contain some environmental cleanup costs. The Congressional Budget Office reported in 1996 that DOD could reduce costs by delaying expensive cleanup projects when contamination posed no imminent threat and cost-effective cleanup technologies were lacking. The Office also stated that, in the long run, new cleanup technologies represented the best hope of addressing environmental problems with available DOD funds. We have also reported that there are various options for reducing these costs.
In 1996, we noted that cleanup costs at closing bases could be reduced by deferring or extending certain cleanup actions, adopting more cost-effective cleanup technologies, and sharing costs with the ultimate user of the property. We also reported that these options might adversely affect programmatic goals, thereby presenting decisionmakers with difficult choices in developing a cost-effective environmental program. - - - - - This concludes my statement. I would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further contacts regarding this statement, please contact Barry W. Holman at (202) 512-8412 or Mark Little at (202) 512-4673. Individuals making key contributions to this statement include Michael Kennedy, James Reifsnyder, Charles Perdue, Robert Poetta, Arnett Sanders, John Lee, Tom Mahalek, and John Buehler.
Military Base Closures: DOD's Updated Net Savings Estimate Remains Substantial (GAO-01-971, July 31, 2001).
Environmental Liabilities: DOD Training Range Cleanup Cost Estimates Are Likely Understated (GAO-01-479, Apr. 11, 2001).
Military Base Closures: Unexpended Funds Raise Questions About Fiscal Year 2001 Funding Needs (GAO/NSIAD-00-170, July 7, 2000).
From Barracks to Business: The M.I.T. Report on Base Redevelopment, Economic Development Administration, Department of Commerce, March 2000.
Military Base Closures: Potential to Offset Fiscal Year 2000 Budget Request (GAO/NSIAD-99-149, July 23, 1999).
Military Bases: Status of Prior Base Realignment and Closure Rounds (GAO/NSIAD-99-36, Dec. 11, 1998).
Military Bases: Review of DOD's 1998 Report on Base Realignment and Closure (GAO/NSIAD-99-17, Nov. 13, 1998).
Review of the Report of the Department of Defense on Base Realignment and Closure, Congressional Budget Office, July 1, 1998.
Audit Report: Cost and Savings for 1993 Defense Base Realignments and Closures, Department of Defense Office of the Inspector General (Report No. 98-130, May 6, 1998).
The Report of the Department of Defense on Base Realignment and Closure, Department of Defense, April 1998.
Defense Infrastructure: Challenges Facing DOD in Implementing Reform Initiatives (GAO/T-NSIAD-98-115, Mar. 18, 1998).
Base Realignment and Closure 1995 Savings Estimates, U.S. Army Audit Agency (Audit Report AA97-225, July 31, 1997).
Military Bases: Lessons Learned From Prior Base Closure Rounds (GAO/NSIAD-97-151, July 25, 1997).
The Effects of Military Base Closures on Local Communities: A Short-Term Perspective, RAND National Defense Research Institute, 1996.
Military Base Closures: Reducing High Costs of Environmental Cleanup Requires Difficult Choices (GAO/NSIAD-96-172, Sept. 5, 1996).
Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified (GAO/NSIAD-96-67, Apr. 8, 1996).
As shown in figure 6, 16 (or 67 percent) of the 24 BRAC-affected locations west of the Mississippi River had unemployment rates less than or equal to the U.S. rate of 4 percent in 2000. The other eight locations had unemployment rates greater than the U.S. rate. As shown in figure 7, 27 (or 71 percent) of the 38 BRAC-affected locations east of the Mississippi River had unemployment rates less than or equal to the U.S. rate of 4 percent in 2000. The other 11 locations had unemployment rates greater than the U.S. rate. As shown in figure 8, 12 (or half) of the 24 BRAC-affected locations west of the Mississippi River had average annual per capita income growth rates that were greater than the U.S.
average growth rate of 3.03 percent during 1996-99. The other 12 locations had rates below the U.S. average rate. As shown in figure 9, 21 (or 55 percent) of the 38 BRAC-affected locations east of the Mississippi River had average annual per capita income growth rates that were greater than or equal to the U.S. average growth rate of 3.03 percent during 1996-99. The other 17 locations had rates below the U.S. average rate. In 1998, we reported that, in general, the communities surrounding the six major base closure sites we visited suffered initial economic disruption, including decreased retail sales; declining residential real estate values; and social losses felt in local schools, churches, and organizations. However, we also reported that this initial period was followed by recovery. We recently revisited communities surrounding two of the major base closures—Beeville, Texas (Chase Field Naval Air Station), and Merced and Atwater, California (Castle Air Force Base)—and found that both have continued their economic recovery from the base closures but still have some problems. Table 1 shows how the closure of Chase Field Naval Air Station in February 1993 affected the surrounding communities and activities, as indicated by local officials during our visits in 1998 and 2001. In March 1998, DOD's Office of Economic Adjustment reported that 1,290 new jobs had been created from the community's reuse of the former naval air station. However, by October 2000, the reported number of jobs created had dropped to 1,169. At the time of our 2001 visit, the former air station had only one tenant, who, under a negotiated 10-year lease agreement, maintains the facility in lieu of paying rent. According to local officials, the most important factor contributing to economic recovery was the decision of the Texas Department of Criminal Justice to locate a prison complex on the former air base. The medium-security prison, completed in 1994, occupies less than a third of the former base and employs about 1,200 people. Without this prison and another prison complex built earlier adjacent to the former base, local officials believe Beeville would not have survived as a community. Table 2 shows how the closure of Castle Air Force Base in September 1995 affected the surrounding communities and activities, as indicated by local officials during our visits in 1998 and 2001. DOD's Office of Economic Adjustment reported an increase of 325 new jobs as a result of the redevelopment of Castle Air Force Base from 1998 to 2000. At the time of our 2001 visit, Cingular Wireless—the largest tenant on the former air base—employed 1,200 people at its call center. However, on July 25, 2001, Cingular announced that it was cutting 400 jobs at its Castle site because the number of calls and the size of the workforce had outgrown the center's space. In addition, 42 other tenants on the former air base employed about 310 individuals.
This testimony reviews the progress of the Department of Defense's (DOD) base realignments and closures (BRAC) in 1988, 1991, 1993, and 1995 and the implementation of the BRAC Commissions' recommendations. Although some communities surrounding closed bases are faring better than others, most are recovering from the initial economic impact of base closures. The short-term impact can be very traumatic for BRAC-affected communities, but the long-term economic recovery of communities depends on several factors, including the strength of the national and regional economies and the successful redevelopment of base property. Key economic indicators show that the majority of communities surrounding closed bases are faring well economically in relation to U.S. unemployment rates and show some improvement since closures began in 1988. Implementation of BRAC recommendations is essentially complete, but title to only 41 percent of unneeded base property has been transferred. As of August 20, 2001, DOD reported that it had essentially implemented all of the BRAC Commissions' 451 recommendations. Although DOD has made progress and established numerous initiatives to expedite cleanup, many cleanup activities remain. Cleaning up environmental contamination on BRAC-affected installations has proven to be costly and challenging for DOD and can delay the transfer of property titles to other users. DOD expects to continue its environmental efforts well beyond fiscal year 2001, the final year of the base closure implementation authority.
The reserve forces are divided into three major categories, one of which is the Ready Reserve. The Ready Reserve, with approximately 1.2 million reservists at the end of fiscal year 2002, is further subdivided into the Selected Reserve, the Individual Ready Reserve, and the Inactive National Guard. The Selected Reserve, with approximately 880,000 members in fiscal year 2002, includes all personnel who are active members of reserve units and who participate in regularly scheduled drills and training. In this report, we refer to these personnel as “drilling unit members.” The Selected Reserve also includes individual mobilization augmentees—individuals who regularly train with active component units. The Individual Ready Reserve principally consists of individuals who have had training and have previously served in the active forces or in the Selected Reserve, and the Inactive National Guard contains individuals who are temporarily unable to participate in regular training with their Guard unit. Together, the Individual Ready Reserve and the Inactive National Guard had about 320,000 members in fiscal year 2002. There are seven reserve components—the Army Reserve, the Army National Guard, the Air Force Reserve, the Air National Guard, the Naval Reserve, the Marine Corps Reserve, and the Coast Guard Reserve. Since the end of the Cold War, there has been a shift in the way reserve forces have been used. Previously, reservists were viewed primarily as an expansion force that would supplement active forces during a major war. Today, reservists not only supplement but also replace active forces in military operations worldwide. In fact, DOD has stated that no significant operation can be conducted without reserve involvement. Figure 1 shows per capita involvement of reservists annually since 1986 and illustrates the spikes in reserve participation in military operations in fiscal years 1991 (Desert Shield and Desert Storm) and 2002 (Noble Eagle and Enduring Freedom), as well as a fairly steady level of involvement between 1996 and 2001. We derived the per capita calculations by dividing the total days of support for military missions by the end strength of the Selected Reserve (this calculation is illustrated in the sketch at the end of this background discussion). However, because of the force structure within the Selected Reserve, only a portion of its members are qualified and available to serve in any particular mission. Even so, the data highlight trends in the average number of support days served by reservists. There have been wide differences in the operational tempos of individual reservists in certain units and occupations. Prior to the current mobilization, personnel in the fields of aviation, special forces, security, intelligence, psychological operations, and civil affairs had been in high demand, experiencing operational tempos that were two to seven times higher than those of the average reservist. Since September 2001, operational tempos have increased significantly for reservists in DOD reserve components due to the partial mobilization in effect to support Operations Iraqi Freedom, Noble Eagle, and Enduring Freedom. For each year from fiscal year 1997 to 2002, the reserves as a whole achieved at least 99 percent of their authorized end strength. In 4 of those 6 years, they met or exceeded their enlistment goals. During this period, enlistment rates fluctuated from component to component. For the fiscal year 1997-2002 period, only the Army National Guard experienced a slight overall increase in attrition.
The attrition data suggest there has not been a consistent relationship between a component's average attrition rate for a given year and the attrition rate for that component's high demand capabilities (which include units and occupations). In other words, attrition rates for high demand capabilities were higher than average in some cases but lower than average in other cases. Shortfalls have been identified in certain specialties, such as health care. DOD uses surveys of reservists and their spouses to obtain information on reservists' income change when they are activated for a military operation and to obtain their perspectives on a number of issues relating to activation, including family support and health care. The most recent survey of reservists was completed in 2000, prior to the terrorist attacks that occurred on September 11, 2001, and the ensuing mobilization. The 2000 survey included questions on various aspects of mobilization and deployment for operations dating back to the 1991 Persian Gulf War. In 2002, the Office of the Assistant Secretary of Defense for Reserve Affairs surveyed spouses of activated reservists. In 2003, DOD fielded a new “status of forces” survey of activated reservists. However, the survey had not been completed at the time we were conducting our work. The Under Secretary of Defense for Personnel and Readiness oversees policies, plans, and programs for military personnel management, military compensation, and personnel support. The Assistant Secretary of Defense for Reserve Affairs is responsible for the overall supervision of issues involving reserve forces. The Assistant Secretary of Defense for Health Affairs has the responsibility to execute DOD's health care mission. The TRICARE Management Activity manages and executes the Defense Health Program Appropriation and supports the uniformed services in implementing TRICARE. Regarding family support programs, the secretaries of the military departments are responsible for, among other things, ensuring that comprehensive family support systems are developed at DOD installations and that those systems are monitored and evaluated for accessibility, effectiveness, and responsiveness to the needs of DOD personnel and their families. We have previously reported on several issues surrounding the increased use of reserve forces. In August 2003, we reported on the efficiency of DOD's process for mobilizing reservists following September 11, 2001. In April 2003, we examined whether the Army was collecting and maintaining information on the health of early-deploying reservists. In March 2003, we testified before the Subcommittee on Total Force, House Committee on Armed Services, on our preliminary observations related to the issues covered in this report, as well as employer support. Also in March 2003, we testified before the Subcommittee on Total Force concerning DOD's oversight of TRICARE's network of civilian providers, and we issued a report on this topic in July 2003. In September 2002, we issued a report in response to a congressional mandate to study the health care benefits of reserve component members and dependents and the effect mobilization may have on these benefits. In June 2002, we noted that maintaining employers' continued support for their reservist employees will be critical if DOD is to retain experienced reservists in these times of longer and more frequent deployments.
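As promised above, the per capita measure used for figure 1 can be illustrated with a minimal Python sketch: total days of support for military missions divided by Selected Reserve end strength. The fiscal year 2002 end strength of approximately 880,000 is from the text; the support-day total below is a hypothetical placeholder, since the report does not give the underlying figures.

    # Per capita support days: total days of support for military missions
    # divided by the end strength of the Selected Reserve.
    def per_capita_support_days(total_support_days, end_strength):
        return total_support_days / end_strength

    fy2002_end_strength = 880_000       # approximate figure from the text
    fy2002_support_days = 35_000_000    # hypothetical placeholder

    print(round(per_capita_support_days(fy2002_support_days,
                                        fy2002_end_strength), 1))
    # -> 39.8 support days per Selected Reserve member (illustrative only)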
DOD lacks sufficient information from the survey data to determine the magnitude of income loss or gain experienced by reservists, the causes of this change, and the effects of income loss on reserve retention. Such data are critical for assessing the full nature and scope of income change problems and for developing cost-effective solutions. Self-reported DOD survey data from past and current military operations indicate that reservists have experienced widely varying degrees of income change when activated. While many reservists have reported lost income during activation, more than half have reported either no change or a gain in income. Current pay policies and protections, as well as emergency aid services, may help mitigate reservists' financial hardship during activation. Additional income protection initiatives for reservists have been proposed. Three of these proposals are (1) an Army initiative to provide special deployment pay to self-employed physicians who fill critical medical wartime positions in the reserves, (2) legislative proposals to authorize differential pay for federal employee reservists, and (3) an Air Force initiative to establish an income insurance program for activated reservists. The 2000 and 2002 DOD surveys provide incomplete information on the magnitude of income change experienced by activated reservists, the causes of income loss, and the effects of income loss on reservists' attitudes toward military life and on retention. Such data are critical for assessing the full nature and scope of income change problems and for developing cost-effective solutions. Based on the 2000 survey data, DOD estimated that the average total income change for all reservists (including losses and gains) was a loss of almost $1,700. However, this figure should be considered with caution because of the estimating methodology that was used and because it is unclear what survey respondents considered as income loss or gain in answering this question. For example, when members reported income loss or gain, they may or may not have included the value of indirect compensation, such as health care benefits, or considered changes in their expenses, such as those for household and car maintenance and for child care. In addition, it is unclear whether survey respondents included paid civilian leave received concurrently with military pay or whether they included differential pay if provided by their employer. Further, reservists were mobilized or deployed for varying lengths of time, which is likely to affect their overall income change. According to DOD's analysis of the survey data, certain groups reported greater losses of income on average. For example, self-employed reservists reported an average income loss of $6,500, physicians/registered nurses reported an average income loss of $9,000, and self-employed physicians/registered nurses reported an average income loss of $25,600. DOD's analysis presents little data on the groups that reported an overall income gain. Two groups that were identified as reporting a gain were clergy and those who worked for a family business without pay. The existing survey data provide incomplete information on the causes of income change. Income change can be attributed to various factors, including a difference between civilian and military pay, a change in spousal income, continuing losses from a business or practice, a different job being performed, or some combination of these.
The 2002 spouse survey estimates showed that about 60 percent of spouses had an increase in the military member's earnings, 10 percent had an increase in their own earnings due to working more hours or taking a second job, 31 percent had a reduction in the military member's earnings, 19 percent had a reduction in their own earnings because they were unable to work as much, 6 percent had other increases in income, and 15 percent had other reductions in income. In addition to these factors, military households may also experience a change in expenses during the activation period. The 2000 survey estimates showed that about 22 percent of drilling unit members had an increase in child-care expenses, 26 percent had an increase in household maintenance and car repair expenses, and 63 percent had an increase in telephone expenses. However, neither survey provides complete information on the extent to which these individual factors contribute to overall income change. Although reservists have reported that income loss causes problems for them, the effects of these problems are not clear. When asked to rank income loss among other problems they had experienced during mobilization or deployment, about 41 percent of drilling unit members ranked it as one of their most serious problems. But the survey data are inconclusive concerning the effects of income loss problems on servicemembers' attitudes toward military life or on retention. Our prior work has shown that retention decisions are highly personal in nature and that many factors may affect the decision of a servicemember to stay in the military or leave. A 1998 RAND study conducted for DOD found that income loss during the 1991 Persian Gulf War, while widespread among reservists, did not have a measurable effect on the retention of enlisted reservists. The study was cautiously optimistic that mobilizing the reserves under similar circumstances in the future would not have adverse effects on enlisted recruiting and retention. However, the effects of future mobilizations can depend on the mission, the length of time reservists are deployed, the frequency of deployment, the degree of support from employers and family members, and other factors. Office of the Secretary of Defense (OSD) officials told us it was too early to know how the current mobilization would affect retention or what factors would be driving reservists' retention decisions. The 2000 DOD survey showed that an estimated 41 percent of reservists who were drilling unit members in the Selected Reserve lost family income when they were mobilized or deployed for a military operation, 30 percent had no change in income, and 29 percent had an increase in income. Table 1 shows the distribution of income change reported by drilling unit members in the 2000 survey. Our analysis of the 2000 DOD survey estimates showed that differences in total family income change were attributable to differences in civilian occupation. For example, a higher percentage of self-employed reservists lost income (55 percent) compared with drilling unit members overall (41 percent). About 10 percent of self-employed drilling unit members had an income loss of $25,000 or more, compared with about 3 percent of drilling unit members overall. The percentage of federal employee reservists who lost income did not differ statistically from the overall average for drilling unit members. Of federal employee reservists, about 39 percent had an income loss, and 62 percent had no change or a gain in income.
Of reservists in selected civilian career fields, a higher percentage of health care professionals had income loss compared with reservists in other career fields, and about 38 percent of health care professionals had an income loss of $25,000 or more. Differences in income change were also evident across reserve components and pay grades. For example, a higher percentage of members of the Marine Corps Reserve and the Naval Reserve had income loss compared with members of the Army National Guard. The 2002 DOD survey of spouses of activated reservists found that an estimated 58 percent had an increase in monthly family income, 30 percent had a loss in monthly income, and 12 percent experienced no change in monthly income (see table 2). Current pay policies and protections, as well as emergency aid services, may help mitigate reservists' financial hardship during activation. For example, basic military compensation has increased in recent years. In addition, the Soldiers' and Sailors' Civil Relief Act provides numerous financial protections to reservists. (See app. III for more information on existing pay policies and protections.) Additional income protection measures for reservists have been proposed. Three proposals are (1) an Army initiative to provide special deployment pay to self-employed physicians who fill critical medical wartime positions in the reserves, (2) legislative proposals to authorize differential pay for federal employee reservists, and (3) an Air Force initiative to establish income insurance for activated reservists. The Army, through DOD's Unified Legislation and Budgeting process, has proposed a special deployment pay to limit income loss and improve retention of certain Army Reserve Medical Corps physicians. The pay would be targeted at reservists called to active duty who (1) are self-employed, (2) serve as officers in the Army Reserve Medical Corps in critical wartime medical specialties, and (3) deploy involuntarily beyond the established rotation. The special deployment pay would be available during contingencies and funded through a supplemental appropriation. Under this proposal, an eligible reservist would receive an additional monthly pay that would vary by specialty, level of training, and years of active duty service as a Medical Corps officer. The monthly pay would be limited to no more than twice the special pay currently earned by an eligible individual. The Army estimates the mean cost at $6,000 per month per eligible professional. The Army estimates that had this pay policy been in place in May 2003, Army Reserve physicians deployed beyond the established rotation period (90 days) would have received a total of $630,000 in special deployment pay for that month. According to the Army, this special pay is needed because of difficulties in retaining and replacing fully trained physicians in the Army Reserve Medical Corps to meet its wartime needs. These retention difficulties are due, in part, to reservists' concerns about financial loss during deployment. According to the Army, it has been unsuccessful in recruiting and retaining enough fully trained physicians to meet authorized personnel levels in the Selected Reserve and has had to rely on transfers from the Individual Ready Reserve to reconstitute its Selected Reserve strength.
The Army attributes retention challenges within the reserves to a decrease in the number of active duty physicians transferring to the reserve component, attrition due to an aging force and professionals reaching retirement eligibility, and the inability of some medical professionals to tolerate the income loss resulting from frequent or lengthy activations. Every 2 years, the Assistant Secretary of Defense for Health Affairs publishes a list of critical officer skills needed to meet Ready Reserve medical shortages and for which the services could offer retention and recruiting incentives. During fiscal years 2002 and 2003, the Army Reserve was projected to have critical shortages—that is, projected to fill less than 80 percent of authorized positions during the next 24 months—in 18 wartime health care specialties, such as general surgery, thoracic surgery, and preventive medicine. For example, as of January 2003, the Army Reserve had filled 78 percent of authorized general surgeon positions, 62 percent of thoracic surgeon positions, and 41 percent of preventive medicine positions. A 1996 survey conducted for the Chief, Army Reserve, found that 54 percent of Army Reserve physicians cited the financial impact of mobilization as a primary reason that they did not intend to remain in the reserves until retirement. The survey showed that catastrophic financial loss associated with long-term deployments was the primary factor in their decisions to leave the Army Reserve. Furthermore, over three-quarters of all Army Reserve physicians surveyed in 1996 and 2001 required mobilization periods of 90 days or less to avoid seriously affecting their medical practices. Fifty-nine percent of respondents to the 2001 survey preferred a maximum deployment length of 60 days or less. However, these respondents indicated that a special deployment pay would allow them to deploy for longer periods of time and would increase the likelihood that they would remain in the Army Reserve. The amount of special pay that respondents would need varied by medical specialty, with the majority indicating a need for less than $10,000 a month to maintain their practices while deployed. To increase retention among medical professionals concerned about the financial impact of lengthy mobilizations on their practices, the Army implemented the Presidential Reserve Call-Up 90-Day Rotation Pilot Program in 1999. The 3-year pilot program limited deployments of physicians, dentists, and nurse anesthetists to 90 days in the area of operations. A 2001 survey of Army Reserve medical personnel found that intent to remain in the Army Reserve increased among self-employed medical professionals who were aware of the 90-day rotation pilot program. By 2001, the percentage of Army Reserve medical professionals who indicated that they did not intend to remain until retirement had dropped slightly. Those aware of the 90-day cap who indicated that they would leave because of the financial impact of mobilization decreased by 23 percent from 1996 to 2001. Moreover, those indicating that they would leave because of concerns about future mobilizations decreased by 20 percent over the same period. Respondents to the 2001 survey indicated that a special deployment pay would allow them to deploy for longer periods of time. For example, of respondents whose optimal deployment length was originally 31 to 60 days, 76 percent indicated that they could increase their deployment time to up to 90 days with a special deployment pay.
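The cost and cap mechanics of the proposed special deployment pay can be sketched briefly in Python. In the hedged example below, the cap rule (no more than twice the special pay currently earned) and the estimated mean of $6,000 per month per eligible professional come from the text; the head count of 105 is implied by the May 2003 example ($630,000 divided by $6,000) rather than stated directly, and the function names are ours, chosen for illustration.

    # Cap rule from the proposal: the special deployment pay may not exceed
    # twice the special pay the reservist currently earns.
    def capped_deployment_pay(proposed_monthly_pay, current_special_pay):
        return min(proposed_monthly_pay, 2 * current_special_pay)

    # Aggregate monthly cost at the Army's estimated mean of $6,000 per
    # eligible professional; 105 is implied by $630,000 / $6,000 for
    # May 2003, not a figure stated in the report.
    mean_monthly_pay = 6_000
    eligible_professionals = 105
    print(mean_monthly_pay * eligible_professionals)   # -> 630000

    # Example of the cap: a proposed $9,000 monthly payment to a reservist
    # currently earning $4,000 in special pay would be limited to $8,000.
    print(capped_deployment_pay(9_000, 4_000))         # -> 8000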
Federal employee reservists called to active military duty do not receive compensation from their civilian employing agency other than paid military leave and annual leave. Under various legislative proposals introduced in the current Congress, federal agencies that employ reservists called to active duty would be required to pay, from appropriated funds, the difference, if any, between the employee's civilian pay and military pay. DOD's 2000 survey estimates indicated that 9 percent of drilling unit members were federal employees in 1999. Proponents of differential pay for federal employee reservists state that providing this pay (1) would recognize the demands and burdens placed on reservists and their families, (2) would help federal employee reservists maintain their standard of living, and (3) would set an example for other employers of reservists. The Office of Personnel Management has opposed similar legislation in the past on the basis of equity and cost issues. In addition, as noted earlier, available data indicate that federal employee reservists are not suffering income loss to a greater extent than other reservists, such as certain health care professionals. Federal law provides many rights and benefits for federal employees called to active military duty. In December 2001, federal agencies were granted discretionary authority to pay both the employee and government shares of the Federal Employees Health Benefits Program premium for any or all of an 18-month period when an employee is called to active duty in support of a contingency operation. As of March 2003, about 64 percent of federal agencies reported paying the entire premium when an employee is called to active duty. At agencies that have not used this discretionary authority, employees may continue their coverage by paying their share of the premiums for the first 12 months; for the next 6 months, they pay both their share and the government's share of the premiums, plus a 2 percent administrative processing fee. Other benefits for activated federal employee reservists include the following: continuation of life insurance for up to 12 months at no cost; continued accrual of military leave, which may be carried over to the following fiscal year or used while activated; and retroactive retirement credits upon return to their civilian positions. In an August 2002 memorandum, the Office of Personnel Management cited “equity issues” in its opposition to differential pay for federal employee reservists. While the Office of Personnel Management did not elaborate, differential pay could create inequities in pay between federal employee reservists and their active duty counterparts who are serving in the same positions and pay grades. Two servicemembers performing the same military job could receive different amounts of compensation simply because one is a reservist with a full-time job in the federal civilian sector. In addition, there may not be a correlation between a reservist's civilian and military pay grades. A federal employee's civilian salary is based on work performed at a certain pay grade and may require different skills and knowledge than the employee's military job. Providing differential pay would, in effect, pay the reservist for a job other than the one being performed. The Office of Personnel Management also stated that the cost of providing differential pay to activated federal employee reservists on an indefinite basis would be significant and that data are lacking to make an accurate cost projection.
The Office of Personnel Management further noted that because federal agencies would fund the cost of differential pay, agencies with greater numbers of activated reservists would have higher costs, reducing the amount of funds available for other program operations. The Congressional Budget Office developed a cost estimate for one of the legislative proposals. It estimated that this proposal would cost $201 million for the fiscal year 2003-08 period, which includes retroactive payments for federal employees called to active duty since September 11, 2001. A factor that complicates calculation of the total cost of differential pay is DOD's lack of complete information about reservists' employment. Until recently, DOD did not require reservists to provide information to DOD about their civilian employers. In response to our recommendation that DOD collect complete data about reservists' employment, DOD and the services implemented the Civilian Employment Information Program in March 2003. Under this program, Selected Reserve members are required to provide their employment status, employer's name, and civilian job title, among other information. When fully established, the program will allow DOD to consider employment-related factors during premobilization planning and assist the department in accomplishing its employer outreach efforts. The data could also help to identify the number of federal employees who could be called to active duty and to develop a total cost estimate if a pay differential were offered to them. While civilian employers are not required under the Uniformed Services Employment and Reemployment Rights Act to provide differential pay to activated reservists, we have found that many employers do so. As part of our earlier work on employer support for the Guard and Reserve, we contacted 359 employers of reservists in high tempo units between November 2001 and March 2002 about their pay practices for activated employees. Of the 183 employers who completed and returned the survey, about 40 percent indicated that they provide either full pay, differential pay, or a combination of both to activated reservists. For this report, we also surveyed officials from 22 states about their compensation policies for state employees called to federal active duty and found that most offer some type of financial assistance to their activated employees. Nineteen of the 22 states offer financial assistance, such as pay differentials, to employees who are on military leave without pay and can document a loss of income. (App. IV provides information on income assistance, military leave, health benefits, and other benefits offered by the 22 states to their employees who are called to active duty.) DOD's 2002 spouse survey estimates showed that, for 22 percent of spouses, the reserve member's civilian employer continued to pay the member's salary in full or in part. From 1996 to 1997, DOD offered the Ready Reserve Mobilization Income Insurance Program to reservists as a way to protect their civilian income when called to active duty. The program was canceled after it failed financially. Through DOD's Unified Legislation and Budgeting process, the Air Force has proposed that DOD establish a somewhat similar income insurance program that addresses some of the problems associated with the original program but not others. The original DOD program was initiated after concerns were raised following the 1991 Persian Gulf War that income loss would adversely affect retention of reservists.
According to a 1991 DOD survey of reservists activated during the Gulf War, economic loss was widespread across all pay grades and military occupations. In response to congressional direction, DOD in 1996 established the Ready Reserve Mobilization Income Insurance Program, an optional, self-funded income insurance program for members of the Ready Reserve ordered involuntarily to active duty for more than 30 days. Reservists who elected to enroll could obtain monthly coverage ranging from $500 to $5,000 for up to 12 months within an 18-month period. Far fewer reservists than DOD expected enrolled in the program. Many of those who enrolled were activated for duty in Bosnia and, thus, entitled to almost immediate benefits from the program. The program was terminated in 1997 after going bankrupt. We reported in 1997 that private sector insurers were not interested in underwriting a reserve mobilization income insurance program due to concerns about actuarial soundness and the unpredictability of the frequency, duration, and size of future call-ups. Certain coverage features violated many of the principles private sector insurers usually require to protect themselves from adverse selection. These features included voluntary coverage and full self-funding by those insured, the absence of rates that differentiated between participants based on their likelihood of mobilization, the ability to choose coverage that could result in full replacement of lost income rather than requiring those insured to bear some of the loss, and the ability to obtain immediate coverage shortly before an insured event occurs. DOD officials told us that private sector insurers remain unsupportive of a new reserve mobilization income insurance program and that the amount of federal financial commitment required for the program is prohibitive. Thus, DOD has no plans to implement a new mobilization insurance program. However, the Air Force has proposed that DOD establish a self-funded income insurance program for reserve component members ordered involuntarily to active duty for more than 30 days, or in support of forces activated during a war declared by Congress or a period of national emergency. The Air Force proposal attempts to address adverse selection, low participation rates, and funding concerns that contributed to the failure of the Ready Reserve Mobilization Income Insurance Program. For instance, to address adverse selection and low participation rates, all drilling unit members and individual mobilization augmentees would be automatically enrolled in the program for $1,000 of monthly coverage with the option to opt out. Individual Ready Reserve members would have the option to enroll. To further mitigate adverse selection and funding concerns, payments would not be made during the first 6 months of enrollment in the program, regardless of mobilization or recall status. This delay would allow funds to accrue for future payouts. Furthermore, DOD would be able to suspend the annual enrollment open season during national emergencies and periods of war that are declared by Congress. Mandatory waiting periods for coverage to become effective would help counter the adverse selection that resulted when reservists with knowledge of their imminent mobilization enrolled in the Ready Reserve Mobilization Income Insurance Program. However, instituting waiting periods and requiring mandatory participation still would not overcome the financial liability associated with large mobilizations.
Even infrequent mobilizations could produce a large number of claims. As a result, funding for the program could be exhausted quickly. In 1998, the Congressional Research Service estimated that if every Selected Reservist were enrolled for coverage of $1,000 per month and paid premiums of $10 per month, the fund would accumulate $9 million in income each month and $702 million over 5 years, assuming that premiums were invested at a 10 percent compound interest rate. A mobilization of 250,000 reservists would create a monthly liability of $250 million, making the fund insolvent by the fourth month of mobilization. The Air Force proposal does not completely address some of the problems experienced with the prior program, including adverse selection, low participation rates, proof of loss of income, and funding concerns. As currently structured, the Air Force's proposed income insurance program would not have graduated premiums that differentiate between participants based on their likelihood of mobilization. However, participants would be able to purchase additional coverage or opt out of the program depending on their perceived risk of activation. Similar to the Ready Reserve Mobilization Income Insurance Program, the Air Force's proposed program is designed to be financed entirely by premiums paid by individual members. DOD would need to assume responsibility for any unfunded liability that might result from a larger-than-expected mobilization. As a result, the Secretary of Defense would need to submit a supplemental appropriation request. In addition, the Air Force's proposed income insurance program does not require proof of loss of income. As designed, the program would pay benefits based on the amount of coverage chosen by the reservist, regardless of actual losses incurred. Premium rates would be set for a specified amount of insurance coverage. There is no provision to prevent reservists from subscribing to amounts of coverage significantly greater than their actual loss of income. To minimize the program's financial liability, reservists could be required to document income loss when submitting claims. However, verifying losses from self-owned businesses, lost commissions or bonuses, or additional expenses could be difficult and delay timely payment of benefits. Even if these design criteria were addressed, designing a financially sound program may not be possible. There is no reliable way to estimate the duration, number, and timing of future mobilizations or the number and specialties of reservists who would be called up. DOD's increased reliance on the reserve components in a changing and unpredictable world situation makes projections of future call-ups exceedingly difficult. To be financially sound, an insurance program, at a minimum, should have a large eligible population of whom a large proportion are insured, and the proportion of those insured who file claims should be reasonably stable over time. In addition, for the program to be affordable, the majority of those insured must not, in any period, incur the losses that they insure against. Furthermore, it is unclear whether reservists want or need an income insurance program. Although the 2000 DOD survey indicated that an estimated 41 percent of drilling unit members had losses in family income when mobilized or deployed, it is unknown whether reservists would be willing to participate in a new income insurance program.
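The Congressional Research Service arithmetic cited above can be reproduced with a short calculation. The sketch below is a rough reconstruction under stated assumptions (900,000 enrollees, implied by $9 million per month in premiums; monthly compounding; premiums paid at the start of each month); the compounding convention CRS actually used is not stated in this report.

    # Rough reconstruction of the 1998 CRS solvency arithmetic.
    ENROLLEES = 900_000        # implied by $9 million/month in premiums
    MONTHLY_PREMIUM = 10       # dollars per enrollee
    MONTHLY_COVERAGE = 1_000   # dollars per mobilized enrollee
    monthly_rate = 0.10 / 12   # 10 percent annual interest, compounded monthly

    balance = 0.0
    for _ in range(60):        # accumulate premiums for 5 years
        balance = (balance + ENROLLEES * MONTHLY_PREMIUM) * (1 + monthly_rate)
    print(f"Fund after 5 years: ${balance / 1e6:.0f} million")
    # Prints about $703 million, close to the $702 million CRS reported.

    # A mobilization of 250,000 enrollees then drains $250 million per month
    # against $9 million per month of continuing premium income.
    month = 0
    while balance > 0:
        month += 1
        balance += ENROLLEES * MONTHLY_PREMIUM - 250_000 * MONTHLY_COVERAGE
    print(f"Insolvent in month {month} of the mobilization")
    # Prints month 3 under these assumptions; CRS reported the fourth month,
    # a difference attributable to timing and compounding conventions.

Either way, the fund fails within the first few months of a large mobilization.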
High premiums and a mandatory waiting period before becoming eligible for payouts, for example, could discourage participation. A survey conducted before the Ready Reserve Mobilization Income Insurance Program was implemented showed that about 70 percent of enlisted members and 55 percent of officers indicated interest in participating in such a program. The DOD Office of the Actuary estimated that about 40 percent of reservists would participate. However, only about 3 percent of Selected Reserve members actually enrolled. The enrollment pattern indicated that reservists in certain military specialties had a greater need or demand for income protection. Of the approximately 5,500 military specialties in the reserve components, about 1,930 (35 percent) had some reservists enrolled in the program, including 420 military specialties that had enrollment levels of 10 percent or more. Although these 420 military specialties accounted for less than 8 percent of the total military specialties within the reserve components, they made up over 25 percent of the total reservists enrolled in the program. Of the 420 specialties with enrollment levels of 10 percent or more, 250 were in aviation, legal, and medical fields. Although DOD has placed greater emphasis on family readiness, many reserve families indicate they do not feel prepared for call-ups. In addition, although reservists and their families are eligible for military family support services, many reservists appear not to be aware of these services, and most spouses of activated reservists have not used them. DOD officials have acknowledged they face challenges in providing family support outreach to reservists and have taken steps to improve outreach. Personal financial management, one of DOD's core family support programs, illustrates the continuing challenges DOD faces in providing outreach to reservists. DOD has not assessed the financial well-being of reserve families, nor has it assessed the impact of reservists' financial problems on mission readiness. DOD has noted a need to improve reservists' and their spouses' awareness of and access to personal financial management programs, but it has not tailored its programs to reservists by developing plans that specify how these needs will be met. DOD has recognized the importance of family readiness and family support for its servicemembers, including reservists. Under a 1994 DOD policy, the military services must "ensure National Guard and Reserve members and their families are prepared and adequately served by their services' family care systems and organizations for the contingencies and stresses incident to military service." According to DOD, families of reservists who use family support services and who are provided information from the military cope better during activations. Furthermore, military members who are preoccupied with family issues during deployments may not perform well on the job, which, in turn, may negatively affect the mission. According to DOD's 2000 survey estimates, reservists who had been activated stated that among the most serious problems they experienced were burdens placed on their spouse and problems created for their children. More than half of all reservists are married, and about half have dependents. As of September 2001, there were about 960,000 family members of Selected Reserve members, including spouses, children, and adult dependents.
Despite this recognition of the importance of family readiness and family support, many reserve families feel they are not prepared when the member is notified for active duty. According to DOD officials, Operations Noble Eagle and Enduring Freedom highlighted that not all reserve families were prepared. Since many families never thought their military member would be mobilized, they had not become involved in their family readiness networks. Results from the 2000 DOD survey also showed a substantial number of reservists did not anticipate call-up—about 35 percent of drilling unit members thought it was unlikely or very unlikely that they would be mobilized or deployed in the next 5 years. Furthermore, about one-fourth of drilling unit members said their dependent care arrangements were not realistically workable for deployments lasting longer than 30 days. DOD's 2002 spouse survey showed that an estimated 33 percent of spouses felt they were unprepared or very unprepared when they first learned of the member's order to active duty, while 37 percent felt they were very well prepared or well prepared and 30 percent felt they were neither prepared nor unprepared. The survey data indicated that less than half of spouses were involved in family readiness groups, attended readiness briefings, received preactivation materials, or had a military point of contact to help them deal with emergency issues that might arise. Our analysis showed that some of these factors appeared to be related to whether spouses felt prepared or unprepared when the member was notified for active duty, although involvement in family readiness groups and receiving preactivation materials upon the member's notification to active duty did not appear to be significant factors. (See app. V.) Compared with unprepared spouses, a higher percentage of prepared spouses had a longer period of notice before the member was activated. As might be expected, a higher percentage of prepared spouses than unprepared spouses were coping well or very well during the activation. An estimated 84 percent of prepared spouses were coping well or very well, 3 percent were coping poorly or very poorly, and 13 percent were coping neither poorly nor well. Of unprepared spouses, 41 percent were coping well or very well, 31 percent were coping poorly or very poorly, and 28 percent were coping neither poorly nor well. Although activated reservists and their family members are eligible for the same family support services as their active duty counterparts, the DOD 2000 survey estimates showed that more than half of all reservists either believed that family support services were not available to them or did not know whether these services were available (see table 3). DOD has found that the degree to which reservists are aware of family support programs and benefits varies according to component, unit programs, command emphasis, reserve status, and willingness of the individual member to receive or seek out information. Among the key challenges in providing family support are the long distances that many reservists live from their home unit and military installations, the difficulty in persuading reservists to share information with their families, the unwillingness of some reservists and their families to take the responsibility to access available information, conflicting priorities during drill weekends that limit the time spent on family support, and a heavy reliance on volunteers to act as liaisons between families and units.
Spouses of activated reservists have not made extensive use of military family support programs. DOD's 2002 spouse survey indicated that most spouses did not use family programs during activation. When asked to rate the helpfulness of various military support services, an estimated 94 percent of spouses said they had not used family programs. In response to another survey question concerning the difficulty they experienced accessing military services, 87 percent said they had not used family programs. It is unclear from the survey data why spouses did not use family programs. About 1 percent of spouses rated family programs as their most important military support service. DOD has recognized the need for improved outreach. For example, the department has published benefit guides for reservists and family members and has enhanced the information posted on its Web sites. DOD published a "Guide to Reserve Family Member Benefits" that informs family members about military benefits and entitlements and a family readiness "tool kit" to enhance communication about predeployment and mobilization information among commanders, servicemembers, family members, and family program managers. Each reserve component has established family program representatives to provide information and referral services, with volunteers at the unit level providing additional assistance. The U.S. Marine Corps began offering an employee assistance program in December 2002 to improve access to family support services for Marine Corps servicemembers and their families who reside far from installations. Through this program, servicemembers and their families can obtain information and referrals on a number of family issues, including parenting, preparing for and returning from deployment, basic tax planning, legal issues, and stress. The National Guard has established family assistance centers across the United States to act as an entry point for services and assistance that a family member may need during the current mobilization. As of May 2003, over 400 family assistance centers had been established. Personal financial management, one of DOD's core family support programs, illustrates the continuing challenges DOD faces in providing outreach to reservists. These challenges include improving access to and awareness of personal financial management programs for reservists and their family members. Under DOD policy, military personnel bear primary responsibility for the welfare of their families, but the commitment demanded by military service requires that they be provided a comprehensive family support system, to include financial planning assistance. Servicemembers receive financial management training during their basic training and, in some cases, during advanced training. In addition, personal financial management is one of the core services offered at the military services' family support centers. Personal financial management consists of programs conducted by counselors who provide personal and family financial training, counseling, and assistance. DOD studies have identified problems with personal financial management in the active duty force, particularly among junior enlisted members.
A 2002 study found that (1) 20 percent of the junior enlisted force in the active component has financial problems; (2) these personnel have substantially more financial problems than does the comparable civilian population; and (3) financial problems are not related to family income, which suggests that financial problems are shaped by spending patterns and management skills rather than by the level of income. According to this study, "unit leaders consistently complained that much of their time was spent dealing directly with financially overextended members. These problems had a corrosive effect on the unit because they affected work performance through added stress on members as well as through absences to deal with creditors or get credit counseling." A 2000 Navy study found that 57 percent of Navy leaders cited financial concerns as the main servicemember issue with which they dealt most often. Further, in response to a House Committee on Armed Services requirement in the Fiscal Year 2002 National Defense Authorization Act, the Navy identified $250 million in productivity and salary losses due to poor personal financial management. In 2002, as part of its human capital strategic plan, DOD identified a need to improve the financial literacy and responsibility of servicemembers, including reservists. The plan states that mission readiness and quality of life are dependent upon servicemembers' using their financial resources responsibly and that the military services must make a commitment to educate servicemembers and their families and encourage them to use good financial sense. Financial literacy training and counseling is one of the pillars that support financial well-being. However, DOD has not developed plans to address these needs. DOD is reviewing a draft uniform personal financial management policy. Currently, DOD and service regulations address aspects of personal financial management. The draft policy seeks to establish a uniform approach to educating and training all servicemembers, including reservists. Regarding the reserves specifically, the draft policy would require the military departments to provide a financial planning package and instructional information to reservists as part of their mobilization training. In addition, the draft policy outlines metrics to track financial well-being, such as the number of delinquent government credit cards, the number of individuals who have had their wages garnished, the self-reported financial condition of military personnel and their families, and the number of administrative Uniform Code of Military Justice actions taken against military personnel for financial indebtedness and irresponsibility. In addition to drafting a personal financial management policy, DOD has taken steps to improve personal financial management programs. In May 2003, DOD formally launched a "financial readiness campaign" to address servicemembers' poor financial habits and to increase financial management awareness, savings, and protection against predatory practices. It has also entered into a number of partnerships with nonprofit organizations and government agencies that have agreed to support counselors who offer financial assistance programs to servicemembers. The services have also made improvements.
For example, the Navy has increased the number of mandatory hours of personal financial management training and uses mobile financial management teams to train financial specialists, including in geographically remote regions where there are no financial educators to provide training. The services also provide financial planning information on their Web sites. As shown in table 3, the 2000 DOD survey showed that an estimated 61 percent of drilling unit members did not know whether financial counseling and management education services were available, and 16 percent did not think these services were available. DOD's 2002 spouse survey showed that about 76 percent of spouses did not use the military's financial information and counseling services, although it is unclear why they did not. Although DOD has identified challenges in the service personal financial management programs, it has not developed plans to provide outreach to reservists and their spouses. A DOD official from the Office of the Deputy Under Secretary of Defense for Military Community and Family Policy said that little attention has been paid to extending personal financial management programs to the reserve population. In a 2002 report to Congress, DOD stated, "The services should improve access to personal financial management information by Reserve forces." The DOD report also stated that most personal financial management training "does not adequately provide support to spouses." The Army noted that "increasing spouse participation is not easy and requires significant marketing and leadership support." The Air Force and the Marine Corps specifically identified the lack of spousal outreach as a gap in their programs. The services also recognize a need to improve marketing of financial management programs to reservists and their spouses. Two services—the Army and the Air Force—cited lack of resources, including dedicated personal financial management personnel, as a challenge to increasing access to and awareness of personal financial management programs. In addition, while DOD has assessed the financial well-being of the active duty force, it has not conducted such assessments of reservists. Our review of DOD survey data showed that reservists reported having many of the same financial problems as their active duty counterparts. For instance, about 20 percent of reservists and 19 percent of active duty personnel characterized their family's financial condition as "in over your head" or "tough to make ends meet but keeping your head above water." However, a higher percentage of reservists reported having such financial difficulties as bouncing checks, receiving a letter of indebtedness, and falling behind in paying rent or mortgage than did their active duty counterparts. For example, 12 percent of reservists fell behind in paying rent or mortgage, compared with 3 percent of active duty members. In addition, while DOD has found a link between financial problems and readiness in the active component, it has not assessed the impact of reservists' financial problems on mission readiness. Reservists' family members are eligible for TRICARE when reservists have been activated for 31 days or more, and a number of recent improvements have been made to reserve family health benefits. These improvements include earlier access to certain benefits, expanded options, higher reimbursement rates for nonnetwork physicians, and efforts to improve outreach.
Reserve families may choose either to use TRICARE when reservists are activated or to remain under civilian health insurance coverage. Our prior work showed that despite having access to TRICARE, most reservists with civilian health insurance had opted to retain their civilian health care coverage for their families during periods of activation. To further expand reservists' and their family members' access to health care, Congress is considering legislation to offer military health care coverage to reservists and their families when members are not on active duty. However, DOD has not fully assessed the need for or ramifications of this proposal. For example, DOD does not know the impact this proposal would have on recruiting and retention, the effects on active duty personnel, the extent to which reservists and their families might participate in such a program, or the impact on the TRICARE system. Cost estimates range up to $5.1 billion a year. When activated for a contingency operation, reservists and their family members are eligible for health care benefits under TRICARE, DOD's managed health care program. TRICARE offers beneficiaries three health care options: Prime, Standard, and Extra. TRICARE Prime is similar to a private HMO plan and does not require enrollment fees or copayments. TRICARE Standard, a fee-for-service program, and TRICARE Extra, a preferred provider option, require copayments and annual deductibles. None of these three options requires reservists to pay a premium. Benefits under TRICARE are provided at more than 500 military treatment facilities worldwide, through a network of TRICARE-authorized civilian providers, or through nonnetwork physicians who will accept TRICARE reimbursement rates. Reservists who are activated for 30 days or less are entitled to receive medical care for injuries and illnesses incurred while on duty. In addition, Congress requires the Army to monitor the health status of those designated as early-deploying reservists by providing annual medical and dental screenings, selected dental treatment, and—for those over age 40—physical examinations every 2 years. Those under age 40 are required to undergo a physical examination once every 5 years. For its early-deploying reservists, the Army conducts and pays for physical and dental examinations and selected dental treatment at military treatment facilities or pays civilian physicians and dentists to provide these services. Reservists who are placed on active duty orders for 31 days or more are automatically enrolled in TRICARE Prime and receive most care at a military treatment facility. Family members of reservists who are activated for 31 days or more may obtain coverage under TRICARE Prime, Standard, or Extra. Family members who participate in Prime obtain care either at a military treatment facility or through a network provider. Under Standard or Extra, beneficiaries may use either a network provider or a nonnetwork physician who will accept TRICARE rates. Upon release from active duty that extended for at least 30 days, reservists and their family members are entitled to continue their TRICARE benefits for 60 days or 120 days, depending on the reservists' cumulative active duty service time. Reservists and their dependents may also elect to purchase extended health care coverage for 3 months at a time, for a maximum of either 18 months or 3 years, under the Continued Health Care Benefit Program. Legislation passed in December 2002 (P.L. 107-314, sec.
702) made family members of reservists activated for more than 30 days eligible for TRICARE Prime if they reside more than 50 miles, or an hour's driving time, from a military treatment facility. In March 2003, DOD altered TRICARE policy such that all family members of reservists activated for 31 days or more are eligible for TRICARE Prime. In conjunction with this change, DOD announced a change in the eligibility of reserve families for TRICARE Prime Remote for Active Duty Family Members. DOD stated that a legislative provision of the program that required a family member to "reside with" the servicemember would be interpreted as meaning that eligible family members resided with the servicemember before the servicemember left for the home station, mobilization site, or deployment location and that the family members continue to reside there. Under authorities in the National Defense Authorization Acts for 2000 and 2001, DOD instituted several demonstration programs to provide financial assistance to reservists and family members. For example, DOD instituted the TRICARE Reserve Component Family Member Demonstration Project for family members of reservists mobilized for Operations Noble Eagle and Enduring Freedom to reduce TRICARE costs and assist dependents of reservists in maintaining relationships with their current health care providers. The demonstration project eliminates the TRICARE deductible and the requirement that dependents obtain statements saying that inpatient care is not available at a military treatment facility before they can obtain nonemergency treatment from a civilian hospital. In addition, DOD may pay a nonnetwork physician up to 15 percent more than the current TRICARE rate. About 40 percent of the problems reservists have reported relate to understanding TRICARE's benefits and obtaining assistance when questions or problems arise. Because these problems could be reduced through improved education about TRICARE's benefits and better assistance while navigating the TRICARE system, we recommended in September 2002 that DOD ensure that reservists, as part of their ongoing readiness training, receive information and training on the health care coverage available to them and their dependents when mobilized and provide TRICARE assistance during mobilizations targeted to the needs of reservists and their dependents. DOD has added information for reservists to its TRICARE Web site and, in response to our recommendation, has begun to implement a TRICARE reserve communications plan aimed at outreach and education of reservists and their families. The TRICARE Web site is a robust source of information on DOD's health care benefits. The Web site contains information on all TRICARE programs, TRICARE eligibility requirements, briefing and brochure information, locations of military treatment facilities, toll-free assistance numbers, network provider locations and other general network information, beneficiary assistance counselor information, and enrollment information. There is also a section devoted specifically to reservists, with information and answers to questions that reservists are likely to have. Results from DOD's 2000 survey showed that about 9 of every 10 reservists had access to the Internet. DOD has begun to implement a TRICARE communications plan to educate reservists and their family members on available health care and dental benefits. The plan identifies a number of tactics for improving how health care information is delivered to reservists and their families.
Under the plan, materials are to be delivered through direct mailing campaigns, fact sheets, brochures, working groups, and briefings. The plan also identifies methods of measurement that will assist in identifying the degree to which information is being requested and received. In March 2003, OSD distributed educational materials for beneficiary counseling assistance coordinators, reserve component staff, and others. In May 2003, the TRICARE Management Activity established a working group to improve reserve component communications. Most reservists who are not on active duty have civilian health insurance through either their own or their spouse's employer. Estimates from DOD's 2000 survey showed that nearly 80 percent of reservists had health care coverage when they were not on active duty and about 20 percent did not. According to DOD's 2002 survey, an estimated 90 percent of spouses of activated reservists had private health insurance prior to activation, and 4 percent had no insurance. The other 6 percent had TRICARE coverage or some combination of TRICARE and private health insurance. While DOD requires activated reservists to use TRICARE for their own health care, using TRICARE is an option for their dependents. During mobilization, some reservists may choose to save the cost of premiums by dropping civilian insurance for their dependents and relying on TRICARE, which has no associated premium. However, doing so means that dependents must learn the benefits and requirements of a new health plan. It also means they may be unable to use the same civilian providers if these providers do not participate in TRICARE networks or accept TRICARE patients. Reservists' decisions regarding health care coverage for their dependents are affected by a variety of factors—whether they or their spouses have civilian health coverage, the amount of support civilian employers are willing to provide with health care premiums, and where they and their dependents live. Despite the availability of DOD health care benefits with no associated premium, our prior work has shown that many reserve family members elect to maintain their civilian health care insurance during mobilizations. According to estimates from DOD's 2000 survey, about 90 percent of reservists with civilian health care coverage maintained it during their mobilization. Reservists we interviewed often told us that they maintained this coverage to better ensure continuity of health benefits and care for their dependents. The Uniformed Services Employment and Reemployment Rights Act does not require employers to continue paying their share of health care premiums when mobilizations extend beyond 30 days. However, employers continued to pay at least their portion of health insurance premiums beyond this 30-day period for about 80 percent of the reservists we contacted who maintained their employer-sponsored coverage. DOD's 2002 survey of spouses of activated reservists indicated that only a small percentage of reserve families had to pick up the entire premium in order to retain the member's civilian health care coverage during activation. Specifically, the survey estimated that 35 percent of families paid the employee share of the premium, 29 percent paid no additional costs because the member's employer paid the full health care premium, 18 percent paid no additional costs because the family was covered under the spouse's health care plan, and 8 percent paid the full health care premium.
Our surveys of reservists’ civilian employers also show that a high percentage of employers provide assistance with continued health care benefits for their activated reservists. Of the 183 employers of reservists in high tempo units who completed and returned our survey on employer support, 121 employers provided information on their health benefit policies. Of these 121 employers, 105 (88 percent) reported that they paid the full heath care premium or the employer share of the health care premium during the activation period. Of the 22 states we surveyed about pay and benefit policies for their activated reserve employees, 13 (59 percent) reported that they paid the full health care premium or the employer share of the health care premium. Most of these states provided these benefits during the entire activation period. In our prior work, we found that many reservists who did drop their civilian insurance and whose dependents did use TRICARE reported difficulties moving into and out of the system, finding a TRICARE provider, establishing eligibility, understanding TRICARE benefits, and knowing where to go for assistance when questions and problems arose. While reserve and active component beneficiaries report similar difficulties using the TRICARE system, these difficulties are magnified for reservists and their dependents. For example, 75 percent of reservists live more than 50 miles from military treatment facilities, compared with 5 percent of active component families. As a result, access to care at military treatment facilities becomes more challenging for dependents of reservists than their active component counterparts. Reservists may also transition into and out of TRICARE several times throughout a career. These transitions create additional challenges in ensuring continuity of care, reestablishing eligibility in TRICARE, and familiarizing or refamiliarizing themselves with the TRICARE system. Reservists are also not part of the day-to-day military culture and, according to DOD officials, generally have less incentive to become familiar with TRICARE because it becomes important to them and their families only if they are mobilized. Furthermore, when reservists are first mobilized, they must accomplish many tasks in a compressed period. For example, they must prepare for an extended absence from home, make arrangements to be away from their civilian employment, obtain military examinations, and ensure their families are properly registered in the Defense Enrollment Eligibility Reporting System (DOD’s database system maintaining benefit eligibility status). It is not surprising that many reservists, when placed under condensed time frames and high stress conditions, experience difficulties when transitioning to TRICARE. To further expand reservists and their family members’ access to health care, Congress is considering legislation to offer TRICARE to reservists when they are not on active duty. The legislation would entitle members of the Selected Reserve and certain members of the Individual Ready Reserve and their dependents to the same TRICARE benefits as a member of the uniformed services on active duty or a dependent of such a member. An enlisted reservist enrolled in the TRICARE program would pay an annual premium of $330 for self only coverage and $560 for self and family coverage, while a reserve officer would pay an annual premium of $380 for self only coverage and $610 for self and family coverage. 
(Military personnel on active duty and family members of personnel on active duty do not pay a premium for TRICARE coverage.) The legislation also would require DOD to pay premium costs incurred by reservists who choose to continue their civilian health care insurance coverage when activated. DOD would cover the civilian insurance costs up to the total cost of the reservist's premium and would be required to pay an amount equal to TRICARE's average cost of providing self and family coverage. Proponents have stated that the legislation (1) would recognize an expansion of reserve roles and missions in recent years and an increased demand placed on reservists and their families, (2) would assist DOD in recruiting and retaining reservists, and (3) would help reservists who opt to join TRICARE maintain continuity of their health care coverage. We have a number of concerns with the proposal to extend TRICARE coverage to reservists not on active duty and their family members and to require DOD to pay premium costs incurred by reservists who choose to continue their civilian health care insurance coverage when activated. First, while there has been an expansion of reserve participation in military operations, with a dramatic increase in mobilizations to support operations in Iraq, many reservists have deployed only once or not at all. According to the results of DOD's 2000 survey, only 25 percent of reservists reported in 2000 that they had been mobilized or deployed. Of those mobilized or deployed at least once, nearly 70 percent had participated in only one operation. Since September 2001, DOD has called 300,000 reservists to active duty, representing one-fourth of the 1.2 million reservists eligible for call-up. Second, DOD officials we spoke with about the proposed legislation noted that DOD currently has not identified an overall recruiting and retention problem in the reserves and that it was too early to project the potential for future recruiting and retention problems that might result from the ongoing mobilization. They also could not tell us what effect the proposed legislation would have on the military's ability to recruit and retain reservists. Third, as noted previously, most reservists activated prior to 2001 achieved continuity of care for their families by retaining civilian health insurance during activation. However, DOD officials said that little is known about reservists' patterns of health care usage during mobilizations since September 2001 and that it would be difficult to project their behavior if the current proposal were approved. According to a DOD official, it is unknown whether younger members of the reserve force would purchase TRICARE health care coverage even at reduced rates. In addition, a high percentage of reservists' civilian employers who currently pay some or all of the health care premiums for reservists during activations could discontinue providing such assistance if DOD makes this coverage available to reservists year-round. Other concerns with the proposed legislation have also been raised. DOD officials said that creating greater uniformity of benefits between active and reserve forces could have unanticipated effects on the active component if active component members are enticed into leaving the active component and joining the reserves. The OSD Health Affairs Policy Director also noted that DOD could have difficulties tracking reservists' premium costs in order to pay these costs during activation as required by the legislative proposal.
Another concern is the stress that could be placed upon the TRICARE system. Currently, TRICARE provides care for 8.7 million beneficiaries—eligible active duty personnel, retirees, and dependents. It is not clear to what extent reservists and their eligible dependents would use TRICARE and the impact this could have on the system. Beneficiary groups have described problems with access to care from TRICARE civilian providers. In March 2003, we testified on DOD's oversight of the TRICARE civilian provider network, noting the problems with assessing the network's adequacy due to insufficient information. In addition, controlling rising health care costs is a major concern of DOD's. According to a 2003 Congressional Budget Office analysis of long-term defense spending, spending for military medical care, which already makes up more than 10 percent of DOD's operation and support costs, is the fastest growing category of operation and support spending. In this projection of the administration's plans, annual medical spending rises by 67 percent over the 2007-2020 period, from $33 billion to $55 billion. Many of the same forces that cause national health expenditures to rise—an increase in the volume of health care services available and expanded use of new, high-cost drugs and procedures—translate into higher military medical costs. In addition, retirees and their dependents now make up a larger share of beneficiaries, increasing the average age and costs of the people who receive health coverage through DOD. Two reasons military medical costs are expected to rise dramatically over the next 5 years are (1) new benefits for military retirees over age 65 (called TRICARE for Life), which had an actuarial liability estimated at $592 billion as of September 30, 2001, and (2) a switch to an accrual accounting system—with DOD's budget being charged each year for the expected costs of future benefits. DOD's fiscal year 2003 budget for the defense health program was $14.8 billion. The Congressional Budget Office estimated that implementing the legislation—extending TRICARE coverage to reservists not on active duty and their family members and requiring DOD to pay premium costs incurred by reservists who choose to continue their civilian health care insurance coverage when activated—would cost a total of $466 million in 2004 and $7.3 billion over 2004-2008. The Congressional Budget Office estimated that extending TRICARE coverage to reservists who are not on active duty would cost $393 million in 2004 and $7.1 billion over 2004-2008. On the basis of DOD data, the Congressional Budget Office estimated that the provision would apply to 760,000 reservists after excluding 120,000 who work for the federal government. It also estimated that about 70 percent of qualified reservists would opt to enroll in the TRICARE program. The Congressional Budget Office estimated that requiring DOD to pay premium costs for continued civilian health care coverage during activation would cost $73 million in 2004 and $155 million over 2004-2008. According to this estimate, the amount DOD would pay reservists would cover about 60 percent of the average civilian premium. DOD estimated that the cost of extending TRICARE coverage to reservists who are not on active duty would be $5.1 billion per year. DOD's estimate does not include the costs to pay the premiums of activated reservists' civilian health care.
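The enrollment assumptions behind the Congressional Budget Office figures imply a rough average cost per enrollee, as the back-of-the-envelope sketch below shows. This is illustrative only, not CBO's model: it ignores the phase-in of enrollment over the period and any offsetting premium receipts.

    # Back-of-the-envelope view of the CBO enrollment assumptions above.
    # Not CBO's model: ignores enrollment phase-in and premium receipts.
    eligible = 760_000        # reservists, after excluding 120,000 federal employees
    enrollment_rate = 0.70    # CBO's participation assumption
    enrollees = eligible * enrollment_rate               # 532,000

    five_year_cost = 7.1e9    # CBO estimate for the TRICARE extension, 2004-2008
    per_enrollee_year = five_year_cost / (enrollees * 5)
    print(f"{enrollees:,.0f} enrollees; roughly ${per_enrollee_year:,.0f} per enrollee-year")

The implied figure, roughly $2,700 per enrollee-year, is a crude average over a period in which enrollment would still be ramping up.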
DOD’s estimate is significantly higher than the Congressional Budget Office estimate due to certain assumptions concerning the number of potential beneficiaries, the proportion of potential beneficiaries that would opt to enroll in TRICARE, and the per capita costs of providing care. DOD officials told us that they used historical cost profile data to develop their cost estimates. However, we did not independently verify either the DOD or the Congressional Budget Office cost estimate. Noting the high cost of this proposed legislation, the Secretary of Defense has expressed opposition to this legislation, stating he will recommend that the President veto the National Defense Authorization Act for Fiscal Year 2004 if a provision to expand TRICARE is included. DOD survey estimates showed income change varied considerably among activated reservists, with a sizeable proportion of reservists experiencing income loss, but more than half experiencing no change or a gain in income. However, these data are questionable because it is unclear what survey respondents considered as income loss or gain in determining their financial status. For example, the number of reservists reporting income loss could be lower if respondents did not include the sum of their military pay— basic pay, special pays, allowances, and indirect compensation, such as health care benefits. Currently, DOD cannot determine the need for compensation programs to provide income protection to reservists because it lacks sufficient information on the scope and nature of income change experienced by activated reservists. More specifically, DOD lacks sufficient data on the magnitude of income change, the causes of income change, and the effects of income change on reservists’ retention decisions. Survey results showed that a higher percentage of reservists in certain groups, such as self-employed reservists and health care professionals, experienced greater income loss compared with reservists overall and that, for some, income loss or the potential for income loss is a significant factor in their decisions on whether to stay in the reserves. A number of approaches to providing income protection have been proposed, including an income insurance program, differential pay for activated federal employee reservists, and special pay for certain reserve physicians. Of these three, only the last is targeted at reservists who (1) fill critical wartime specialties, (2) experience high degrees of income loss when on active duty, and (3) demonstrate that income loss is a significant factor in their retention decisions. This is the kind of business case approach that we think is necessary to determine the need for income protection compensation programs. In the area of family support, DOD and the military services have taken steps to improve personal financial management programs. They have also identified challenges such as increasing reservists’ and spouses’ awareness of and access to personal financial management programs. However, they have not developed specific plans to address these identified needs. Further, while DOD has assessed the financial well-being of active duty members and linked financial well-being with mission readiness, it has not conducted similar assessments of the reserve force. Our review of DOD survey data showed that reservists reported having many of the same financial problems as their active duty counterparts. 
For instance, a higher percentage of reservists reported having such financial difficulties as bouncing checks and receiving a letter of indebtedness than did their active duty counterparts. Conducting these assessments would provide a better understanding of the financial difficulties reservists encounter and the impact these difficulties have on mission readiness. Recent improvements have been made to reservists' and their families' access to TRICARE when the member is activated. In past military operations, most activated reservists retained civilian health care insurance coverage for their families during the activation period. To further expand access to health care benefits, legislation has been proposed that would provide TRICARE benefits to reservists and their family members when they are not on active duty. Furthermore, the legislation would require DOD to pay premium costs incurred by reservists who choose to continue civilian health care insurance coverage when activated. While proponents have cited a number of reasons for this legislation, concerns have also been raised. We believe these concerns and costs may outweigh the perceived benefits of the legislation. Currently, DOD lacks sufficient information to determine the need for the expanded health care benefits offered in the legislation and the implications of the proposal for reservists, active duty members, and the military health care system. DOD officials further stated that currently no problem has been demonstrated in overall reserve recruiting and retention. DOD has not yet identified the problems reservists and their families have experienced with access to health care during mobilizations since September 11, 2001, such as problems in maintaining continuity of health care; the causes of these problems; or their effects on readiness, recruiting, and retention. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to determine the need for compensation programs aimed at addressing reservists' income loss during periods of active duty by obtaining more complete information on the magnitude of income change, the causes of income change, and the effects of income change on reserve retention. At a minimum, these efforts should be designed to identify reservists who (1) fill critical wartime specialties, (2) experience high degrees of income loss when on active duty, and (3) demonstrate that income loss is a significant factor in their retention decisions. We further recommend that, on the basis of this information, the Secretary of Defense develop targeted compensation programs, as appropriate, to retain these reservists in the armed forces. We recommend that the Secretary of Defense direct the Secretaries of the Army, the Air Force, and the Navy and the Commandant of the Marine Corps to develop specific plans for improving reservists' and their spouses' awareness of and access to personal financial management programs. In developing these plans, the military services, in conjunction with the Under Secretary of Defense for Personnel and Readiness, should assess the financial well-being of reservists and determine whether reservists' financial problems affect mission readiness.
We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to assess problems reservists have experienced since the mobilizations following the events of September 11, 2001, in maintaining continuity of health care; the causes of these problems; and their effects on readiness, recruiting, and retention. As part of this assessment, DOD should evaluate the ramifications of extending TRICARE coverage to reservists not on active duty and their family members as well as paying premium costs incurred by reservists who choose to continue their civilian health care insurance coverage when activated. DOD should also evaluate the potential impact of extending such coverage on the retention of active duty personnel and on the TRICARE system. To provide DOD an opportunity to determine the need for and ramifications of expanding TRICARE, Congress may wish to delay a decision on the legislative proposal to offer TRICARE to reservists and their families when members are not on active duty. Furthermore, Congress may wish to direct the Secretary of Defense to assess and report on reserve health care benefits as we have recommended in this report. In written comments on a draft of this report, DOD concurred with our recommendations. Regarding our recommendation that DOD develop targeted compensation programs to retain reservists in the armed forces, DOD stated that the department must be cautious about paying its part-time force more than its full-time force when both are undertaking similar duties. As discussed in our report, we agree that equity between active component and reserve component personnel is one factor that must be considered in compensation programs that address income loss. Nevertheless, we believe that DOD could target such compensation programs appropriately by gathering more complete information on reservists' income loss and applying the three criteria included in our recommendation. On the basis of DOD's concurrence with our recommendation concerning reserve health care benefits, we have added matters for congressional consideration regarding the legislative proposal to extend TRICARE benefits to nonactivated reservists and their families. We believe the proposed expansion of TRICARE deserves scrutiny due to its high costs, the current lack of information on the need for it, and its potential ramifications. An assessment of this proposed expansion of TRICARE is likely to be a complex and time-consuming undertaking. DOD's comments are reprinted in appendix VI. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Secretaries of the Army, the Air Force, and the Navy; and the Commandant of the Marine Corps. We will also make copies available to appropriate congressional committees and to other interested parties on request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-5559. Major contributors to this report are listed in appendix VII. To evaluate information on income change reported by reservists when activated for a military operation, we obtained and analyzed the results of the Department of Defense's (DOD) 2000 Reserve Component Survey and 2002 Survey of Spouses of Activated National Guard and Reserve Component Members.
We stratified the results of these surveys by pay grade group, reserve component, and certain other groups, such as type of employer. Further, we discussed the extent of income change with officials from the following offices or commands:

• Office of the Assistant Secretary of Defense for Reserve Affairs
• Office of the Deputy Under Secretary of Defense for Military Personnel Policy, Office of Military Compensation
• National Guard Bureau
• National Committee for Employer Support of the Guard and Reserve
• Service Reserve Forces Policy Committees
• Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs
• U.S. Army Reserve Command, Fort McPherson, Georgia
• Office of the Chief of Army Reserve
• U.S. Army National Guard
• U.S. Air Force Reserve
• U.S. Air National Guard
• U.S. Naval Reserve
• U.S. Marine Corps Reserve, Quantico, Virginia

We reviewed relevant reports from the DOD Office of the Inspector General and the U.S. Army Audit Agency and our prior GAO reports and testimony. We discussed with an official from the Congressional Budget Office the estimated cost of a pay differential for federal employees who are called to active duty. We did not verify the methodology used to calculate this estimate. We analyzed current compensation and benefits policies for activated federal employees from the Office of Personnel Management. Further, we surveyed officials from 22 states between May and July 2003 and obtained their compensation and benefits policies to gain the perspective of state governments on financial assistance and benefits for state employee reservists called to federal active duty. To determine the 22 states, we chose the 11 states with the highest total population of reservists in the state, the 5 states with the smallest total reservist population, and 6 states in the middle. We obtained a standard set of information regarding each state's policy.

We also updated information on employer compensation policies from our June 2002 report on employer support of the National Guard and the Reserve. We had surveyed 359 employers of reservists in high-tempo units between November 2001 and March 2002. Due to concerns about mail contaminated with anthrax, not all completed surveys were obtained before issuance of the employer support report. We updated the data with an additional 72 surveys, for a total of 183 completed surveys. Employers were not randomly selected; therefore, the results are not projectable to all employers.

We also reviewed data from the Defense Manpower Data Center regarding Army Reserve Medical Corps authorized and actual fill rates for critical medical specialties and gains and losses from the Army Selected Reserve and the Army Individual Ready Reserve from 1991 to 2002 to review an Army proposal for special deployment pay. We reviewed DOD surveys on Army Reserve physicians' experiences during mobilizations, on a rotation program to address earlier concerns about the length of deployments, and on associated catastrophic income loss. We also contacted military aid associations, including the Army Emergency Relief, the Navy-Marine Corps Relief Society, and the Air Force Aid Society, to obtain and review information on emergency loans and financial assistance provided to activated reservists.

To evaluate reserve families' readiness and their awareness and use of family support programs, we reviewed DOD family policy regulations. We also reviewed DOD Web sites and other materials designed to inform servicemembers and their families about benefits.
To obtain further insight into reservists' awareness of and access to family support programs, we reviewed service personal financial management regulations and policies to determine the extent to which these programs are extended to reservists and their family members. To evaluate the financial well-being of reservists, we reviewed RAND and other DOD studies. We also compared the results of the 2000 DOD survey with the 1999 DOD Survey of Active Duty Personnel. Specifically, we met with and obtained information from DOD officials from the Office of the Assistant Secretary of Defense for Reserve Affairs, the Office of the Deputy Under Secretary of Defense for Military Community and Family Policy, the military services, and reserve components. We also met with representatives from the National Military Family Association and the JumpStart Coalition for Financial Literacy to discuss challenges reservists face when called to active duty.

To evaluate a legislative proposal for DOD to offer TRICARE to reservists and their families when members are not on active duty, we reviewed relevant GAO reports. We discussed health care benefits and eligibility criteria for reservists and their family members, as well as recent improvements to military health care, with DOD health care officials. We obtained cost estimates of the legislative proposal from the Congressional Budget Office and DOD, but we did not verify the methodology used to calculate the estimates. During our survey of officials from 22 states, we obtained their respective health care benefits policies to gain the perspective of state governments on health benefits for state employee reservists called to federal active duty. We met with and obtained information from DOD officials within the Office of the Assistant Secretary of Defense for Reserve Affairs, the Office of the Assistant Secretary of Defense for Health Affairs, and the TRICARE Management Activity. We conducted our work for this report from March to July 2003 in accordance with generally accepted government auditing standards.

This appendix describes DOD's 2000 survey of reserve personnel and 2002 survey of spouses of activated reserve personnel. We did not participate in the design of the surveys or the collection of their results.

The 2000 Survey of Reserve Component Personnel is a survey of Selected Reserve members of the reserve components sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs. The study population consisted of 728,347 members below flag or general officer rank who had at least 6 months of reserve duty service as of August 2000. The sample consisted of 74,487 members, and eligible respondents returned 35,223 questionnaires, for a response rate of 47 percent. DOD officials believe that the response rate for the survey is as good as those of other similar surveys that they have conducted. However, there is a potential for bias in the estimates to the extent that respondents and nonrespondents had different opinions on the questions asked. Data were weighted by the Defense Manpower Data Center to allow the study to provide estimates for the study population or subpopulations. This was a mail-out survey, with the data collection period running from August 16, 2000, through December 29, 2000. Because this is a probability sample based on random selections, the sample is only one of a large number of samples that might have been drawn.
Since each sample could have provided different estimates, confidence in the precision of a particular sample's results is expressed as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the survey have sampling errors of plus or minus 5 percentage points or less, unless otherwise noted. We used the weighting factors and the sampling error methodology provided by the Defense Manpower Data Center to develop estimates and sampling errors for the 2000 survey. In some cases, we used the estimates developed by the Defense Manpower Data Center.

The 2002 survey of spouses was sponsored by the Office of the Assistant Secretary of Defense for Reserve Affairs to assess the needs and concerns of National Guard and Reserve families prior to and during activation, to assess the status of family support initiatives, and to gather data from spouses of members who have been activated since September 11, 2001. The study population for this survey consisted of 29,673 spouses of reservists activated for Operations Noble Eagle, Enduring Freedom, Bosnia, Southwest Asia, or Southern Watch. The survey was a stratified random sample consisting of 7,658 spouses. Eligible respondents returned 3,874 completed surveys, for a response rate of 51 percent. DOD officials believe that the response rate for the survey is as good as those of other similar surveys that they have conducted. However, there is a potential for bias in the estimates to the extent that respondents and nonrespondents had different opinions on the questions asked. As with the 2000 survey, the 2002 spouse survey is a probability sample based on random selections, so the sample is only one of a large number of samples that might have been drawn. For this survey, we express confidence in the precision of our estimates as a 95 percent confidence interval. All percentage estimates from the 2002 survey have sampling errors of plus or minus 5 percentage points or less, unless otherwise noted. To produce estimates of the study population, the sample data were weighted to reflect the sample design and to adjust for nonresponse. Because weighting factors were not provided to us for use with the data, we computed weighting factors as the ratio of the population to respondents for each stratum.

This appendix discusses existing pay policies and protections, as well as emergency aid services, that may help mitigate reservists' financial hardship during activation. While basic military compensation, in constant dollars, remained fairly steady during most of the 1990s, it has increased in recent years. As a result, reservists activated today are earning more in the military than they did just a few years ago. (See fig. 2.) For example, an enlisted member in pay grade E-4 who is married with no other dependents would earn $3,156 per month in basic military compensation in fiscal year 2003, compared with $2,656 per month in fiscal year 1999, a 19 percent increase. These figures are calculated in constant 2003 dollars to account for the effects of inflation.
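As an arithmetic check, the 19 percent figure follows directly from the constant-dollar monthly amounts cited above:

\[
\frac{\$3{,}156 - \$2{,}656}{\$2{,}656} \approx 0.19, \text{ or about 19 percent.}
\]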
In addition to increases in basic military compensation, Congress in April 2003 increased the family separation allowance from $100 to $250 per month and imminent danger pay from $150 to $225 per month. These increases expire September 30, 2003. The Senate and House are also considering a new special pay of up to $1,000 per month that would compensate servicemembers for frequent or lengthy deployments.

In addition to these increases in pay, other pay policies and protections may help to mitigate reservists' financial hardship during deployment. For example:

• The Soldiers' and Sailors' Civil Relief Act caps debt interest rates at 6 percent annually for debts incurred prior to activation and provides many other financial protections if members can show that their ability to pay is materially affected by being activated. (An illustrative calculation of the interest cap appears at the end of this appendix.) Legislation currently before Congress would amend the act to expand certain protections for activated servicemembers.

• Income that servicemembers earn while mobilized in combat zones is tax-free. The President designates combat zones. Military pay received while in these combat zones is excluded from gross income and not subject to federal income tax. Legislation currently before Congress would expand combat zone tax exemptions to any designated contingency operation.

• For Iraqi Freedom, Noble Eagle, and Enduring Freedom, DOD has authorized reservists to receive both a housing allowance and per diem for their entire period of activation, up to 2 years.

• Military Reservist Economic Injury Disaster Loans of up to $1.5 million are available through the Small Business Administration to help small businesses meet necessary operating expenses and debt payments until a key employee—including the owner—is able to return from active duty to the business and normal operations resume.

Servicemembers who are experiencing financial hardship can also obtain emergency assistance in the form of interest-free loans or grants from private aid organizations to pay for basic living expenses such as food or rent during activation. The Army Emergency Relief, the Air Force Aid Society, and the Navy-Marine Corps Relief Society are nonprofit charitable organizations that provide financial, educational, and other assistance to servicemembers and their families who are in need. These organizations provide assistance to active component members, reservists, and retirees.

In 2002, the Navy-Marine Corps Relief Society distributed approximately $41 million to almost 51,000 individuals, including $1.5 million provided due to inadequate income to meet basic living expenses such as rent or mortgage, food, and utilities. The Navy-Marine Corps Relief Society did not track separately the assistance it provided to reservists. The Air Force Aid Society distributed over $24 million in 2002 to more than 34,000 individuals. Of this amount, $600,000 was provided to reservists. The Aid Society reported significant increases both in reservists receiving emergency assistance and in phone card use. About 140 reservists received emergency assistance for basic living expenses because they experienced loss of civilian pay or military pay problems. Army Emergency Relief distributed $41 million in 2002 to more than 56,000 people. At least $850,000 went to about 900 reservists for emergency travel assistance, vehicle repairs, rent or mortgage assistance, and as an income supplement while waiting for delayed military pay when called to active duty.
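To illustrate the act's 6 percent interest cap, consider a hypothetical reservist activated while carrying a $10,000 consumer debt at 12 percent annual interest (both figures are assumed for illustration, not drawn from the act):

\[
\$10{,}000 \times \tfrac{0.12}{12} = \$100 \text{ per month before the cap};\qquad
\$10{,}000 \times \tfrac{0.06}{12} = \$50 \text{ per month under the cap.}
\]

While the cap applies, the monthly interest charge is cut in half, provided the member can show that activation materially affected his or her ability to pay.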
This appendix displays the results of our survey of 22 states to obtain their policies regarding pay and benefits offered to state employees who are called to federal active duty. It describes the military leave policies and financial assistance programs, such as state pay differentials, for activated state employees who experience income loss as a result of federal activation. It also describes each state's policy on the continuation of dependent health care coverage and other reported benefits. To determine the 22 states, we chose the 11 states with the highest total population of reservists in the state, the 5 states with the smallest total reservist population, and 6 states in the middle. We conducted our survey of the 22 states between April and July 2003.

In summary, we found that 19 of the 22 states surveyed offer pay differentials to employees who are on military leave without pay and can document a loss of income. Of these 19 states, 16 are mandated under state law or executive order to provide financial assistance, while 3 states—Colorado, Georgia, and Texas—allow the individual state agencies to offer pay differentials at the agencies' discretion. The other three states do not offer pay differentials to activated employees on unpaid leave.

The manner in which states calculate the amount of the pay differential varies. For example, 7 states calculate the amount of the pay differential as the difference between an employee's civilian salary and basic military pay, not including military special pays and allowances. In contrast, 10 states include military special pays, allowances, or both in the calculation, which can lower the differential amount that the state pays to its activated employees. (A hypothetical calculation illustrating both methods appears following this appendix material.) Georgia allows state agencies to formulate their own differential calculation, while Pennsylvania offers a flat-rate monthly stipend to all activated employees. States offering financial assistance do so for a period of time ranging from 90 days in Colorado to the duration of the activation in states such as Florida and Alabama. We did not find a correlation between the size of a state's reservist population and the type or extent of financial assistance the state offers. The results of our survey are presented in table 4.

This appendix provides an analysis of data from the 2002 DOD survey of spouses of activated reservists concerning preactivation activities of spouses. Based on spouses' self-reported feelings of being prepared or unprepared upon receiving a notice of activation for the military member, we compared their responses to questions concerning preactivation activities: volunteering or participating in unit family readiness programs or groups, attending preactivation briefings, and receiving preactivation materials. We also compared their responses to questions concerning other factors that could affect preparedness, such as being assigned a military point of contact and the amount of advance notice received prior to activation. Finally, we compared their responses to a question concerning how well they have coped with the activation. The results of our analysis are presented in table 5.

Kelly Baumgartner, Brenda S. Farrell, Thomas W. Gosling, Krislin M. Nalwalk, Jennifer R. Popovic, Mark F. Ramage, Loch-Hung Leo Sze, and Nicole Volchko made significant contributions to this report.
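The following hypothetical calculation illustrates the two state pay differential methods described above; the civilian salary, basic pay, and allowance amounts are assumed for illustration, not drawn from any state's policy:

\[
\text{Method 1 (basic pay only): } \$4{,}000 - \$2{,}800 = \$1{,}200 \text{ per month}
\]
\[
\text{Method 2 (including allowances of } \$700\text{): } \$4{,}000 - (\$2{,}800 + \$700) = \$500 \text{ per month}
\]

Including special pays or allowances in the military-pay side of the calculation reduces the differential the state pays, as noted above.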
Since the 1991 Persian Gulf War, National Guard and Reserve personnel have been deployed to a number of contingency operations. Since September 2001, about 300,000 reservists have been called to active duty, and the pace of reserve operations is expected to remain high for the foreseeable future. House Report 107-436, accompanying the Fiscal Year 2003 National Defense Authorization Act (P.L. 107-314), directed GAO to review compensation programs for reservists serving on active duty. GAO evaluated (1) information on income change reported by reservists when activated; (2) reserve families' readiness for call-ups and their awareness and use of family support programs, focusing on personal financial management; and (3) a legislative proposal for the Department of Defense (DOD) to offer TRICARE, the military's health care program, to reservists and their families when members are not on active duty.

DOD lacks sufficient information on the magnitude, the causes, and the effects of income change to determine the need for compensation programs targeting reservists who (1) fill critical wartime specialties, (2) experience high degrees of income loss when on extended periods of active duty, and (3) demonstrate that income loss is a significant factor in their retention decisions. Such data are critical for assessing the full nature and scope of income change problems and for developing cost-effective solutions. Self-reported DOD survey data from past and current military operations indicate that activated reservists have experienced widely varying degrees of income change. While many reservists lost income, more than half had either no change or a gain in income. However, the survey data are questionable, primarily because it is unclear what survey respondents considered as income loss or gain in determining their financial status.

DOD has placed greater emphasis on preparing reservists' families for potential call-ups, yet survey data show that one-third of spouses do not feel prepared, over half of reservists are not aware of family support programs, and more than 90 percent of spouses do not use these programs. Personal financial management, one of DOD's core family support programs, illustrates the continuing challenges DOD faces in providing outreach to reservists. The 2000 survey data showed that 61 percent of reservists did not know whether personal financial management services were available. The survey also showed that reservists have financial problems similar to those of their active duty counterparts. DOD is taking steps to improve personal financial management, but it has not assessed the financial well-being of reserve families, assessed the impact of reservists' financial problems on mission readiness, or determined how to tailor its programs to reservists.

Available DOD data do not identify a need to offer TRICARE to reservists and their families when members are not on active duty. Estimates from DOD's 2000 survey showed that nearly 80 percent of reservists had health care coverage when they were not on active duty. This rate is similar to that of comparable groups within the overall U.S. population. DOD has expressed concern over the estimated costs of this proposal. Cost estimates range up to $5.1 billion a year.
However, DOD has not fully assessed the ramifications of this proposed legislation, including the impact on recruiting and retention, the effects on active duty personnel, the extent to which reservists and their families might participate in such a program, or the impact on the TRICARE system. In addition, a high percentage of reservists' civilian employers who currently pay some or all of health care premiums for reservists during activations could discontinue providing such assistance. A number of recent improvements have been made to reservists' and their families' health care when members are activated. However, DOD lacks data on the problems reservists and their families have experienced with health care since the mobilizations following September 11, 2001; the causes of these problems; and their effects on readiness, recruiting, and retention.
In 1867, Congress enacted legislation that allowed the government to pay awards to individuals who provided information that aided in detecting and punishing those guilty of violating tax laws. Initially, Congress appropriated funds to pay these awards at the government's discretion. In 1996, Congress expanded the scope of the program to also provide awards for detecting underpayments of tax and changed the source of awards to money IRS collects as a result of information whistleblowers provide.

The Tax Relief and Health Care Act of 2006 created an expanded whistleblower award program to complement the existing whistleblower program. Table 1 shows the distinctions between the two programs, which we refer to as the original and expanded programs. This report focuses on the expanded program. The act also directed IRS to create the Whistleblower Office, which is responsible for managing and tracking whistleblower claims from the time IRS receives them to the time it closes them, either through a rejection letter or an award payment. The Secretary of the Treasury is required to submit an annual report to Congress on the activities and outcomes of both the original and expanded whistleblower programs. As of May 2011, the Whistleblower Office had 20 staff members.

IRS's review of whistleblower claims involves a series of steps, and IRS can reject claims throughout the process. Although IRS's Whistleblower Office manages the whistleblower program, conducts initial reviews of claims, and makes award determinations, IRS's operating divisions are responsible for investigating claims and conducting examinations under the expanded program. The Office of Chief Counsel is not involved in every whistleblower claim but reviews whistleblower claims for legal issues when the Whistleblower Office or operating divisions request such assistance. IRS's Criminal Investigation (CI) unit also investigates fraud identified by whistleblower claims. A claim may transfer from CI to an operating division if CI is initially involved but declines to pursue the claim. Conversely, an operating division can involve CI if it determines during an examination that there is a criminal component to a claim.

While the act establishing the expanded whistleblower program does not offer specific protections for whistleblowers, the Whistleblower Office has several policies and procedures to protect the identity of a whistleblower. Whistleblowers may not submit claims anonymously, as submissions must be made under penalty of perjury and IRS needs to assess the credibility of whistleblowers and the information they provide. Likewise, certain individuals, such as some federal employees, are prohibited from receiving whistleblower awards, and the Whistleblower Office must know the identity of the whistleblower to enforce this restriction. Table 2 is a simplified outline of the whistleblower claim process for the expanded program.

Whistleblower awards are mandatory if IRS takes administrative or judicial action that results in collected proceeds based on the whistleblower's information. IRS is clarifying the definition of collected proceeds. Currently, the Internal Revenue Manual section on whistleblower awards defines collected proceeds as only new monies collected. Recently, IRS issued proposed regulations that would clarify the definition of collected proceeds to include denials of refunds and reductions in overpayment credit balances when calculating a whistleblower's award.
If IRS pays an award to a whistleblower, its policy is to withhold 28 percent in tax from all whistleblower payments, as award payments are taxable income. IRS withholds tax to reduce the risk of tax underpayment on what can potentially be large amounts of income.

At the federal level, several other agencies offer awards to those who bring forward information that could lead to the government recouping money. The Department of Justice receives allegations of fraud against the government under the False Claims Act, although tax cases are specifically excluded. The False Claims Act includes a qui tam provision that allows whistleblowers to pursue claims on behalf of the government if the government elects not to proceed on the claims brought by the whistleblower. The Centers for Medicare and Medicaid Services offers awards to those who provide information on health care fraud. The Securities and Exchange Commission and the Commodity Futures Trading Commission are each implementing whistleblower programs and consulted IRS for advice. Three states—New York, Florida, and Texas—also have tax whistleblower reward programs. New York's program has a tax qui tam provision that was enacted in August 2010. Oregon also has a tax whistleblower reward statute, but the program is inactive.

Whistleblower claims can take years to go through the IRS review and award determination process. For example, as of April 25, 2011:

• about 66 percent of claims submitted in the first 2 years of the program, fiscal years 2007 and 2008, were still in process;
• less than 7 percent of claims submitted in fiscal years 2007 and 2008 that were still in process were in the Whistleblower Office final review or Whistleblower Office award evaluation steps; and
• 447 claims submitted in fiscal year 2010 had been in the Whistleblower Office initial claim review step at least 200 days.

For each year since 2007, table 3 shows the number of claims at each step of the review process as tracked within E-TRAK, a claim management information system IRS developed and launched in January 2009. The table does not include claims receiving awards because of IRS's concerns about disclosing tax information.

According to Whistleblower Office and operating division officials, it can take IRS significant time to review and examine whistleblower claims for various reasons.

• Some whistleblower claims are highly complex and are submitted with large amounts of supporting documentation. Evaluating large amounts of data is time-consuming.

• Both the Whistleblower Office and the operating divisions' subject matter experts (SME) need to understand the relationship between a whistleblower and a target taxpayer in order to make determinations about the qualifications of the claim. For example, certain individuals are not eligible for awards under the expanded whistleblower program, including federal employees who learn of tax noncompliance in the course of their work activities or individuals who are current representatives, such as attorneys or accountants, of a targeted taxpayer.

• SMEs review information that whistleblowers provide to determine if it may be tainted, meaning it may be subject to attorney-client privilege or any other legal protections that would preclude IRS from using it in an examination. If SMEs determine that information may be tainted, the Office of Chief Counsel reviews the claim and determines which documents should and should not be forwarded to an examination team.
• SMEs can request debrief meetings with whistleblowers to clarify the tax noncompliance issues alleged or to determine the source of submitted information to ensure it is not privileged. According to operating division officials, arranging and holding these meetings can add time to the SME review process, for example, if IRS counsel is not immediately available or whistleblowers need to arrange to travel to an IRS office.

• SMEs have other work priorities that may delay their review of whistleblower claims. SMEs may have expertise in specific areas of tax compliance, such as employment tax or estate and gift tax. The Large Business and International (LB&I), Small Business/Self Employed (SB/SE), and Tax Exempt and Government Entities (TE/GE) divisions have between 7 and 10 SMEs each; the SMEs do not work exclusively on whistleblower claims and support other examinations and IRS programs.

• Within the examination step, operating divisions do not prioritize whistleblower claims; they are treated the same as all other examinations. According to IRS officials, each claim should rise on its own merits alongside other cases that have been selected for examination by other programs.

After the examination step, whistleblowers will likely still have to wait several years before IRS can determine whether they are due an award, because of factors outside the Whistleblower Office's control. Taxpayers can appeal IRS's assessment of tax, and if a taxpayer and IRS cannot reach agreement on the outcome of the case through the appeals process, the taxpayer may have the case reviewed by the U.S. Tax Court, the U.S. Court of Federal Claims, or a U.S. district court. Furthermore, the Whistleblower Office generally does not pay claims until after IRS collects all proceeds from taxpayers, the 2 years taxpayers are granted to request refunds of their payments have elapsed, and, in some cases, IRS has completed all taxpayer examinations resulting from a single award claim form (Form 211). Whistleblower Office officials said that the 2-year wait was important because taxpayers, regardless of whether they were the subject of a whistleblower investigation, have the right to request a refund, even on issues that whistleblowers identified. Likewise, the officials said that waiting until all claims under one submission are complete can be to the benefit of whistleblowers if, for example, claims only meet the disputed tax amount criteria for the expanded program when considered in aggregate. Other than for claims being appealed, IRS classifies these types of claims in E-TRAK as suspended. Table 3 provides data on the number of claims that were in suspended status as of April 25, 2011.

We also identified other factors that could affect claim processing times. As discussed, Whistleblower Office analysts and SMEs review the relationship of a whistleblower to a targeted taxpayer when assessing the credibility of information whistleblowers provide. Although Form 211 asks whistleblowers to explain their relationship to target taxpayers, the question is part of a broader question asking whistleblowers to describe the documents they provided. Operating division SMEs told us that sometimes the relationship information is not provided or is included within the attached documents, where it can take significant time to find and understand the relationship. Furthermore, Form 211 does not ask other questions that would help IRS evaluate whistleblowers' submissions, such as whether the whistleblowers have supplied the same information to other government agencies, submitted all information they have supporting a claim, or are federal employees.
Operating division officials told us that having this relationship and other information more clearly identified at the beginning of the whistleblower claim review process could help them process claims more efficiently.

Although table 3 highlights the length of time taken to review claims, the Whistleblower Office does not collect complete and accurate data in E-TRAK about several aspects of claims processing that could be used to manage the whistleblower program. For example, the Whistleblower Office and operating divisions do not have complete data on the length of time claims spend at each step of the review process to inform decision making for establishing appropriate review time targets. We requested aggregate data on the median time claims spend in each step by fiscal year of claim receipt and data on how often the Whistleblower Office and subject matter experts complete the initial reviews within a given number of days, but Whistleblower Office officials told us time data from E-TRAK would be incomplete for various reasons.

First, the Whistleblower Office does not update E-TRAK with data on the time taken for each step for all claims. If one submission includes claims for multiple taxpayers, the Whistleblower Office updates time information for only one master claim within the submission and references all related claims to the master claim. E-TRAK records time data for related claims only if the time in a step for a related claim diverges from that of the master claim. Without significant data analysis, Whistleblower Office officials are not able to determine how often this divergence occurs. Therefore, time data cannot be reported on a per-claim or per-whistleblower-submission basis, but only as a combination of the two.

Second, IRS did not consistently record time data for submissions before the introduction of E-TRAK in January 2009. Time data on claims that completed each step before this date are incomplete; while time data may have been recorded for some submissions, they were not required for all submissions.

Whistleblower Office officials stated that E-TRAK was designed to be a claim management tool to track claim progress rather than one designed to report and monitor overall program performance. According to one Whistleblower Office official, IRS does not use aggregate time information in the day-to-day operations of the program and, therefore, did not build these capabilities into E-TRAK when designing it. Because E-TRAK already has the data field available for tracking time information, the cost of tracking such information for all claims would be limited to the time needed for analysts to input the additional data field in the claim file.

Other aspects of E-TRAK limit the accuracy of Whistleblower Office data. For example, E-TRAK may show more time than is accurate for some claim review process steps because of E-TRAK's method of accounting for certain events. The Whistleblower Office can perform an initial review and assign a claim to an SME for review. If the SME later returns the claim to the Whistleblower Office to be reassigned to a different operating division, E-TRAK does not reset the day count on how long the claim has been with the Whistleblower Office. E-TRAK will show the day count for the Whistleblower Office initial review as running from the time the claim was received until the time the claim was reassigned to the second operating division for review.
Similarly, if an SME requests legal advice from the Chief Counsel's Office, E-TRAK continues to count the time the claim is with the Chief Counsel's Office as being with the SME. As such, E-TRAK data can make it appear that claims spend more time in certain steps than they actually do, making it difficult for management to have an accurate picture of the program's operations and to make informed resource allocation decisions.

The Whistleblower Office only began tracking the point in the process at which whistleblower claims were rejected in January 2009, when E-TRAK was introduced. As table 2 showed, IRS can reject whistleblower claims at almost any point in the process. For example, claims may not fit the criteria for the award program, IRS may already have the information the whistleblower submitted, or an examination may result in no change in tax assessed, among other reasons. Table 4 shows the breakdown of when in the process IRS rejected claims. Of the claims where the rejection step was tracked, over half were rejected after examination, in the Whistleblower Office final review. All claims that were rejected before January 2009 are labeled as not tracked in table 4.

Although the Whistleblower Office has begun to track the step in the claim review process at which claims are rejected, E-TRAK does not include data fields for tracking the reasons why claims are rejected, although the information is contained in the text fields of the claim files. Without reviewing all closed claims, Whistleblower Office management cannot know how frequently claims are rejected for each reason. Tracking this information could help the Whistleblower Office make program management and resource allocation decisions and could aid in reporting program information. For example, whistleblower attorneys we interviewed were concerned that claims that take years to process risk being rejected because the statute of limitations for assessment may expire before IRS completes an examination. Whistleblower Office officials could not provide E-TRAK data on the exact number of times claims are rejected because the statute has expired, because E-TRAK does not track why claims are rejected, but they stated that it is not a frequent outcome. Without data in E-TRAK on rejection reasons, the Whistleblower Office cannot know how frequently claims are rejected because the statute has expired. Whistleblower Office officials said that while this information would be helpful, collecting it is not yet a priority.

Furthermore, IRS could not provide data on specific reasons why claims were suspended because E-TRAK only tracks this information in the comments section of claim files, which does not require standardized language that would allow for accurate searching, according to a Whistleblower Office official. Without these data in E-TRAK, Whistleblower Office officials did not know how many claims were in the 2-year period during which the taxpayer can request a refund. Having such information may aid the Whistleblower Office in planning for future work related to likely award payments. Adding a field to E-TRAK to capture both the reasons why claims are in suspended status and the reasons why they were rejected would likely require limited resources to reprogram E-TRAK. Additional limited resource needs would include the time needed for analysts to input the reason when updating the claim file.
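To make the tracking issues concrete, the sketch below models a per-claim, per-step day count of the kind discussed above. It is a minimal illustration under a simplified schema of our own devising, not IRS's actual E-TRAK design: each hand-off closes the prior step so elapsed days are attributed to the office actually holding the claim, and rejection and suspension reasons are coded fields rather than free-text comments, so they can be counted.

```python
"""Illustrative sketch only: a simplified claim-tracking model, not IRS's
actual E-TRAK schema. It shows per-step day counts that restart on
reassignment and coded (countable) rejection/suspension reasons."""
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class StepRecord:
    step: str                   # e.g., "WO initial review", "SME review (LB&I)"
    start: date
    end: Optional[date] = None  # open until the claim moves to the next step

    def days(self, today: date) -> int:
        return ((self.end or today) - self.start).days

@dataclass
class Claim:
    claim_id: str
    history: List[StepRecord] = field(default_factory=list)
    rejection_reason: Optional[str] = None  # coded, e.g., "STATUTE_EXPIRED"
    suspense_reason: Optional[str] = None   # coded, e.g., "REFUND_WINDOW_OPEN"

    def move_to(self, step: str, when: date) -> None:
        """Close the current step and open a new one, so time is charged
        to whichever office actually holds the claim."""
        if self.history and self.history[-1].end is None:
            self.history[-1].end = when
        self.history.append(StepRecord(step, when))

    def days_in_current_step(self, today: date) -> int:
        return self.history[-1].days(today) if self.history else 0

def overdue(claims: List[Claim], target_days: int, today: date) -> List[Claim]:
    """Flag claims whose current step has exceeded its review target."""
    return [c for c in claims if c.days_in_current_step(today) > target_days]

# A claim reviewed by the Whistleblower Office, assigned to one SME, then
# reassigned: each hand-off restarts the step clock.
c = Claim("2011-0001")
c.move_to("WO initial review", date(2011, 1, 3))
c.move_to("SME review (SB/SE)", date(2011, 2, 10))
c.move_to("SME review (LB&I)", date(2011, 4, 1))   # reassignment resets the count
print(c.days_in_current_step(date(2011, 4, 25)))    # 24 days, not 112 since receipt
print(overdue([c], 60, date(2011, 4, 25)))          # [] -- under a 60-day target
```

Under a structure like this, the monthly oldest-claim-first inventory report and the rejection- and suspension-reason tallies discussed in this report would reduce to simple queries over coded fields.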
Having more complete data available to Whistleblower Office management would be consistent with key internal control standards for maintaining relevant and reliable information to help agencies achieve their objectives. Without complete and accurate data on claim processing time, the Whistleblower Office may not be able to identify aspects of the program, if any, that could be improved to increase claim processing efficiency. Moreover, according to IRS's overall strategic goals for 2009-2013, the agency should act quickly to initiate compliance contacts, complete audits, and collect taxes in order to reduce the administrative burden on IRS and reduce overall costs, such as penalties and interest, for the taxpayer. This lack of complete data also limits the Whistleblower Office's ability to provide program information to Congress and the whistleblower community, which may erode confidence in the program.

Whistleblower claims can take years to process due in part to steps (some required) outside the Whistleblower Office's control, such as examinations of taxpayers' returns, taxpayer appeals, and taxpayers' rights to request a refund up to 2 years after making a payment. However, the Whistleblower Office can do more to manage the time taken for the parts of the process it does influence. The Whistleblower Office and some operating divisions have time targets for their initial claim reviews; however, other operating divisions do not have targets, and the Whistleblower Office does not have a systematic process to check on claims once they are with the operating divisions for review.

To monitor the time taken for the Whistleblower Office initial claim review step, the Whistleblower Office established a target of 60 days to review a claim. Claims that have been in the Whistleblower Office initial review step for more than 60 days are flagged in E-TRAK, which triggers an inquiry by Whistleblower Office analysts and management to determine and validate the reason for the delay. SB/SE and CI have targets for SME reviews, and claims that exceed the targets are flagged for follow-up. SB/SE's target, which was formally established in March 2011, is a series of 30-day targets for various activities of the SME review process, such as the process for reviewing information for taint concerns and optional debrief meetings with whistleblowers. SB/SE's overall target is 240 days, and CI's target is 90 days to perform the initial SME review. Whistleblower Office officials could not provide complete data on how often claims meet these targets. LB&I and TE/GE do not have targets for how long initial reviews should take, although TE/GE policy directs SMEs to follow up on all claims at least once quarterly and LB&I SMEs report to their managers on claims over 200 days old.

The Whistleblower Office does not have a systematic process to check in with the operating divisions to review claims based on the length of time they have been in the SME review step, and the operating divisions do not have full access to E-TRAK to be able to generate reports on claims assigned to them. Without a systematic process to check on all claims, the Whistleblower Office risks having claims not receive the attention or resources they need to be completed, and operating division management may not have the information needed to make effective SME resource allocation decisions. Whistleblower Office officials told us they send a list of the claims inventory to each operating division monthly, ordered by oldest claim first.
They further stated that this report is only for informational purposes because the Whistleblower Office does not have the resources to check in with the operating divisions regularly on specific claims. Operating division officials told us they do not receive this report monthly but may receive it quarterly, or sometimes less frequently. Some SMEs have had access to E-TRAK to update information since September 2010, but they are limited in what information they can input or search, making it incumbent on the Whistleblower Office to provide them with certain data about assigned claims. The Whistleblower Office plans to allow SMEs greater access to E-TRAK in the future. For example, LB&I officials told us they are working with the Whistleblower Office to expand their E-TRAK access to allow them to run their own reports directly from E-TRAK, including reports that could show the claims that have been in the SME review step the longest.

IRS is limited in what information it can share with whistleblowers and other stakeholders throughout the whistleblower claim process. Section 6103 of the Internal Revenue Code prohibits the unauthorized disclosure of tax information. According to IRS, disclosing to a whistleblower that IRS is examining a taxpayer reveals tax information; therefore, IRS does not inform whistleblowers of the progress of their claims other than to confirm that a claim is either open or closed. Furthermore, IRS does not publicly report or comment on specific whistleblower awards, which it also considers to be tax information. IRS will report only aggregate whistleblower award information once the Whistleblower Office has paid a number of awards sufficient to avoid improper disclosure.

Because section 6103 restricts the amount of information IRS can share with whistleblowers and whistleblower claims can take years to resolve, whistleblowers may not hear from the Whistleblower Office for years once claims are accepted. According to Whistleblower Office officials, even though IRS tells whistleblowers about the restrictions on providing status updates and the potential for claims to take years to complete, the Whistleblower Office fields numerous calls daily from whistleblowers asking for updates on the status of their claims. Several times per month, the Whistleblower Office also responds to members of Congress asking for status updates on behalf of whistleblowers who are their constituents. The Whistleblower Office responds to these requests only by stating whether a claim is open or closed. Responding to these types of requests diverts Whistleblower Office resources from processing claims.

During the Whistleblower Office and SME initial reviews and examination, IRS has little contact with whistleblowers. Operating divisions may offer debrief meetings to whistleblowers to clarify information about their submissions, but these meetings may be the only interaction between IRS and whistleblowers until IRS rejects a claim or decides to issue an award. Examiners do not actively involve whistleblowers in their work because they need to build their cases independent of the whistleblowers' involvement to be able to corroborate the information provided and to ensure they do not receive tainted information. There are some statutory exceptions to section 6103 that allow IRS to disclose tax information when it is necessary in conducting investigations and gathering information to administer the tax code.
Under section 6103(k)(6), IRS may disclose taxpayer return information to a whistleblower to the extent necessary for investigative purposes. Another exception, section 6103(n), allows IRS to enter into contracts with outside parties for services for purposes of tax administration. IRS could enter into a section 6103(n) contract with a whistleblower for analytic services and could disclose tax information necessary to obtain those services. Whistleblowers who enter into section 6103(n) contracts must comply with IRS's safeguards of tax information and are subject to statutory civil and criminal penalties for unauthorized disclosure, which include fines and jail time. If IRS discloses tax information to whistleblowers under section 6103(k)(6), whistleblowers are not subject to penalties for unauthorized disclosure.

The decision to enter into section 6103(n) contracts rests with the operating divisions; it is not directed by Chief Counsel or the Whistleblower Office, although they may provide advice to the operating divisions. Section 6103(n) contracts are intended to be used rarely by IRS in processing whistleblower claims, and as of April 28, 2011, IRS had not entered into any such contracts with whistleblowers. Operating division officials stated they have not yet had a claim that necessitated this increased level of interaction with a whistleblower to gather information about the taxpayer. According to operating division, Chief Counsel, and Whistleblower Office officials, IRS does not have specific criteria for when a section 6103(n) contract should be offered to a whistleblower, other than that it should be used rarely. According to IRS officials, each claim needs to be examined based on its facts and circumstances, and generally IRS has the authority and tools to collect any information that a whistleblower could bring forward. Although no section 6103(n) contracts have been offered, IRS officials told us that one situation where such a contract would be useful is if, in the course of an examination, a taxpayer provided documents or testimony to IRS that contradicted information a whistleblower provided. IRS agents could use a section 6103(n) contract to share some tax information with the whistleblower in investigating the inconsistency.

Also, the rejection letters IRS sends to whistleblowers do not state why IRS denied a request for an award. IRS officials told us that to provide the reason would violate section 6103. For example, the Whistleblower Office may reject a claim because an examination did not result in an additional tax assessment, but sharing this fact with the whistleblower discloses that IRS conducted an examination. Whistleblowers whose claims for awards are denied can challenge IRS's decision in U.S. Tax Court, although it is uncertain whether they will learn the reason for the claim rejection during the appeal process. According to Whistleblower Office officials, whistleblowers have appealed more than 20 award denials under the expanded whistleblower program, and the officials expect the frequency of these appeals to increase.

According to whistleblower attorneys we interviewed, whistleblowers can be frustrated by the lack of communication from IRS regarding their claims. Because some whistleblowers risk their careers by filing a claim, they want to know that IRS is making full use of the information they provide.
The attorneys said that IRS's not interacting with whistleblowers for long periods of time and not using whistleblowers as resources during investigations discourages whistleblowers and may deter some from coming forward with claims, although we could not verify the latter point. The Director of the Whistleblower Office told us that many of the steps IRS takes in the whistleblower process, including limiting interaction with the whistleblower, are aimed at protecting all interested parties: the privacy of the taxpayer's information, the identity of the whistleblower, and the integrity of the IRS examination. For example, IRS examiners need to build cases independent of whistleblowers and corroborate all of the information whistleblowers provide. This independent process ensures that examinations are not overly influenced by whistleblowers, who have a financial stake in the outcome of examinations; that the identity of a whistleblower is not disclosed; and that taxpayers receive fair and defensible examinations.

One mechanism through which the Whistleblower Office communicates program progress and outcomes to the whistleblower community is its annual report to Congress, which outlines the program's operations for a given fiscal year. This report, which is required by the act that established the Whistleblower Office, is to include an analysis of the program's operations and outcomes and any legislative or administrative recommendations on how to improve the program. The act does not specify what data IRS should include in the report. The reports issued to date contain limited data on claims submitted to the expanded whistleblower program. For example, the 2010 annual report, the most recent report available, included the number of whistleblowers and the number of taxpayers identified but did not provide data on the time taken for claims to move through the process or specific information on rejected claims. The lack of such data limits Congress's ability to effectively oversee the program. Reporting such additional data could also improve the transparency of the program, which may result in additional whistleblowers coming forward.

As IRS begins paying awards under the expanded whistleblower program, some in the whistleblower community are frustrated by issues that they see as unfair to whistleblowers. For example, according to whistleblower attorneys we spoke with, net operating loss (NOL) carryforwards remain an issue with the whistleblower program because they are excluded from the definition of collected proceeds. If a whistleblower's information results in a reduction in an NOL, IRS may not realize a financial benefit for years, until the company has a positive tax liability. If the NOL is not exhausted within 10 years or the taxpayer goes bankrupt, IRS may never realize a financial benefit. When whistleblowers bring information to IRS, they may not know the NOL position of the taxpayer on whom they are blowing the whistle. According to whistleblower attorneys, denying an award because a targeted taxpayer has an NOL carryover is inherently unfair if IRS eventually receives a financial benefit when the NOLs are exhausted. Some of the attorneys noted that this issue may discourage whistleblowers from coming forward because it adds uncertainty to the process and may make submitting a claim not worth the risks to their careers. IRS officials told us that they plan to develop further guidance on collected proceeds and NOLs.
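A stylized example of the attorneys' concern, with dollar figures of our own choosing rather than from any actual claim: suppose a whistleblower's information leads IRS to disallow $20 million of a company's $50 million NOL carryforward.

\[
\text{NOL before} = \$50\text{M};\qquad \text{disallowed} = \$20\text{M};\qquad \text{NOL remaining} = \$30\text{M}
\]
\[
\text{additional tax collected today} = \$0 \;\Rightarrow\; \text{collected proceeds} = \$0 \;\Rightarrow\; \text{award payable} = \$0
\]

The disallowance raises the company's tax only in later years, once the company has taxable income against which the smaller NOL is exhausted; if the company goes bankrupt first, no collected proceeds ever materialize.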
Furthermore, according to the attorneys, IRS's 28 percent tax withholding policy on expanded whistleblower program award payments could result in IRS overwithholding taxes for some whistleblowers, especially those who are represented by attorneys. Attorney fees, which may be 30 percent or more of the total award, are deductible from gross income and reduce the taxable amount of an award. (A stylized calculation illustrating this point appears at the end of this section.) IRS previously did not withhold taxes on payments made under the original whistleblower program, where awards have been capped at $10 million, but it has recently begun withholding on any awards totaling over $10,000. Overwithheld funds can be refunded when the whistleblower files a tax return for the tax year of the award, but there could be a year or more between the award payment and the refund of the overwithheld portion of the award. IRS does not have a process in place to negotiate an adjusted withholding rate with whistleblowers based on their individual circumstances because the ability to deduct attorney fees is dependent on whistleblowers paying their attorneys after receiving awards, which may not always happen. Whistleblower Office officials told us they would rather have a single rate that applies to all whistleblowers paid more than $10,000 than become involved in the independent relationship between whistleblowers and their attorneys.

Federal and state whistleblower programs we reviewed have features with potential benefits that could improve IRS's expanded whistleblower program. Whistleblower attorneys we interviewed also suggested changes they thought could improve the program. Based on these program reviews and interviews, we compiled options that could apply to IRS's whistleblower program, analyzed their potential advantages and disadvantages, and identified strategies that could mitigate the disadvantages. These options, along with the advantages, disadvantages, and mitigation strategies, are presented in table 5, approximately in order of their place in the whistleblower claim review process.

While there are potential advantages to all identified options, it is difficult to determine whether the advantages outweigh the disadvantages for many of them. For options that could involve the disclosure of tax information, Treasury guidance states that any proposed exception to section 6103 must demonstrate substantial benefits. Whether informing whistleblowers about why their claims were rejected would produce benefits, such as fewer appeals, is unclear. The Director of the Whistleblower Office did not see net benefits from developing criteria on when section 6103(n) contracts would be appropriate or desirable, due to the varying facts and circumstances of whistleblower claims. Likewise, it is unclear whether greater Whistleblower Office claim vetting would improve the efficiency of investigations and what additional resources might be needed.

Adding a qui tam provision—which would allow whistleblowers and their counsel to pursue claims independently in court after the agency chooses not to pursue them—could encourage IRS to make more timely decisions on whether to pursue a claim. However, a qui tam provision would alter the tax examination process in uncertain ways. Because a suit would likely be focused on the issue identified by the whistleblower, IRS officials said a qui tam provision might favor maximizing the whistleblower's award rather than identifying the correct tax liability.
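As a stylized illustration of the overwithholding concern described above (the award amount, fee percentage, and effective tax rate are hypothetical):

\[
\text{award} = \$100{,}000;\qquad \text{withheld at 28 percent} = \$28{,}000
\]
\[
\text{attorney fee at 30 percent} = \$30{,}000 \;\Rightarrow\; \text{taxable amount} = \$70{,}000
\]

If the whistleblower's actual tax on the $70,000 works out to, say, $21,000 at a 30 percent effective rate, roughly $7,000 is overwithheld and remains with IRS until the whistleblower files a return for the year of the award, which could be a year or more later.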
The goal of the expanded whistleblower program is to encourage whistleblowers to come forward with information on substantial tax underreporting that, collectively, could help IRS reduce the tax gap and encourage greater voluntary compliance. For the program to be successful, whistleblowers need to have confidence in the program's processes and outcomes. IRS's claim review process is designed to ensure the integrity of the program, and the many steps involved can take years to complete. Some of the steps in the process are necessarily outside the Whistleblower Office's control in order to, for example, protect the independence of examinations and avoid superseding other enforcement priorities. However, without more complete data about claim processing time and outcomes, IRS has limited information about the efficiency of the program. Such data could help IRS management assess the efficiency of current processes and evaluate potential improvements.

In addition to collecting more complete data, establishing time targets for all operating division initial reviews and following up on claims that exceed these targets could serve to indicate the priority whistleblower claims should receive, set expectations for the length of time they should generally take to review, and focus attention on claims exceeding time targets. Other steps could improve whistleblower submissions and reporting to Congress. Collecting additional information on Form 211 could aid IRS in evaluating whistleblowers' credibility and perhaps speed up the claim review process. Including more information in the annual Whistleblower Office report to Congress could enhance Congress's ability to oversee the program and increase public confidence in the program, which could encourage more whistleblowers to submit claims.

To improve the effectiveness of IRS's expanded whistleblower program, we recommend the Commissioner of Internal Revenue direct the Whistleblower Office Director to take the following seven actions:

• record time-in-step information for all claims by identified taxpayer in E-TRAK;
• adjust E-TRAK's tracking feature to more accurately count the number of days claims remain in each step;
• track the reasons for claim rejections by broad categories;
• track the reasons claims are listed as suspended by broad categories;
• establish a process by which the Whistleblower Office routinely follows up on claims that have been in the operating division SME initial review step more than a targeted number of days;
• redesign Form 211 to include stand-alone questions on the following information: the relationship of the whistleblower to the target taxpayer; the employer of the whistleblower; whether the whistleblower has submitted the information to any other federal or state agencies; and whether the whistleblower has included all information relevant to the claim; and
• provide additional summary statistics in future annual reports to Congress, including data on the length of time claims remain at each step of the review process, data on the length of time from claim receipt to payments, reasons for claim rejections, aggregate information on awards paid, and the total amount of whistleblower payments.

Further, we recommend that the Commissioner of Internal Revenue direct the Commissioners of LB&I and TE/GE to develop targets for how long SME reviews should take before being flagged for follow-up.
We provided a draft of this report to the Commissioner of Internal Revenue and offered other agencies we spoke with the opportunity to comment on the draft. IRS and SEC provided technical comments, which we incorporated into the report as appropriate. We received written comments from IRS’s Deputy Commissioner for Services and Enforcement, which are reprinted in appendix II. The Deputy Commissioner stated that IRS generally agreed with our recommendations and said it would incorporate the recommendations as IRS continues to make improvements to the operating processes and procedures of the whistleblower program. The Deputy Commissioner noted, however, that resource availability could affect the implementation of recommended improvements. He stated that recommended modifications to E-TRAK to more accurately reflect program information will be considered as part of an overall evaluation of E-TRAK adjustments and enhancements, which will begin in the near future, and that IRS would make the appropriate improvements as feasible given resource constraints and competing priorities. Also, the Deputy Commissioner agreed to consider whether time targets for operating divisions are appropriate as part of IRS’s efforts to ensure that subject matter experts’ initial review of whistleblower cases is completed in a timely manner. IRS will consider including additional summary statistical information in its annual report to Congress but did not specify what information. We acknowledge that resources must be weighed when planning improvements to the whistleblower program, but without implementing the recommendations in this report, IRS risks not being able to maximize the program’s effectiveness. Collecting more data on review timeliness and outcomes and establishing time targets could help IRS make more effective decisions on allocating its resources and aid its ongoing program assessment. Congress has expressed concern about the limited data available about the whistleblower program, and including more information and data in the Whistleblower Office annual report could improve oversight of, and increase confidence in, the program. As agreed with your offices, unless you publicly release the contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To assess how the Internal Revenue Service (IRS) manages the expanded whistleblower program, including communicating within IRS, we reviewed the Tax Relief and Health Care Act of 2006, which required that IRS establish the Whistleblower Office and administer the expanded award program; reviewed IRS documents on the whistleblower program, including Internal Revenue Manual section 25.2.2, which outlines roles and responsibilities in the expanded whistleblower program; and reviewed GAO’s body of work on internal control standards.
We also interviewed staff from the IRS Whistleblower Office; representatives from the three business operating divisions—Small Business/Self-Employed, Large Business and International, and Tax Exempt and Government Entities—that handle whistleblower claims; and representatives from other IRS divisions—Chief Counsel and Criminal Investigation—that are part of the whistleblower process. We also spoke with nine attorneys who represent tax whistleblowers to determine the concerns of whistleblowers regarding the length of time the whistleblower claim review process takes. Seven of these attorneys were a nongeneralizable sample of attorneys recommended by IRS as frequent representatives of whistleblowers submitting claims to the whistleblower program. Whistleblower attorneys have a clear financial interest in the outcome of whistleblower claims. However, interviewing them allowed us to obtain broad viewpoints of the IRS whistleblower program while keeping whistleblowers’ identities confidential. To report statistics on whistleblower claims, we analyzed data from the Whistleblower Office’s E-TRAK system. We found that the data generated from E-TRAK on claim status were sufficiently reliable for the purposes of our report. To evaluate how IRS communicates with whistleblowers and the public, we reviewed Internal Revenue Code section 6103, which governs the protection of tax information. We interviewed staff from the IRS Whistleblower Office and the operating divisions and other offices that are part of the whistleblower process. We interviewed the attorneys for their opinions on how IRS communication procedures affect whistleblowers and the processing of whistleblower claims. We also spoke with the National Taxpayer Advocate to identify potential privacy concerns for targeted taxpayers. To determine what lessons, if any, can be learned from IRS’s and whistleblowers’ past experiences with the Whistleblower Office as well as other governmental efforts that could improve the IRS whistleblower program, we identified federal and state programs that were similar to IRS’s whistleblower program. At the federal level, we interviewed officials from programs that pay financial awards to whistleblowers for bringing the government information on specific issues. Specifically, we interviewed officials from the Department of Justice, which administers claims made under the False Claims Act; the Incentive Rewards Program at the Centers for Medicare and Medicaid Services; and the new whistleblower programs established under the Dodd-Frank Wall Street Reform and Consumer Protection Act at the Securities and Exchange Commission and the Commodity Futures Trading Commission. We identified states with tax whistleblower reward programs—New York, Florida, and Texas—interviewed representatives from these programs, and reviewed relevant program documents. To identify potential lessons learned from IRS’s past experiences, we spoke with IRS officials and attorneys who represent tax whistleblowers and reviewed academic literature on tax whistleblowers. From these interviews and document and literature reviews, we created a list of options and asked IRS and the whistleblower attorneys for their views on the advantages and disadvantages of these options in the context of the IRS whistleblower program. We conducted this performance audit from September 2010 to August 2011 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Jeff Arkin, Assistant Director; Amy Bowser; Jeffrey Niblack; Danielle N. Novak; and Cynthia Saunders made key contributions to this report.
The Tax Relief and Health Care Act of 2006 expanded the Internal Revenue Service's (IRS) whistleblower program, increasing rewards for submitting information on others' tax underpayments to up to 30 percent of collected proceeds. The expanded program targets tax underpayments over $2 million and could reduce the gap between taxes owed and taxes paid. IRS's Whistleblower Office has received over 1,300 submissions qualifying for this new program since 2007. GAO was asked to assess (1) how IRS manages the expanded program, (2) how IRS communicates with whistleblowers and the public, and (3) any lessons from IRS's or other government whistleblower programs that could improve IRS's expanded whistleblower program. GAO analyzed IRS documents and data and interviewed IRS officials, whistleblower attorneys, and federal and state whistleblower program officials. Whistleblower claims can take years to go through the IRS review and award determination process. As of April 2011, about 66 percent of claims submitted in the first 2 years of the program, fiscal years 2007 and 2008, were still in process. According to IRS officials, claims can take years to process because IRS must take various steps to ensure the integrity of claim reviews and resulting taxpayer examinations. Further, taxpayers subject to examination can exercise rights that can add years to the process. IRS does not collect complete data on the time each step takes or the reasons claims are rejected. Without such data, IRS may be unable to identify potential improvements to claim processing efficiency. Furthermore, not all the IRS divisions that review whistleblower claims have time targets for their subject matter expert reviews. Nor does the Whistleblower Office have a systematic process to check in with the divisions about the time taken for their initial reviews. IRS is limited in what information it can share with whistleblowers about the status of claims because of statutes protecting the privacy of tax information. For example, because IRS cannot disclose if it is examining a taxpayer, it cannot inform whistleblowers on the progress of their claims or the reasons their claims are rejected. One mechanism through which the Whistleblower Office can communicate program results is its mandated annual report to Congress. However, the most recently released report, for fiscal year 2010, did not contain information on case processing times or specific data on why IRS rejected claims. Collecting additional data and including it in the report could improve the transparency of the program and Congress's ability to oversee it. Federal and state whistleblower programs have features with potential benefits that could improve IRS's expanded whistleblower program, including options that increase interaction or information shared with whistleblowers and options that attempt to improve the accountability for claim processing. While there are potential advantages to all identified options, it is difficult to determine if the advantages outweigh the disadvantages for many options. Furthermore, IRS would be limited by taxpayer data protections in implementing some of the options. GAO recommends that IRS collect more information—including data on the time each step takes for all claims and reasons for claim rejection—in its claim tracking system, establish a process to follow up on claims that exceed review time targets, and include more information on these issues in its annual reports to Congress.
In written comments on a draft of this report, IRS generally agreed with our recommendations.
The cornerstone of federal hiring is its merit basis. Congress has retained the principle of appointment by merit throughout its various amendments and compilations of civil service laws. In enacting the Civil Service Reform Act of 1978, Congress reiterated the importance of merit in hiring by including a merit principle, which requires that “[R]ecruitment should be from qualified individuals from appropriate sources in an endeavor to achieve a work force from all segments of society, and selection and advancement should be determined solely on the basis of relative ability, knowledge, and skills, after fair and open competition which assures that all receive equal opportunity.” OPM is responsible for ensuring that the personnel management functions it delegates to agencies are conducted in accordance with merit principles and the standards it has established for conducting those functions. In January 1996, acting under the authority of Public Law 104-52, OPM delegated examining authority to federal agencies for virtually all positions in the competitive service. The delegated examining authority requires agencies to conduct competitive examinations that comply with merit system principles, other personnel-related laws, and regulations as set forth in OPM’s Delegated Examining Operations Handbook. Even though the majority of the civilian workforce obtained positions through the open competitive service examination process, certain positions are in the excepted service and are excepted from the competitive examination process. The competitive hiring process, which is described in more detail in appendix I, is shown in figure 2. The number of new hires has increased substantially since the mid-1990s—rising from about 50,000 employees in 1996 to over 143,000 employees in 2002. Hiring in the mid-1990s declined because many agencies were downsizing and did not need to fill positions. With the slowdown in downsizing and the increasing numbers of personnel retiring, agencies are increasingly hiring new employees. Prior to fiscal year 2002, about one-third of all hires were made by DOD. In 2002, the largest federal hirer was the Department of Transportation, primarily the Transportation Security Administration. Table 2 shows total new hires by department in fiscal year 2002. The federal government’s hiring is expected to continue to increase. In 2003, the President’s budget called for approximately 27,000 additional full-time equivalent federal civilian workers in the executive branch. This follows a 36,000 increase in full-time equivalent positions in fiscal year 2002. It is widely recognized both within government and the private sector that the federal hiring process is lengthy and cumbersome and hampers agencies’ ability to hire the people they need to achieve their agency goals and missions. Numerous studies over the past decade by OPM, MSPB, NAPA, the Partnership for Public Service, the National Commission on the Public Service, and GAO have noted problems with the federal hiring process. Our October 2002 survey of HR directors at 24 major departments and agencies indicated that 21 of 24 said that the time needed to fill positions in their agencies was a moderate to very great problem. Moreover, directors at 13 of those agencies reported that the time to hire was a great to very great problem. Our October 2001 survey showed that 22 directors reported that time to hire was a moderate to great problem.
Nearly all (22 of 24) of the HR directors we met with said the lengthy and cumbersome hiring process is a major factor that affects or increases the time needed to fill positions. HR directors cited examples of the problems the lengthy process creates. For example, an HR director of a major federal department noted that thousands of applicants had responded to nationwide openings for a critical occupation at a number of locations. However, because it took so long to manually process the applications, only 1 in 20 applicants was still interested in the job when notified of selection. Another HR director noted that many managers, supervisors, and job applicants do not understand the rules and procedures governing federal employment. She said that this lack of expertise, combined with the complicated process, means the agency often loses out in competition with the private sector because of its inability to make timely job offers. Another HR director told us that a significant factor that hampers hiring is the paperwork-intensive hiring process that continues from application, rating and ranking of applicants, and production of best-qualified lists, through to the “17 forms” that a new hire must complete before being brought onboard. As noted above, nearly all HR directors and others report that the time to hire is too long for most federal hires. Comprehensive department or governmentwide data are not available; however, in fiscal year 2002, OPM compiled and analyzed data on time-to-hire and found that it typically took 102 days for agencies to fill a vacancy using the competitive process. OPM defined the time-to-hire time frame as the period from when the request to hire or fill a position was received in the HR office to the appointment of an applicant to the position. Additional time might be needed for a manager to obtain approval for the requested hiring action at the beginning of the process or for the new employee to receive a security clearance at the end of the process. Other organizations have also noted problems with the lengthy, cumbersome federal hiring process. In July 2002, NAPA reported that federal “hiring remains a slow and tedious process.” The report noted that “Many managers are attempting to rebuild a pipeline of entry level employees in this very competitive labor market, yet current hiring methods do not keep pace with the private sector.” In September 2002, MSPB said that the federal hiring process has a number of key problems including “overly complex and ineffective hiring authorities” and “inadequate, time-consuming assessment procedures.” In November 2002, OPM in its strategic plan for 2002 through 2007 stated, “There is a general perception that our hiring process takes too long and may not provide well-qualified candidates.” In January 2003, the National Commission on the Public Service said, “Recruitment to federal jobs is heavily burdened by ancient and illogical procedures that vastly complicate the application process and limit the hiring flexibility of individual managers.” Not only does the current hiring process not serve agencies and managers well as they seek to obtain the right people with the right skills, but applicants can be dissuaded from public service by the complex and lengthy process. According to a poll commissioned by the Partnership for Public Service, “many people view the process of seeking federal employment as a daunting one.
Three-quarters of non-federal workers say making the application process quicker and simpler would be an effective way of attracting talented workers to government.” As many of these and other studies have noted, and as many HR directors noted in our interviews, nearly all parts of the competitive hiring process hamper effective and efficient federal hiring. Key problem areas include the following: outdated and cumbersome procedures for defining a job and setting pay that are not applicable to the jobs and work of today; unclear, unfriendly job announcements that cause confusion, delay hiring, and serve as poor recruiting tools; an ineffective key assessment tool and ineffective hiring programs used for several entry-level positions; the time-consuming convening of panels and manual rating and ranking of applicants to determine the best-qualified applicants; and numerical rating and ranking and the “rule of three,” which limit the choice of applicants and are viewed as ineffective. OPM and the agencies we studied have taken steps to address some of these hiring obstacles. Specifically, five agencies we examined—USGS, Army, Census, ARS, and FS—took systematic and comprehensive approaches that helped to transform their process-oriented hiring systems to ones that are focused on meeting their agencies’ goals and missions. The USGS approach was to focus on automating its hiring process for all of its occupations, except research Senior Executive Service positions, in order to reduce hiring time, increase the number of applicants, and better serve its internal and external customers. Army took a data-driven approach—it developed automated tools to identify weaknesses in its hiring process and identified an approach, including automation, to overcome them. Census’s approach, in reaction to the need to quickly hire 500 specialists for the 2000 Census, was to work with OPM to jointly develop an automated hiring system for three mission-critical occupations and later to work toward integrating hiring for all its occupations into its parent organization’s automated hiring system. And, as discussed later, OPM also identified hiring improvements as a critical goal in its strategic plan and has a multifaceted hiring initiative under way. ARS and FS implemented a pilot project that demonstrated a more effective way to rate and rank candidates for positions. The following sections describe each of these problems in more detail and discuss some specific actions under way by agencies and OPM to begin to address them. The process of defining a job and determining pay is complex and antiquated, according to HR directors and experts. Defining the job and setting pay must be based on federal job classification standards, which are set forth in the Classification Act of 1949. The classification process and standard job classifications were generally developed decades ago when typical jobs were more narrowly defined and, in many cases, were clerical or administrative. However, today’s knowledge-based organizations’ jobs require a much broader array of tasks that may cross over the narrow and rigid boundaries of job classifications. The federal job classification process not only delays the hiring process, but more important, the resulting job classifications and related pay might not match the actual duties of the job. This mismatch can hamper efforts to fill the positions with the right employees.
Once management decides to fill a vacant position, or create a new position, the HR office is called upon to see if a position description exists. If a position description does not exist or is not accurate for the vacant position, a position description must be completed. Such a description documents the major duties, responsibilities, and organizational relationships of a job and includes, among other things, the knowledge required for the position, supervisory controls, the complexity and nature of the assignment, and the scope and effect of the work. Once the job description is complete, the job is classified by matching the duties and responsibilities to the General Schedule requirements. The Classification Act of 1949 provides a plan for classifying positions and sets out 15 grade levels. The law expresses these grade levels in terms of the difficulty and level of responsibility for a specific position. OPM develops standards that must be consistent with the principles in the Classification Act of 1949. The classification system categorizes jobs or positions according to the kind of work done, the level of difficulty and responsibility, and the qualifications required for the position, and serves as a building block to determine the pay for the position. Today’s knowledge-based organizations’ jobs require a much broader array of tasks that may cross over the narrow and rigid boundaries of job classification standards and make it difficult to fit the job appropriately into one of the over 400 occupations. According to a recent OPM study, a key problem with classification is that, under present rules, characteristics such as workload, quality of work, and results are not classification factors. As reported in a January 2003 report of the National Commission on the Public Service, OPM’s director has noted that “continued reliance on this antiquated system is comparable to insisting that today’s offices use carbon paper and manual typewriters.” Furthermore, NAPA in its July 2002 report for the National Commission on the Public Service concluded that classification and compensation systems must be based on work and performance rather than position. The NAPA panel recommendations included abolishing the General Schedule and developing a modern system for defining and valuing work, which could help to make the hiring process more results-oriented and efficient. The National Commission on the Public Service recommended that operating agencies be given more flexible personnel management systems. The commission recommended abolishing the General Schedule and, as a default position, recommended a broadband system under which the 15 pay grades and salary ranges would be consolidated into six to eight broad bands with relatively wide salary ranges. Some agencies have automated the complicated classification process to reduce the time it takes to carry out this task. For example, the Army created a centralized database that gives Army HR managers access to active position descriptions and position-related information to help with the classification process. In addition, OPM has revised the standards for several job series, including health care professions and law enforcement, to make them clearer and more applicable to the current duties and responsibilities of the occupations. But such steps are only partial solutions to the classification issue. OPM points out that the classification standards and process need to be reformed.
Changes to the Classification Act of 1949 are needed to fundamentally alter how jobs are defined and pay is set. Specifically, OPM believes that the time may have come for substantive reform that brings the era of the General Schedule classification system to a close. OPM recognizes the need to maintain the General Schedule in the absence of an alternative and a well-managed transition to any new system. Several HR directors we interviewed for this study cited the content of job announcements as a factor that hampered or delayed the hiring process. These HR directors noted that job announcements are frequently incomprehensible and make it difficult for applicants to determine what the jobs require, and therefore do not serve as effective recruiting tools. A February 2000 MSPB study stated that federal job announcements generally appeared to be written for people already employed by the government and that the use of jargon and acronyms is a common problem. The study noted that some announcements were lengthy, difficult to read on-line, and only gave brief or vague descriptions of the duties to be performed. Vague job descriptions make it difficult for applicants to describe how their knowledge, skills, and abilities are related to the job. MSPB also noted that almost no announcements included information on retirement and other benefits, such as vacation time and medical and health insurance, which might entice people to apply. The study recommended that OPM and agencies improve how vacancy announcements are posted on the Internet. The report said making them more visually appealing, informative, and easy to navigate could also make announcements more effective as a recruiting tool. In a December 2002 report on federal vacancy announcements, MSPB reported that its review of the quality of 100 vacancy announcements posted on USAJOBS indicated that 53 percent were poor, 45 percent were acceptable, and only 2 percent were good. The problems in the vacancy announcements included poor organization and readability, unclear job titles and duties, vague or restrictive qualification standards, and the use of negative language or tone that might deter many qualified candidates. Both agencies and OPM are taking some steps to address this problem. For example, the Department of Health and Human Services rewrote one of its typical vacancy announcements for budget analysts to make it more understandable and appealing to applicants outside the government. Instead of the typical language such as “incumbent is responsible for monitoring the results of budget execution and formulation input from six regional budget offices in coordination with the controller,” the announcement’s language began with “For the energetic individual who wants a challenging career with growth and advancement opportunities, we have positions available that will challenge you to grow and learn at the cutting edge of the nation’s health and human service policy and provide vital information and support required by our policy makers.” In addition, the job announcement was posted on a private sector job search site and in The Washington Post employment section. This approach garnered more than 100 qualified applicants per position, compared to 20 qualified applicants per position under the traditional announcements on the USAJOBS Web site.
To address unclear job announcements, OPM has initiated an interagency project to modernize federal job vacancy announcements, including providing guidance to agencies to enhance announcements and instituting a multipronged approach to using e-government technology to assist job seekers and employees governmentwide. Specifically, OPM has improved the Web site to strengthen the job search engine, rewritten the USAJOBS by Phone system to improve speech recognition, and redesigned the way vacancy announcements appear on the Web site. Currently, OPM is seeking contractor support for its USAJOBS Web site to make it easier and quicker for people to find federal jobs and to enhance the site’s “eye-catching” appeal. Several HR directors and human capital experts have found problems with candidate assessment tools, particularly those associated with filling entry-level professional and administrative occupations covered by the Luevano Consent Decree of 1981. In addition, both OPM and MSPB noted in studies the need to develop new assessment tools that are more efficient and are valid predictors of future job performance for occupations and higher-grade levels not covered by the Luevano decree. Primary responsibility for developing assessment tools rests with the agencies, but frequently agencies do not have the expertise or resources to develop them. In addition to the problems found with assessment tools, two hiring authorities set forth in the Luevano Consent Decree—Outstanding Scholar and Bilingual/Bicultural—may not be merit based. Several HR directors we met with and a NAPA study found that the Administrative Careers with America (ACWA) self-rating schedule examination procedure currently used to competitively fill most positions covered by the Luevano decree was cumbersome, delayed hiring, and often did not provide quality candidates. The Luevano decree called for eliminating the use of the Professional and Administrative Career Exam (PACE) and required replacing it with alternative examination procedures. The ACWA exam, which was developed by OPM for Luevano positions, was generally administered by OPM to applicants. Agencies entered into reimbursable contracts with OPM to receive lists of candidates who passed the exam. OPM has now delegated authority to administer the ACWA exam to agencies’ delegated examining units. In addition, some exams have been developed to replace ACWA for a few occupations. Agency managers criticized the ACWA examination because they said it is not merit based, according to a NAPA study. The ACWA rating-schedule examination contains 157 multiple-choice questions that distinguish among qualified applicants on the basis of their self-rated education and life experience, rather than on their relative knowledge, skills, and abilities for the vacant position. The study reported that agencies said the ACWA examination is not relevant to specific jobs and occupations and therefore does not result in lists of “qualified individuals … solely on the basis of relative ability, knowledge, and skill”—a key merit system principle. Consequently, many agencies reported that the primary reason they did not use the ACWA test was their past experience with the quality of the candidates. In a more recent study, NAPA recommended that the ACWA examination system be terminated and agencies be permitted to hire for professional and administrative occupations using techniques that have proven more operationally efficient and effective in meeting diversity shortfalls.
Also, MSPB recommended that OPM develop new assessment tools for the occupations covered under the Luevano Consent Decree. HR directors and other officials cited numerous problems with the ACWA exam. For example, the Deputy Assistant Commissioner for Human Resources at the Social Security Administration said that the ACWA examination process used for its mainstream entry-level positions covered by the Luevano Consent Decree—claims representative, computer specialist, criminal investigator, and regional support position—is cumbersome, bureaucratic, and labor intensive. In another example, officials of a major military installation said that recruiting accountants and financial managers was hampered by the ACWA examination. They noted that managers believed the test was not an effective screen to identify quality candidates—a theme consistent with the NAPA study. They also pointed out that applicants were “turned off” to federal employment by the lack of relevance of many of the exam questions to the specific jobs for which they were applying. Agencies cited the Outstanding Scholar program as a quick way to hire quality college graduates for positions covered by the Luevano decree. The Outstanding Scholar program and the Bilingual/Bicultural program were authorized by the Luevano Consent Decree as supplemental tools to competitive examination. These programs were aimed at addressing the underrepresentation of African-Americans and Hispanics in the workplace. Many HR directors and officials viewed the Outstanding Scholar program as a way to hire quality candidates without getting involved in the complexities of the OPM examination process. However, OPM and MSPB have commented that this is an inappropriate use of the authority. This hiring authority uses both baccalaureate grade point average and class standing as eligibility criteria for appointment. It allows candidates who meet the eligibility criteria to be directly appointed without competition and operates without regard to veterans’ preference or the rule of three (see the discussion of the rule of three and veterans’ preference later in this report). MSPB has noted, however, that eligibility criteria based on grade point average and class rank are highly questionable as valid predictors of future job performance and that they unnecessarily deny employment consideration to a large segment of the applicant pool who meet basic job qualification requirements. MSPB also has concerns about the Bilingual/Bicultural program because it permits the hiring of individuals who need not be the best qualified and avoids veterans’ preference. This hiring program permits an agency to directly hire an applicant who obtained a passing examination score, without further regard to rank, when the position to be filled requires bilingual or bicultural skills and the applicant has the requisite skills. MSPB has also recommended abolishing both the Outstanding Scholar and Bilingual/Bicultural programs because other competitive hiring methods have been more effective in hiring minorities and because the programs are not merit based. For positions that are not covered by the Luevano Consent Decree, agencies typically examine candidates by rating and ranking them based on their experience, training, and education, rather than testing them. MSPB noted that the government’s interest is not well served if agencies do not have the resources and expertise to make high-quality case examining determinations.
According to MSPB, agencies’ use of computer-based assessments is increasing. MSPB notes this has implications for OPM because the validity of computer-based assessments and ranking is critical to ensuring that hiring is based solely on merit. Computer-based assessments would also have implications for the category rating systems that are now permitted by the Homeland Security Act of 2002. In general, both OPM and MSPB are concerned about the validity of assessment tools for all occupations and advocate that agencies improve their assessment instruments. Under a largely decentralized approach, agencies’ delegated examining units make decisions on which assessment tools or methods to use and on the development of new assessment tools. However, experts have noted that there has been a lack of specialized experience in many agencies to develop and maintain valid, effective applicant assessment methodologies. OPM told us that because of budget constraints, it has spent more of its resources on services for which agencies are willing to pay rather than on providing tools that it might have believed to be more valuable, like assessment instruments. OPM also noted that many agencies do not have the technical expertise, funding, or time to develop valid assessment tools. MSPB concluded in a recent report that OPM is a logical organization to which agencies should be able to turn for help in developing valid assessment tools and systems, but it is not funded to provide that assistance except on a reimbursable basis. OPM recognizes that it must do more to improve assessment tools. In its fiscal year 2003 performance plan, OPM included a strategic objective that, by fiscal year 2005, governmentwide hiring selections are to be based on comprehensive assessment tools that assess the full range of competencies needed to perform the jobs of the future. A key problem noted by many HR directors is that much of the hiring process is done manually. Among the most frequently cited factors that hampered or delayed hiring were the logistics of convening assessment panels and the time-consuming process of manually rating and ranking job applicants. Twelve agency HR directors we interviewed commented that manually rating and ranking candidates, or the panel process, was a significant cause of delay in hiring. In addition, time-consuming and paperwork-intensive record keeping is needed to document the rationale of assessment panel ratings. Prior to assessing applicants based on their relative merits, agencies must conduct a screening process to determine if applicants meet eligibility requirements (e.g., are U.S. citizens) and the basic or minimum education or work experience qualifications that OPM established for the position. In a manual hiring system, staff members would have to review all the applications and document why an applicant did or did not meet minimum qualifications. If there are a large number of applicants, carrying out this process can be time-consuming. Once the applicants’ eligibility is determined, agencies typically undertake a labor-intensive effort to establish and convene assessment panels and manually rate and rank the candidates based on their relative merits. According to one of the HR directors we met with, the logistics of setting up an assessment panel meeting makes for long delays in the hiring process, in some cases up to 1 month.
Some of the delay is due to assembling the appropriate managers and subject matter experts, coordinating their availability, and factoring in the exigencies of other demands and travel time. Once the panel is formed, the panel sorts through all of the applicants’ paperwork, assesses the applicants, and determines a numerical score for each applicant by rating the education and experience described by the applicant against the evaluation criteria in the crediting plan for the position. At this point, any applicable veterans’ preference points are added to the applicant’s score. As mentioned previously, the Homeland Security Act of 2002 permits an agency to use a category rating system that might make assessing candidates less complex and time-consuming. Automation has the potential to streamline operations by electronically rating and ranking applicants, or placing them in quality categories, eliminating the need to form assessment panels, and greatly reducing the paperwork burden associated with manual assessments. An automated system creates an easily accessible audit trail so that managers and HR staffs can document their decisions. In addition, an automated system can electronically determine if an applicant meets the basic qualifications and electronically notify the applicant of his or her eligibility for the job for which he or she applied. Nineteen of the 24 agency HR directors we met with said they had automated or planned to automate at least a portion of their hiring processes. Some of these agencies have automated or planned to automate the rating and ranking processes. Agencies have used private vendors or have contracted to use OPM’s USA Staffing automated hiring package. USGS automated its hiring system and estimated that it cut hiring time from the close of a job announcement to issuing a job certificate from 30 to 60 days to under 7 days. USGS’s automated system is a computerized employment application processing system, which automates many of the functions and tasks of the competitive examination process. It electronically prescreens applicants and rates and ranks applicants according to specified job-related criteria. This eliminates the need to convene rating and ranking panels and reduces the paperwork and administrative burden associated with documenting manual rating and ranking. The system also electronically refers the job certificate to the selecting official, who has the rating and ranking data, résumés, and other information on his or her desktop, an improvement in efficiency. Furthermore, it makes recruiting data available on-line to authorized staff members. Applicant benefits include a user-friendly on-line application and timely feedback on the status of applications. NAPA chronicled the success of USGS’s automated system in a 2001 report. The report notes that 1 year after being implemented, “it is clear that the program is a huge success.” The report lays out the successes, based on USGS information, to include a significant reduction in processing time, a reduction of 1,800 staff days of work, and a nearly tenfold increase in the number of applicants for many of its announcements. Census also automated its hiring process. The impetus for Census to change from its manual hiring system to an automated system for the occupations covering the majority of its ongoing hiring needs—information technology specialist, statistician, and mathematical statistician—was the large number of positions (500) and the urgent hiring needed for the 2000 Census.
The agency put together a team of managers, human resource staff, and programmers and worked with OPM to automate hiring for these three occupations; the automated system, operated through OPM, was in place in 1998. Under this system, OPM posts continuously open vacancy announcements for multiple grade levels. As part of a contract with Census, OPM receives the applications, maintains an inventory of applicants on its system, and can rate and rank the applicants and generate a job certificate for Census within 3 days of the request for a certificate. Since there is no closing date for job announcements, many phases of the typical federal hiring process have been completed in advance of a Census request for a certificate. Census managers provide quality-ranking factors to OPM when they request a job certificate. In addition, Census managers have electronic access to information on the applicants because OPM updates Census’s database daily. Census officials told us that additional applicant information collected by recruiters on college campuses provides managers pertinent skill data, which could eliminate personal interviews. Census estimated that time to hire declined from 3 to 4 months to a week or less. For other occupations, Census continues to use its manual competitive examination hiring process to hire people from outside the government. One of the largest obstacles to the federal hiring process mentioned in our interviews with HR directors was the rule of three. Specifically, 15 of the 24 HR directors we met with raised concerns about the negative impact of the rule of three on hiring. Once the panel has rated and ranked the candidates and applied applicable veterans’ preference points, it refers a sufficient number of candidates to permit the appointing officer to consider three candidates who are available for appointment. The selecting official is required to select from among the top three ranked candidates available for appointment—this is the rule of three. If a candidate with veterans’ preference is on the list, the selecting official cannot pass over the veteran and select a lower-ranking candidate without veterans’ preference unless the selecting official’s objection to hiring the veteran is sustained by OPM. The Homeland Security Act of 2002, enacted in November 2002, now permits agencies governmentwide to use category rating in lieu of numerical ranking and adherence to the rule of three. OPM currently is drafting implementing guidance for this provision. A more complete description of category ranking is included in appendix II. It will be important for agencies to adopt category ranking to improve their hiring processes. Choosing from among the top three candidates is problematic for a variety of reasons. MSPB noted in its study on the rule of three that “the examination procedures underpinning this hiring rule vary in their ability to make fine distinctions among candidates.” Further, veterans’ preference points are added to the imprecise numerical score generated through the panel’s examination process, which can result in veterans being ranked among the top three candidates. The result can be several candidates with the exact same score. When more than three candidates have the same score, examining offices may need to break the tie, usually by selecting three of the candidates at random.
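The rule of three mechanics described above (numerical scores from a crediting plan, added preference points, a three-name certificate, random tie-breaking, and the limit on passing over a preference eligible) reduce to a small amount of logic. The following sketch is ours, with assumed scores and names; it is not any agency's or OPM's actual referral system.

```python
# Minimal sketch of numerical ranking, veterans' preference, and the rule
# of three as described above; scores, points, and names are assumptions.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    panel_score: float       # numerical score from the rating panel
    preference_points: int   # veterans' preference points (0, 5, or 10)

    @property
    def final_score(self) -> float:
        # Preference points are added to the panel's numerical score.
        return self.panel_score + self.preference_points

def refer_top_three(candidates):
    """Rank by augmented score and refer three candidates, breaking exact
    ties at random, as examining offices may do."""
    ranked = sorted(candidates, key=lambda c: c.final_score, reverse=True)
    if len(ranked) <= 3:
        return ranked
    cutoff = ranked[2].final_score
    clear = [c for c in ranked if c.final_score > cutoff]
    tied = [c for c in ranked if c.final_score == cutoff]
    random.shuffle(tied)  # random tie-break among identical scores
    return clear + tied[:3 - len(clear)]

def selectable(referred):
    """A non-veteran ranked below a preference eligible may not be chosen
    unless OPM sustains a pass-over objection (not modeled here)."""
    return [c for c in referred
            if c.preference_points > 0
            or not any(v.preference_points > 0 and
                       v.final_score > c.final_score for v in referred)]

cert = refer_top_three([Candidate("Avery", 95, 0), Candidate("Blake", 88, 5),
                        Candidate("Casey", 90, 0), Candidate("Drew", 90, 0)])
print([c.name for c in cert], "->", [c.name for c in selectable(cert)])
```

In this example Blake's 5 preference points rank him second, so the tied non-veterans below him cannot be selected over him; Avery, ranked above the veteran, remains selectable.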
Since current assessment tools cannot make fine distinctions between applicants, encouraging selection from as many qualified candidates as is reasonable enhances merit-based hiring. MSPB conducted an in-depth study of the rule of three and its interaction with veterans’ preference. MSPB concluded that, given the limits of the examining process to predict future job performance, the curb on the number of candidates from which managers may select does not represent good hiring policy. It also noted that the rule of three’s original purpose was to provide choices. For several years, federal human capital experts have said that categorical rating or grouping could provide an alternative to the rule of three and expand the number of candidates from which a selecting official could choose while protecting veterans’ preference. Both NAPA and MSPB supported abolishing numerical ranking and the rule of three and replacing them with category rating that would allow officials to select among candidates placed in a high-quality category. However, candidates with veterans’ preference placed in the high-quality category would be hired before candidates without veterans’ preference. OPM also supported allowing agencies to use category rating in lieu of numerical ranking and the rule of three. The Department of Agriculture’s ARS and FS tested and implemented category rating in lieu of numerical ranking and the rule of three under an OPM demonstration project. The final 5-year evaluation of the project showed that (1) the number of candidates per job announcement increased, (2) more candidates were referred to managers for selection, (3) hiring speed increased, and (4) there was greater satisfaction with the hiring process among managers. On average, there were from 60 percent (ARS) to 70 percent (FS) more applicants available for consideration under the demonstration project quality grouping procedure than under the standard rule of three and numerical ranking. A higher percentage of veterans was hired at ARS, and about the same percentage of veterans was hired by FS, compared with using the rule of three process. Specifically, at ARS, 16.3 percent of all hires were veterans using categorical ranking, while just 9.5 percent were veterans using the rule of three. At ARS, the average length of time to hire was about 25 days quicker than at comparison sites. At FS, the time to hire was also quicker, but the difference was not statistically significant. Appendix II contains more information on the categorical ranking project carried out by ARS and FS. As noted previously, the Homeland Security Act of 2002, enacted in November 2002, included a governmentwide provision that OPM, or an agency to which OPM has delegated examining authority, may establish category rating systems for evaluating applicants for positions in the competitive service. Under this provision, a selecting official can select anyone placed in the top category. However, a candidate with veterans’ preference who is placed in the top category could not be passed over by a selecting official unless an objection to hiring the veteran is sustained by OPM; a minimal sketch of this grouping logic appears below. OPM is currently drafting guidance to implement this new flexibility. OPM has recognized that the hiring system needs improvement and, as pointed out earlier in this report, is taking a number of actions to address governmentwide hiring challenges.
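As referenced above, the grouping logic of category rating is easy to sketch. The following illustration is ours, with an assumed cut score and applicant data; it is not the actual ARS, FS, or OPM procedure.

```python
# Illustrative category rating: qualified applicants fall into a quality or
# an eligible group rather than being numerically ranked. Anyone in the top
# group may be selected, but a preference eligible there may not be passed
# over for a non-veteran. The cut score and applicants are assumptions.
QUALITY_CUT = 85

def categorize(applicants):
    """applicants: list of (name, score, is_veteran) tuples."""
    quality = [(n, v) for n, s, v in applicants if s >= QUALITY_CUT]
    eligible = [(n, v) for n, s, v in applicants if s < QUALITY_CUT]
    return quality, eligible

def referral_pool(quality, eligible):
    """Refer the quality group, falling back to the eligible group if the
    quality group is inadequate; veterans in the pool come first, absent a
    sustained pass-over objection to OPM (not modeled here)."""
    pool = quality if quality else eligible
    veterans = [a for a in pool if a[1]]
    return veterans if veterans else pool

quality, eligible = categorize(
    [("Ada", 92, False), ("Ben", 88, True), ("Cal", 79, False)])
print(referral_pool(quality, eligible))  # [('Ben', True)]: the veteran first
```

Unlike the rule of three, every non-veteran in the quality group becomes selectable once preference eligibles in the group are exhausted, which is how the demonstration project enlarged managers' choice while protecting veterans' preference.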
OPM’s current strategic plan includes a major objective to “Increase the effectiveness and efficiency of the Federal hiring process and make Federal employment attractive to high-quality applicants of diverse backgrounds.” To meet this objective, OPM has identified a number of strategies, including reducing regulatory burdens that hamper hiring, increasing recruitment through e-government initiatives, and identifying other governmentwide solutions to improve the hiring process. In addition, last spring OPM announced a hiring initiative that is designed to create momentum for success, build the image of public service, and fix the hiring process. A number of actions have already taken place in the first wave. In July 2002, OPM announced the development of a hiring preferred practices guide and asked agencies to contribute examples of how they had optimized existing hiring flexibilities. Also, last summer OPM held the government’s first “virtual job fair” for information technology workers, which demonstrated that critically needed staff could be hired effectively and efficiently. OPM said that in the coming months it will identify other projects and proposals that will address systemic problems associated with the hiring process. These will include deploying competency-based qualifications, improving entry-level hiring, and updating and modernizing exam scoring policy. Our surveys of HR directors in the fall of 2001 and again in the fall of 2002 showed mixed views on whether OPM helped or hindered the hiring process in their agencies. Specifically, in 2001, 13 thought OPM helped, 5 thought OPM neither helped nor hindered, and 5 thought OPM hindered their hiring processes. In 2002, 9 thought OPM helped, 9 thought OPM neither helped nor hindered, and 4 thought OPM hindered the processes. Details of our survey are included in appendix III. HR directors we talked with identified other actions that OPM took to help their departments or agencies improve their hiring processes. These actions included delegating examining authority, providing human capital expertise, and providing the USAJOBS and USA Staffing programs. The HR directors also identified areas in which OPM could take a more active role. Foremost, agencies said that OPM needed to be a more proactive resource, enhance its role as a “clearinghouse” of information, and provide more guidance and better expertise to agencies. Agencies explained that OPM needed to provide information and “best practices” associated with automating the hiring process. They also noted that OPM could do more to address key obstacles in the hiring process, including outdated classification standards and inadequate assessment tools. Improving the federal hiring process is critical as the number of new hires is expected to increase substantially to address the security needs arising from the terrorist attacks of September 11, 2001, and to replace the large number of employees expected to retire over the next few years. Agencies are responsible for maximizing the efficiency and effectiveness of their hiring processes within the current statutory and regulatory framework. Steps toward a higher-level hiring system include using a data-driven approach to identify hiring barriers and ways to overcome them. A key step includes automating the hiring process, which may drive efficiency and reduce the administrative and paperwork burden.
Innovative and best practices of model agencies need to be made available to other agencies in order to facilitate the transformation of agency hiring practices from compliance-based processes to ones focused on the agencies’ missions. While many improvements to hiring processes can be made by agencies themselves, OPM has recognized that it needs to do more to address some key governmentwide problems. OPM’s hiring initiatives are moving in a direction that will help agencies improve their hiring processes. OPM can assist agencies in improving and streamlining their hiring processes by taking a comprehensive and strategic approach. Consistent with its current efforts to improve the federal hiring process, OPM needs to take a number of specific actions to strengthen federal hiring. Accordingly, as a part of its overall hiring initiative, we recommend that OPM (1) study how to simplify, streamline, and reform the classification process; (2) assist agencies in automating their hiring processes; (3) continue to assist agencies in making job announcements and Web postings more user friendly and effective; (4) develop, and help agencies develop, improved hiring assessment tools; and (5) review the effectiveness of the Outstanding Scholar and Bilingual/Bicultural Luevano Consent Decree hiring authorities. OPM and DOD provided written comments on a draft of this report. Technical comments were provided orally by USGS and via e-mail by Census, ARS, and FS. These technical comments have been incorporated into the report. OPM generally agreed with the conclusions and recommendations in the report. However, OPM expressed several concerns with our methodology. It believes the section on the classification and position description process could be misleading because the majority of jobs are filled without this step. We agree, but note that the more important problem with the classification process is that inaccurate position descriptions and related pay determinations resulting from the job classification could hamper efforts to fill the positions with the right employees. OPM also believed that our draft missed an opportunity to hold agencies more accountable for their hiring processes. Throughout the draft, we note that agencies are primarily responsible for their hiring processes, and we provide concrete examples of what some agencies have done to improve their processes. OPM also provided several examples of actions it is taking to improve the hiring process. Finally, OPM questioned our methodology of meeting with agency HR directors to assess how well OPM is assisting agencies in improving their hiring processes. It believes that chief operating officers would provide a better perspective on agency recruiting and retention issues. While we agree these officials could provide perspective about the results of the hiring process, agency HR directors better understand and are responsible for their agencies’ hiring processes. DOD noted several areas where it believed that OPM needed to do much more to address governmentwide hiring problems. We agree that OPM should do more to improve governmentwide hiring and include several recommendations to OPM. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date.
At that time, we will send copies of this report to the Chair of the Senate Committee on Governmental Affairs; the Chairman of the House Committee on Government Reform; and the Chairwoman of the Subcommittee on Civil Service and Agency Organization, House Committee on Government Reform. We will also send copies to the Director of OPM, the Secretary of the Army, the Secretary of Commerce, the Secretary of the Interior, and the Secretary of Agriculture. We will also make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact Edward Stephenson or me at (202) 512-6806. Key contributors to this report are listed in appendix VI.

Director, Strategic Issues

The Honorable Joseph I. Lieberman
Ranking Minority Member
Committee on Governmental Affairs

The Honorable Daniel K. Akaka
Ranking Minority Member
Subcommittee on Financial Management, the Budget and International Security
Committee on Governmental Affairs

The Honorable George V. Voinovich, Chairman
The Honorable Richard J. Durbin, Ranking Minority Member
Subcommittee on Oversight of Government Management, the Federal Workforce and the District of Columbia
Committee on Governmental Affairs

The Honorable Danny K. Davis
Ranking Minority Member
Subcommittee on Civil Service and Agency Organization
Committee on Government Reform

The Honorable Dave Weldon, M.D.

Federal civil service employees, other than those in the Senior Executive Service (SES), are employed in either the competitive service, 5 U.S.C. § 2102(a), or the excepted service, 5 U.S.C. § 2103(a). The competitive service examination process is one of the processes intended to ensure that agencies’ hiring activities comply with merit principles. This includes notifying the public that the government will accept applications for a job, screening applications against minimum qualification standards, and assessing applicants’ relative competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified applicants. Federal agencies typically examine or assess candidates by rating and ranking them based on their experience, training, and education, rather than by testing them. Except as noted before, Title 5 of the U.S. Code requires federal examining offices to give job applicants numerical scores and refer candidates for employment to selecting officials based on their scores. Higher scores theoretically represent greater merit and thus improve candidates’ employment opportunities. In addition, veterans’ preference requires augmenting the scores of certain individuals because of military service performed by them or members of their families. The rule of three requires managers to select from among the top three numerically ranked candidates available for appointment. However, if a candidate with veterans’ preference is among the top three candidates, the manager cannot pass over the veteran and select a lower-ranked candidate without veterans’ preference unless the selecting official’s objection to hiring the veteran is sustained by the Office of Personnel Management (OPM). Ensuring that these objectives are met involves several basic steps and the preparation of extensive supporting documentation. Soon agencies will have greater flexibility under the competitive service examination process with the option of using category ranking.
The Homeland Security Act of 2002, enacted on November 25, 2002, contains a governmentwide provision that permits agencies to establish category rating systems for evaluating applicants by placing them in two or more quality categories based on merit. The rule of three does not apply, and selecting officials can select anyone placed in a best-qualified category. However, if a candidate with veterans' preference is placed in a best-qualified category, the veteran cannot be passed over and must be selected unless the selecting official's objection to hiring the veteran is sustained by OPM. OPM is currently drafting guidance to implement this legislation.

A Department of Agriculture demonstration project carried out by the Agricultural Research Service (ARS) and the Forest Service (FS) demonstrated that category rating, or quality grouping, can provide managers with a larger pool of applicants from which to choose than numerical ranking and the rule of three, while protecting veterans' preference. ARS and FS believed that the rule of three hampered their ability to hire the people they needed. From 1990 to 1998, ARS and FS carried out the U.S. Department of Agriculture Personnel Management Demonstration Project, authorized by OPM. The purpose of the project was to develop a recruitment and selection program for new hires that was flexible and responsive to local recruitment needs. This was the first demonstration project testing a comprehensive simplification of the hiring system for both blue-collar and white-collar federal employees. The project tested the use of category rating as an alternative hiring process. Instead of numerical rating and ranking that required selection from the highest three scorers under the rule of three, under category rating applicants meeting minimum qualification standards are placed in one of two groups (quality and eligible) on the basis of their education, experience, and ability. All candidates in the quality group are available for selection; however, if the quality group contains a veteran, the veteran must be hired unless an objection to hiring the veteran is sustained. If the number of candidates falling into the quality group is inadequate, applicants from the eligible group can also be referred to the manager for selection. As noted before, evaluations of this demonstration project showed it to be effective. Because there was no mechanism in current law to make a demonstration project permanent, innovations that were tested successfully in demonstration projects could not be implemented permanently in the testing agency unless authorized by Congress in special legislation. The demonstration project at the Department of Agriculture was made permanent through legislation in October 1998.

As agreed with the requesters and in accordance with discussions with their offices, the objectives of this study were to identify major factors that hamper or delay the federal hiring process; provide examples of innovative practices or approaches used by selected agencies to improve their hiring processes that have the potential to be adapted by other agencies; and identify opportunities for OPM, agencies, and others to improve the federal hiring process. We reviewed the practices associated with how the government hires people from outside the government for competitive service positions, including entry-level and higher-graded General Schedule positions.
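As a counterpart to the rule-of-three sketch above, the two-group scheme the USDA demonstration project tested can be sketched as follows. This too is illustrative only; the numeric cutoffs used to form the quality and eligible groups are hypothetical, not ARS or FS practice.

    # Illustrative sketch of category rating ("quality grouping").
    # Cutoff values are hypothetical; the two-group structure mirrors
    # the USDA demonstration project described above.
    def categorize(applicants, quality_cutoff=85, minimum=70):
        quality, eligible = [], []
        for name, score, vet in applicants:
            if score < minimum:
                continue  # fails minimum qualification standards
            group = quality if score >= quality_cutoff else eligible
            group.append((name, vet))
        return quality, eligible

    quality, eligible = categorize([
        ("Applicant A", 92, False), ("Applicant B", 90, True),
        ("Applicant C", 82, False), ("Applicant D", 65, True),
    ])
    # Any candidate in the quality group may be selected, but a veteran
    # in that group must be selected unless an objection is sustained.
    # If the quality group is too small, the eligible group may also
    # be referred to the selecting manager.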
We focused our work on the competitive examination process used to fill those positions because it is typically the way most agencies bring people into their organizations. In addition, we obtained information on special hiring authorities that are frequently used to hire people for entry-level positions and that may supplement the competitive examination hiring process. We did not review in detail how the government fills positions through merit promotions with people who are already employed by the federal government.

To identify major factors that hamper or delay the competitive hiring process, we first reviewed our prior work and extant literature on federal hiring. We also interviewed experts and obtained their studies at the U.S. Merit Systems Protection Board (MSPB), a federal agency that hears and decides civil service cases, reviews OPM regulations, and conducts studies of the federal government's merit system; the National Academy of Public Administration, an independent, nonpartisan, nonprofit, congressionally chartered organization that assists federal, state, and local governments in improving their performance; the Partnership for Public Service, a nonpartisan organization dedicated to revitalizing the public service; and OPM, the federal government's human resources (HR) agency. We used experts' findings or observations to augment information we obtained from federal agencies and incorporated them into our report as appropriate. We then reviewed the pertinent laws, the Code of Federal Regulations, and OPM's Delegated Examining Operations Handbook, which govern the competitive examination hiring process, in order to describe how the hiring process works and, later, what agency human resource directors and studies identified as steps, processes, or regulatory requirements that hampered or delayed hiring. In addition, we reviewed data on hiring contained in OPM's Central Personnel Data File.

Next, we gathered information on our three objectives by conducting semistructured interviews with the HR directors of the 24 largest federal departments and agencies. The interviews were conducted from September through December 2001. Responses to the open-ended questions were categorized, coded, and entered into a database we created. Responses to closed-ended questions about the significance of time-to-hire problems were also entered into our database. At least two staff reviewers collectively coded the responses from each of the 24 interviews, and the coding was verified when entered into the database. In addition to these interviews with HR directors, we conducted brief surveys of these 24 directors in both the fall of 2001 and the fall of 2002. All 24 HR directors responded to both surveys. During the period between the 2001 and 2002 surveys, 16 of the 24 individuals left their positions. The results of each of these surveys are shown in table 3.

To provide examples of innovative practices or approaches used by selected agencies to improve their hiring processes that have the potential to be adapted by other agencies, we conducted a second phase of interviews at five selected agencies from February through November 2002: the Department of Agriculture's Agricultural Research Service (ARS) and Forest Service (FS), the U.S. Geological Survey (USGS), the U.S. Census Bureau (Census), and the Department of the Army (Army).
We selected those agencies based on interviews with HR directors across government and discussions with HR experts who noted that these agencies had taken actions to improve their hiring practices. We assessed the role that OPM has played in the hiring process through interviews with HR directors at the 24 largest departments or agencies and with experts at MSPB and OPM, and by reviewing expert studies and other information. We provided a draft of this report to OPM, DOD, Census, ARS, FS, and USGS for review and comment. Their responses and comments are discussed at the end of the report. We did our review in Washington, D.C., from June 2001 through January 2003 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the Office of Personnel Management's (OPM) letter dated May 6, 2003.

1. OPM questioned our methodology of meeting with agency human resources (HR) directors to assess how well OPM is assisting agencies in improving their hiring processes. OPM believes that chief operating officers would provide a better perspective of agency recruiting and retention issues. While we agree these officials could provide perspective about the results of the hiring process, agency HR directors better understand and are responsible for their agencies' hiring processes and interact most directly with OPM. Agency HR directors are therefore in an excellent position to speak to federal hiring issues and OPM's leadership.

2. OPM said it was unclear why we identified the five hiring problem areas and noted that the quality of hires was not identified as an issue. We identified these areas based on our discussions with human capital and other officials across government and on our review of studies by the Merit Systems Protection Board and the National Academy of Public Administration. Our assessment of these problems considered the impact on the quality of hires. For example, we note in our discussion of the federal job classification process that it not only delays the hiring process for those positions requiring the development of job descriptions but, more important, the resulting job classification and related pay might not match the actual duties of the job. This mismatch can hamper efforts to fill the position with the right employee. We also note that the automated process at the U.S. Geological Survey increased the number of applicants—which increases the likelihood of filling a position with the right person. Finally, in our discussion of the use of the Administrative Careers with America (ACWA) test, we note managers' concerns with the quality of candidates who were referred based on the test results. The recommendation to address this issue was primarily based on the fact that, according to managers, the test was not referring quality candidates.

3. OPM said that our conclusions about the classification process could be misleading. For example, it believes the section on the classification and position description process could be misleading because the majority of jobs are filled without this step. We agree, but note that the more important problem with the classification process is that an inaccurate position description and related pay determination resulting from the job classification could hamper efforts to fill the position with the right employee. OPM also said that although it agreed that the grade level definitions that underpin the entire classification system are decades old, it has taken steps to revise position classification standards.
We note in our report that OPM has revised and is continuing to revise position classification standards, but point out that the basic system needs revision. This position is not inconsistent with OPM's and others' views of classification. OPM's white paper on pay notes that a key problem with classification is that, under present rules, characteristics such as workload, quality of work, and results are not classification factors. OPM and others conclude that the classification system needs basic revision.

4. OPM points out in its comments that it has taken several steps to assist agencies in improving their vacancy announcements. We recognized many of these actions in our section on actions under way and have augmented that section to further outline OPM's positive steps.

5. OPM had some concerns about our comments on the ACWA test. We noted that managers were critical of the ACWA exam because they believed it was not merit based and measured life experiences rather than knowledge, skills, and abilities. OPM says the ACWA exam was specifically developed to measure competencies critical to the success of the relevant occupations. We should point out that the ACWA exam is used for more than 100 different occupations. Agency managers we met with and several studies have pointed out that the test does not refer quality candidates. Even though OPM in its comments defends the ACWA exam, it agreed that the test needs to be reevaluated. We recommend that OPM help agencies improve all applicant assessment tools.

6. OPM said that the report misses an opportunity to hold agencies more accountable for the cumbersome hiring process. Throughout the report, we point out that agencies are primarily responsible for improving their hiring processes and include several examples of how the agencies we studied in detail took steps to improve various aspects of their hiring processes. These steps could be taken by agencies without any action by OPM. Several of our recommendations to OPM call for actions to assist agencies in addressing their hiring problems.

The following are GAO's comments on the Department of Defense's (DOD) letter dated April 14, 2003.

1. We have clarified that our report only discusses new hires to the federal government, particularly focusing on the competitive service hiring process. We note that agencies can also fill positions through the internal merit selection process and other intergovernmental methods.

2. The statement that agencies have the primary responsibility for their hiring processes is a fact. Our report outlines several actions that OPM has taken to address many hiring problems. We agree that OPM could do more and have made several recommendations that address that conclusion.

3. DOD noted the lack of progress by OPM in addressing the job classification system and applicant assessment tools. We agree that OPM needs to do more and have included recommendations in that regard. It should be noted that agencies have the primary responsibility to address their hiring problems. Although some problems, such as the job classification system, are outside the control of agencies, others, such as the development of assessment tools, are within the responsibility and control of the agencies. The Merit Systems Protection Board (MSPB) has pointed out that while agencies have the responsibility to develop assessment tools, they often do not have the resources to do so. In addition, DOD said that implementing an automated hiring system like the one we describe at the U.S.
Geological Survey (USGS) would take up to a decade because DOD is so large and diverse. DOD explains that converting from knowledge, skills, and abilities to competencies takes a considerable amount of work. Although USGS officials, an independent study, and our own observations indicate that the specific USGS automated system has been successful, we are not endorsing a specific method of automation. Our larger point in this section is that automation can assist agencies with their hiring processes.

4. It is correct that we did not attempt to compare procedures and time lines for hiring before and after OPM delegated examining authority to agencies in 1996. Such a comparison probably would yield little value to today's discussion of hiring challenges.

5. DOD says the classification system has been studied from every angle without producing significant results and that more study is not needed. We believe that more analysis is needed to determine exactly how to either revise the classification system or develop an entirely new approach to job descriptions and pay determinations.

6. DOD asked that we explain why the number of new hires has increased since the mid-1990s. We have added text to the report that explains that hiring in the mid-1990s declined because many agencies were downsizing and did not need to fill positions. We also added that with the slowdown in downsizing and the increasing number of employees retiring, agencies are increasingly hiring new employees.

7. Our draft report had noted that DOD did not respond to our fall 2002 survey of human resources (HR) directors. DOD explained that it responded to our survey of HR directors in November 2002. However, we did not receive its response until April 2003. We have now included DOD's response in our analysis of the 2002 HR director responses.

8. DOD points out that OPM has not taken any significant action to address problems related to the Luevano Consent Decree. We agree that the problems with the Luevano Consent Decree need to be addressed and have made a recommendation to OPM to review the effectiveness of the Outstanding Scholar and Bilingual/Bicultural Luevano Consent Decree hiring authorities.

9. DOD notes that examining for ACWA positions was not delegated to agencies until October 2002 and that the authority cannot be redelegated to components. We have added this information to our report.

10. DOD noted that we did not analyze the planned actions in OPM's strategic plan. In several areas, we have outlined actions that OPM is currently taking to address some of the hiring challenges, including some areas specific to actions indicated in OPM's strategic plan.

11. DOD notes that our report credits OPM with developing new guidance in several human capital areas with no indication of the involvement of agencies. OPM has explained that one of the vehicles it has used to involve agencies is the Human Resources Management Council, an interagency organization of federal HR directors. It should be noted that the recently enacted Homeland Security Act of 2002 establishes a Chief Human Capital Officers Council, which could replace the Human Resources Management Council.

In addition to the persons named above, John Ripper, Tom Beall, Ridge Bowman, Christopher Booms, Karin Fangman, Fig Gungor, Donna Miller, Greg Wilmoth, and Kimberly Young made key contributions to this report.
Improving the federal hiring process is critical, as the number of new hires is expected to increase substantially. Federal agencies are responsible for their hiring processes but must generally comply with applicable Office of Personnel Management (OPM) rules and regulations. Congressional requesters asked GAO to identify federal hiring obstacles, provide examples of innovative hiring practices, and identify opportunities for improvement. To address these issues, GAO interviewed the human resources directors at the 24 largest departments and agencies, analyzed the hiring practices of five federal executive branch agencies, and reviewed OPM's role in the hiring process.

There is widespread recognition that the current federal hiring process all too often does not meet the needs of agencies in achieving their missions, of managers in filling positions with the right talent, or of applicants seeking a timely, efficient, transparent, and merit-based process. Numerous studies over the past decade have noted problems with the federal hiring process. Nearly all of the federal human resources directors from the 24 largest federal agencies told us that it takes too long to hire quality employees. According to data compiled by OPM, the estimated time to fill a competitive service position was typically more than 3 months, with some human resources directors citing examples of hiring delays exceeding 6 months. The competitive hiring process is hampered by inefficient or ineffective practices, including definitions of vacant jobs and pay that are bound by narrow federal classification standards, unclear job announcements, shortcomings in certain applicant assessment tools, time-consuming panels to evaluate applicants, and the "rule of three" that limits selecting managers' choice of candidates. Equally important, agencies need to develop their hiring systems using a strategic and results-oriented approach. GAO studied five agencies that human capital experts identified as having taken steps to improve parts of the hiring process: the U.S. Geological Survey, the Department of the Army, the U.S. Census Bureau, and the Department of Agriculture's Agricultural Research Service and Forest Service. Some of these practices might help agencies across government improve their hiring processes. OPM recognizes that the federal hiring process needs reform and has a major initiative to study the federal hiring process. OPM's efforts will be most effective to the extent that they help transform agency hiring practices from process-focused to mission-focused tools that are more closely integrated into agencies' strategic plans.
The Antideficiency Act (ADA) is one of the major laws in the statutory scheme by which the Congress exercises its constitutional control of the public purse. Despite the name, it is not a single act but rather a series of related provisions that evolved over time in response to various abuses. As late as the post-Civil War period, it was not uncommon for agencies to incur obligations in excess, or in advance, of appropriations. Perhaps most egregious of all, some agencies would spend their entire appropriations during the first few months of the fiscal year, continue to incur obligations, and then return to the Congress for appropriations to fund these "coercive deficiencies." These were obligations to others who had fulfilled their part of the bargain with the United States and who now had at least a moral—and in some cases also a legal—right to be paid. The Congress felt it had no choice but to fulfill these commitments, but the frequency of deficiency appropriations played havoc with the United States' budget. The Congress expanded the ADA several times throughout the 20th century to require and enforce apportionments and agency subdivisions of apportionments to achieve more effective control and conservation of funds. The ADA contains both affirmative requirements and specific prohibitions, as highlighted below. The ADA:

Prohibits the incurring of obligations or the making of expenditures in advance or in excess of an appropriation. For example, an agency officer may not award a contract that obligates the agency to pay for goods and services before the Congress makes an appropriation for the cost of such a contract or that exceeds the appropriations available.

Requires the apportionment of appropriated funds and other budgetary resources for all executive branch agencies. An apportionment may divide amounts available for obligation by specific time periods (usually quarters), activities, projects, objects, or a combination thereof. The Office of Management and Budget (OMB), on delegation from the President, apportions funds for executive agencies.

Requires a system of administrative controls within each agency, established by regulation, that is designed to (1) prevent obligations and expenditures in excess of apportionments or reapportionments, (2) fix responsibility for any such obligations or expenditures, and (3) establish the levels at which the agency may administratively subdivide apportionments, if it chooses to do so.

Prohibits the incurring of obligations or the making of expenditures in excess of amounts apportioned by OMB or amounts of an agency's subdivision of apportionments.

Prohibits the acceptance of voluntary services, except where authorized by law.

Specifies potential penalties for violations of its prohibitions, such as suspension from duty without pay or removal from office. In addition, an officer or employee convicted of willfully and knowingly violating the prohibitions may be fined not more than $5,000, imprisoned for not more than 2 years, or both.

Requires that, for violations of the act's prohibitions, the relevant agency report immediately to the President and to the Congress all relevant facts and a statement of actions taken, with a copy to the Comptroller General of the United States.

The requirements of the ADA and the enforcement of its proscriptions are reinforced by, among other laws, the Recording Statute, 31 U.S.C.
§ 1501(a), which requires agencies to record obligations in their accounting systems, and the law commonly known as the Federal Managers' Financial Integrity Act of 1982, 31 U.S.C. § 3512(c), (d), which requires executive agencies to implement and maintain effective internal controls. Federal agencies use "obligational accounting" to ensure compliance with the ADA and other fiscal laws. Obligational accounting involves the accounting systems, processes, and people involved in collecting the financial information necessary to control, monitor, and report on all funds made available to federal agencies by legislation—including both permanent, indefinite appropriations and appropriations enacted in annual and supplemental appropriations laws that may be available for 1 or multiple fiscal years. Executive branch agencies use obligational accounting, sometimes referred to as budgetary accounting, to report on the execution of the budget.

The DOD Financial Management Regulation (FMR), Volume 14, Administrative Control of Funds and Antideficiency Act Violations, establishes procedures for DOD components in identifying, investigating, and reporting potential ADA violations. The ADA itself does not prescribe the process for conducting an ADA investigation. Upon learning of or identifying a possible violation of the ADA, an individual should report the potential violation to his/her immediate supervisor within 10 working days. Next, the DOD component appoints an investigating officer to perform a preliminary review of the applicable business transactions and accounting records. The purpose of a preliminary review is to gather basic facts and determine whether a violation may have occurred. The DOD FMR states that the preliminary review should be completed in a timely manner, usually within 90 days. If the investigating officer determines, based upon the results of the preliminary review, that there is a potential ADA violation, the DOD component is required to initiate a formal investigation within 15 days. The purpose of a formal investigation is to determine the relevant facts and circumstances concerning the potential violation; to discern whether a violation actually occurred; and, if so, to determine the cause, the appropriate corrective actions, and any lessons learned, and to ascertain who was responsible for the violation. According to the DOD FMR, the DOD component should complete the formal investigation of an ADA violation, including submission of final summary reports to the Office of the Under Secretary of Defense (Comptroller), within 9 months of initiating the formal investigation. The Office of the Secretary of Defense (OSD), including the DOD Comptroller's office, then has 3 months to review the DOD component's final summary report and prepare and submit transmittal letters to the President and the leaders of both Houses of the Congress, with a copy to the Comptroller General. The DOD FMR thus establishes that a preliminary review, formal investigation, and OSD review should be completed within approximately 15 months and 25 days—or about 475 days.

DOD's complex and inefficient payment processes, lack of integrated business systems, and weak internal control environment hinder its ability to control and properly record transactions and to ensure the prompt and proper matching of disbursements with obligations, which is essential for properly recording transactions.
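At bottom, the administrative control of funds that the ADA requires is a balance test applied at the moment an obligation is recorded. The toy obligational-accounting sketch below (a minimal illustration in Python; the account and dollar amounts are invented, not any DOD system's logic) makes the control explicit:

    # Toy funds control check: block any obligation that would exceed
    # the remaining apportioned balance. Amounts are hypothetical.
    class ApportionedAccount:
        def __init__(self, apportionment):
            self.apportionment = apportionment
            self.obligated = 0

        def record_obligation(self, amount):
            if self.obligated + amount > self.apportionment:
                # Booking this obligation would create a potential ADA
                # violation; it must be blocked and reported, not recorded.
                raise ValueError("obligation exceeds apportioned funds")
            self.obligated += amount
            return self.apportionment - self.obligated  # remaining balance

    first_quarter = ApportionedAccount(apportionment=1_000_000)
    first_quarter.record_obligation(600_000)    # allowed; 400,000 remains
    # first_quarter.record_obligation(500_000)  # would raise: over-obligation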
DOD Comptroller and military service financial management and comptroller officials responsible for the department's ADA programs have stated that, because of weaknesses in the department's business processes and systems, knowledgeable personnel are critical to improving the department's funds control processes. DOD components responsible for executing the department's budget, such as the military services, are responsible for ensuring that key personnel within their funds control processes are properly trained to fulfill their responsibilities. The military services have efforts under way to provide classroom or Web-based training to key funds control personnel. However, neither the Navy nor the Air Force has identified the specific key funds control personnel who should be trained. Moreover, the Navy and the Air Force financial management and comptroller officials responsible for their military services' ADA programs could not provide documentation of the processes and procedures each military service has used or will use to ensure that key funds control personnel are trained. Army Financial Management and Comptroller Office officials responsible for the Army's ADA program stated that the Army has begun to identify key personnel within its funds control processes and determine whether they have received training. However, the Army officials acknowledged that many of the Army's key funds control personnel have not received training. As discussed later in this report, we also noted that 20 (or over 37 percent) of the 54 ADA case files reviewed indicated that improved training of key funds control personnel was needed.

Without adequate processes, procedures, and controls to (1) identify individuals who are performing key funds control roles, such as funds certifying officials, resource managers, fund holders, certifying officers, contracting officers, program managers, and others, and (2) ensure that they have received the training necessary to fulfill their responsibilities in compliance with the FMR and the ADA, DOD and the military services lack reasonable assurance that these key personnel can reliably prevent, identify, and report ADA violations.

Given the numerous documented risks to funds control, DOD does not have reasonable assurance that it has prevented, identified, investigated, and reported all potential ADA violations. These weaknesses have adversely affected the ability of DOD to ensure basic accountability, maintain funds control, and prevent fraud. For example, we reported in 2005 that, after decades of continuing financial management and accounting weaknesses, information related to long-standing unreconciled disbursement and collection activity was so inadequate that DOD was unable to determine the true value of certain disbursement and collection suspense differences that it removed from its records by writing them off. As a result, DOD could not determine whether any of the write-off amounts, had they been charged to the proper appropriation, would have caused an ADA violation. Pervasive business system, process, and control weaknesses acknowledged by the department have hindered DOD's ability to prevent, identify, investigate, and report ADA violations. Recent reports by the DOD Inspector General indicate that the weak control environment continues to exist today.
For instance:

In November 2006 and 2007, the DOD Inspector General reported, and the department acknowledged, that DOD continues to have significant internal control deficiencies that impede its ability to produce accurate and reliable information on the results of its operations. These deficiencies adversely affect the ability of the department's financial management systems to reliably and accurately record accounting entries and report financial information, such as its fund balance with Treasury, accounts payable, and accounts receivable. For example, DOD made over $22 billion in unsupported adjustments for fiscal year 2007 to force its cost accounts to match obligation information. These weaknesses affect the safeguarding of assets and proper use of funds and impair the prevention and identification of fraud, waste, and abuse.

The DOD Inspector General reported in March 2008 that the Air Force and the Defense Finance and Accounting Service did not establish and maintain adequate and effective internal control over Air Force vendor disbursements. The DOD Inspector General noted numerous internal control weaknesses in contract formation and funding, funds control, vendor payment, and financial accounting. According to the report, these weaknesses represent a high risk that violations of laws and regulations not only occurred but will likely continue to occur if corrective action is not taken.

As part of its long-term initiative to address such weaknesses, the department has embarked upon a massive effort to transform its business operations, including financial management. Over the next several years, the department will be spending billions of dollars to implement these systems. However, it will be a number of years before the department's business system modernization efforts are complete, and as we have previously reported, DOD has encountered challenges in developing systems that meet time frame, cost, and functionality goals. Until DOD can successfully transform its business operations, including implementation of effective business processes, controls, and systems, the department's ability to ensure proper funds control and compliance with the ADA will continue to be impaired. Until then, mitigating controls, including knowledgeable personnel, will be key to effective funds control. Over the past several years, GAO has made numerous recommendations aimed at improving the department's business transformation efforts. Generally, the department has agreed with our recommendations and has identified or is planning specific actions to implement them.

As DOD auditors have previously reported, improper disbursements or payments have occurred in part because personnel failed to comply with DOD policy and to provide accurate and timely information to support payment and proper recording of the transactions. For example, the DOD Inspector General reported in April 2008 that the Mid-Atlantic, Southeast, and South Central Regional Maintenance Centers inappropriately obligated funds on ship maintenance and repair contracts because of ineffective internal controls. As a result, at least $103 million of U.S. Fleet Forces Command Operations and Maintenance appropriations were not available for other ship maintenance and repair needs.
The knowledge and understanding of DOD regulations and applicable federal laws, such as the ADA, by DOD personnel involved in the obligation, payment authorization, and recording processes are critical to the prevention and detection of ADA violations within the department. DOD and military service officials responsible for ADA programs have stated the importance of trained and knowledgeable personnel in establishing and maintaining effective funds control. Additionally, the DOD FMR requires DOD components to ensure that appropriate training programs are in place to provide personnel with the knowledge, skills, and abilities needed to perform their funds control duties. However, there is no DOD-wide requirement for DOD components to establish and document that key employees within their funds control processes, such as funds certifying officials, certifying officers, and departmental accountable officials, are identified and have received the appropriate training. Efforts are under way to provide key funds control personnel classroom and Web-based training. Additionally, the Army began an effort in 2006 to identify its funds certifying officials and track their training. While the training of these individuals is critical to improving the Army's overall funds control process, this effort does not include other key individuals, such as certifying officers and departmental accountable officials. Also, neither the Navy nor the Air Force could provide documentation of the processes, procedures, and controls they have used or will use to ensure that key funds control personnel are trained. Navy and Air Force financial management and comptroller officials acknowledged that their military services currently do not identify and track the training of individual key funds control personnel to ensure that they have received the training needed to fulfill their responsibilities in preventing, identifying, and reporting potential ADA violations. Moreover, follow-up with two Navy and two Air Force major commands with reported ADA violations in fiscal years 2006 and 2007 revealed that they could not identify specific key funds control personnel or provide information on the training, if any, those personnel had received.

To its credit, in 2006, the Army began an effort to identify key personnel within its funds control process and to begin tracking the training provided. According to a June 2, 2006, memorandum signed by the Assistant Secretary of the Army, Financial Management and Comptroller, "the Army's ADA portfolio has reached an unacceptable level … reflects negatively on the Army's financial stewardship." The Assistant Secretary directed all Army major commands to identify their funds certifying officials and report back as to the number of these personnel who had received training. Based upon the training information obtained as of June 5, 2008, Army officials acknowledged that this effort had revealed that many of the Army's key funds control personnel had not been trained. According to Army officials, the Army is now working to correct this situation.

DOD and the military services have not established processes and procedures to oversee and monitor compliance with DOD FMR provisions requiring the assignment of qualified, trained, and independent ADA investigating officers and the completion of investigations within the prescribed time frames.
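Those prescribed time frames trace back to the FMR stage limits described earlier, which sum to the overall deadline of approximately 15 months and 25 days, or about 475 days. A quick reconstruction of the arithmetic (a sketch that treats a month as 30 days and counts the initial 10 working days as 10 days):

    # Reconstruction of the DOD FMR's ~475-day ADA timeline from its
    # stage limits (30-day months assumed for the arithmetic).
    stages = [
        ("Report potential violation to supervisor", 10),   # 10 working days
        ("Preliminary review", 90),                         # usually 90 days
        ("Initiate formal investigation", 15),              # within 15 days
        ("Formal investigation and final report", 9 * 30),  # 9 months
        ("OSD review and transmittal letters", 3 * 30),     # 3 months
    ]
    total_days = sum(days for _, days in stages)
    print(total_days)  # 475, i.e., approximately 15 months and 25 days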
The DOD FMR establishes procedures for DOD components to assign investigating officers. Specifically, the DOD FMR states that the investigating officer should be chosen from a roster of qualified personnel to ensure that he/she meets all of the following qualifications: is adequately trained to conduct an investigation of this type, has adequate experience in the functional area involved in the apparent violation, is knowledgeable of financial management policies and procedures, and is skilled in investigating potential ADA violations. Each DOD component is responsible for ensuring that its ADA investigating officers are qualified to conduct investigations and have received the required training, as prescribed by the DOD FMR. Training requirements include completion of a fiscal law, or equivalent, course and any additional training, on an as-needed basis, to ensure that the investigating officer is qualified. To remain qualified to conduct ADA investigations, the DOD FMR also requires that investigating officers receive refresher training every 5 years. Once an individual completes the appropriate training and meets the above-mentioned qualifications, the DOD FMR requires that his/her name be included on a roster of available ADA investigating officers maintained by each DOD component. Further, the DOD FMR states that investigating officers must be independent and capable of conducting a complete, impartial, and unbiased investigation. Finally, the DOD FMR establishes a time frame of approximately 15 months and 25 days for completing an ADA investigation. The following sections highlight specific examples in which the military services did not comply with DOD FMR criteria.

Army and Air Force officials informed us that selection of an investigating officer was left to the major command where the violation had occurred and that each major command is responsible for maintaining its own roster. The data required by the DOD FMR to be maintained on the roster of available investigating officers include name, rank/grade, date initial training was received, organization to which assigned, functional specialties, and the number of investigations previously conducted. Collectively, these attributes help substantiate the qualifications, including the training and organizational independence, of the person selected to be an ADA investigating officer. For the Army, we requested rosters from the 10 commands that were responsible for selecting the 39 investigating officers who reviewed the 31 Army ADA cases. The Army commands could not provide the rosters, nor could the Army provide documentation of how it ensures that its investigating officers are qualified. Interviews with 3 Army investigating officers and Army financial management and comptroller officials, together with our analysis of the Army cases, disclosed that the Army appoints investigating officers based on functional specialty or work experience. The Air Force used a roster in selecting 5 of the 12 investigating officers who reviewed the 10 Air Force ADA cases, but could not provide rosters or other documentation regarding how the remaining 7 investigating officers were selected and how their qualifications were determined. The Navy uses a centralized roster to select its investigating officers; however, only 7 of the 15 investigating officers assigned to the 13 closed Navy ADA cases reviewed were selected from the roster.
The Navy financial management and comptroller official responsible for the Navy's ADA program could not provide documentation regarding how the other 8 investigating officers were selected or why they were not selected from the centralized roster. The requirement to use rosters in selecting investigating officers is intended to ensure that investigating officers are selected from a population of predetermined, qualified individuals. During our review, Army and Navy financial management and comptroller officials responsible for their military services' ADA programs stated that the DOD FMR provision requiring the maintenance and use of a roster of qualified individuals in selecting investigating officers was no longer in effect. However, follow-up with DOD Comptroller officials responsible for the department's ADA program refuted the military services' assertion. The DOD Comptroller officials stated on several occasions that the requirement to use a roster was still valid and not under consideration for revision. The inconsistent manner in which the military services have complied with this requirement raises concerns as to whether DOD or the military services have reasonable assurance that individuals assigned to conduct ADA reviews and investigations are qualified.

The military services do not maintain adequate documentation that investigating officers selected to perform ADA preliminary reviews and formal investigations have been properly trained. Based on our review of the closed ADA case files, the rosters, and other documentation provided by the military services, we were able to determine that only 6 of the 66 investigating officers assigned to the 54 ADA cases reviewed had received the required training. Our analysis of available documentation disclosed that only 13 of the 66 investigating officers had received initial training in fiscal law and that 10 of the 13 had received the required refresher fiscal law training within 5 years of the initial training. Further, our analysis disclosed that only 6 of the 13 investigating officers had received all of the required training, including training on how to conduct an investigation. The training requirements for investigating officers outlined in the DOD FMR specify a fiscal law, or equivalent, course; a refresher fiscal law, or equivalent, course within 5 years of initial training; and training in interviewing, gathering evidence, developing facts, documenting findings and recommendations, preparing reports of violation, recommending appropriate disciplinary action, meeting time frames established for the completion of an investigation, and recommending corrective actions. Once an individual completes the required training, the component is to issue a certificate indicating that all of the required courses have been taken, and the individual's name is added to the roster of individuals deemed qualified to be ADA investigating officers. To remain eligible to conduct investigations, an individual is required to renew his/her certificate every 5 years by attending a refresher training course. While the training requirements for investigating officers are explicit, the military services were not able to provide documentation clearly indicating that all investigating officers selected to perform the investigations in the 54 ADA cases closed in fiscal years 2006 and 2007 had received the required training. Specifically, the Army does not currently track when an investigating officer received initial or subsequent training.
As a result, we could not determine whether any of the 39 investigating officers used by the Army to complete the 31 ADA cases were properly trained. An Army Financial Management and Comptroller Office official responsible for the Army's ADA program stated that Army investigating officers are required to read the Army Investigating Officer Handbook. However, the Army official acknowledged that the Army does not have a process or procedure for ensuring that each of its investigating officers actually reads or receives a copy of the handbook, which we confirmed through interviews with Army investigating officers. The Army official further stated that the Army is developing an online investigating officer training course that will require participants to pass a test before they can receive course credit. The Army plans to have the course online later this calendar year.

Regarding the 12 investigating officers assigned to the 10 closed Air Force ADA cases we reviewed, we could not determine from the rosters or ADA case files whether the investigating officers had taken a fiscal law course within the past 5 years. An official within the Air Force's Financial Management and Comptroller Office responsible for the Air Force's ADA program stated that fiscal law is incorporated into the Air Force's Web-based investigating officer training. As a result, the Air Force does not require investigating officers to take a separate fiscal law course. Air Force investigating officers are required to include a copy of the verification of class completion as an attachment to their draft ADA preliminary review or investigation reports. Our analysis of the 10 closed Air Force ADA case files found that only 6 of the 12 investigating officers provided verification that the required training had been completed. The Air Force could not provide documentation that the remaining 6 investigating officers were trained.

With respect to the Navy, 15 investigating officers were used in the 13 closed Navy ADA cases we reviewed. Of the 15 investigating officers, the supporting documentation provided by the Navy indicated that only 7 had received initial training in fiscal law and that 4 of the 7 had received the required refresher fiscal law training within 5 years of the initial training. In June 2008, an official within the Navy's Financial Management and Comptroller Office responsible for the Navy's ADA program stated that the Navy was in the process of developing and testing an investigating officer training curriculum of online courses designed to provide investigating officers with the required skills and techniques, such as interviewing witnesses and developing facts and conclusions, as prescribed by the DOD FMR. The Navy plans to have these courses online by the end of this calendar year.

Moreover, based on the documentation provided, only 10 of the 66 investigating officers assigned to the 54 ADA cases received the required refresher fiscal law training. The FMR does not include a requirement for the roster to document when refresher training was needed or received. Without up-to-date rosters or some comparable method of tracking the status of training received by potential investigating officers, the military services do not have a process to provide reasonable assurance that the investigating officers appointed to conduct preliminary reviews and investigations of potential ADA violations are properly trained.
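The roster the FMR calls for amounts to a small training-currency record. The sketch below is one hypothetical way such tracking could work: the field names follow the FMR roster attributes listed above, while the officer, dates, and organization are invented for illustration and do not reflect any service's actual system.

    from datetime import date

    # Minimal investigating-officer roster entry with a currency check
    # driven by the FMR's 5-year refresher requirement. All entries
    # below are hypothetical.
    REFRESHER_YEARS = 5

    def training_is_current(last_fiscal_law_training, as_of):
        due = last_fiscal_law_training.replace(
            year=last_fiscal_law_training.year + REFRESHER_YEARS)
        return as_of < due

    roster = [
        {"name": "Officer X", "rank_grade": "O-4",
         "organization": "Example Command",
         "specialties": ["budget execution"],
         "investigations_conducted": 2,
         "initial_training": date(2001, 3, 1),
         "last_fiscal_law_training": date(2006, 3, 1)},
    ]
    eligible = [entry for entry in roster
                if training_is_current(entry["last_fiscal_law_training"],
                                       as_of=date(2008, 6, 1))]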
The military services do not have an established process or procedure for ensuring and documenting that investigating officers are free of any personal or external impairment that would affect their independence and objectivity in conducting ADA reviews and investigations. The DOD FMR states that individuals with no vested interest in the outcome, and who are capable of conducting a complete, impartial, unbiased investigation, shall conduct investigations of potential violations. Additionally, the regulation states that an investigating officer must be chosen from an organization external to the organization being investigated, which the use of a roster, as discussed earlier, is intended to facilitate. The President's Council on Integrity and Efficiency/Executive Council on Integrity and Efficiency (PCIE/ECIE) publication Quality Standards for Investigations states that "in all matters relative to investigating work, the investigating organization must be free, both in fact and appearance, from impairments to independence; must be organizationally independent; and must maintain an independent attitude." Further, the standard for independence places the responsibility for maintaining independence upon agencies, investigative organizations, and investigating officers themselves, so that judgments used in obtaining evidence, conducting interviews, and making recommendations will be impartial and will be viewed as impartial by knowledgeable third parties. To maintain a high degree of integrity, objectivity, and independence, organizations should take into account the three general classes of independence: personal, external, and organizational. Table 2 illustrates the types of independence impairments. If one or more of these impairments affects or can be perceived to affect independence, the individual selected to perform the investigation should not be assigned to perform the work. In addition, organizations should maintain documentation of the steps taken to identify potential personal independence impairments.

Our analysis of the 54 military service ADA cases closed in fiscal years 2006 and 2007 disclosed that the military services focused on organizational independence as the criterion for ensuring investigating officer independence. In 35 of the 54 ADA cases reviewed, the investigating officers were chosen from an organization external to the one under investigation and therefore were determined by DOD to be organizationally independent. The remaining 19 case files lacked documentation as to whether the investigating officers assigned to the case were organizationally independent. Additionally, the military services did not maintain documentation, in the case files or through other means, to support that they took steps to ensure that the investigating officers assigned to each of the 54 cases were free from personal or external impairments that may have adversely affected their ability to conduct independent and objective investigations. Currently, the DOD FMR does not require documentation of an investigating officer's independence. Further, the military services do not require documentation of the steps taken to ensure investigating officers' independence or obtain written assertions regarding independence from the investigating officers.
Instead, based on our analysis of the ADA case files and statements made by officials responsible for the military services' ADA programs, it appeared that the military services assume that an investigating officer is independent if he/she was selected from an organization external to the one that incurred the potential violation. Given the rotational nature of military assignments, the act of selecting an "external" individual does not by itself provide assurance that the independence standards will be met. As a result, until the military services document the steps taken to confirm independence, they cannot be assured that the investigating officers who conduct ADA investigations are independent.

Our analysis of the 54 military service ADA cases closed in fiscal years 2006 and 2007 disclosed that the time taken to complete both the preliminary review and the formal investigation of a potential ADA violation was generally longer than the time frame specified in the DOD FMR. While the DOD Comptroller tracks the number and identity of overdue formal ADA investigations and issues memorandums to DOD components to follow up on overdue ADA reports, we found that 22 (or about 41 percent) of the 54 closed ADA cases reviewed took longer than 30 months to complete and that only 16 of the 54 cases (or about 30 percent) were completed within the 475 days generally required by the DOD FMR. Including the 3 months allotted to OSD to review the final report, the Army took an average of 33 months and the Air Force took an average of 31 months to complete preliminary reviews and formal investigations. We were unable to ascertain the length of time the Navy took to complete its preliminary reviews because the Navy ADA case files did not contain complete information on the preliminary review phase of the investigation. However, we were able to determine that the Navy took on average 17 months to complete the formal ADA investigation phase, including OSD's review period, for the 13 Navy ADA cases closed in fiscal years 2006 and 2007. The DOD FMR generally requires that this stage of the investigation be completed within 12 months. Military service officials were unable to provide specific reasons why the established time frames were sometimes not met, other than to indicate that each case has its own set of circumstances and complexities. To identify more specific reasons why the time frames were not met, we contacted several investigating officers, who indicated that inexperience in performing investigations and other job demands had adversely affected their ability to meet the prescribed time frames. They acknowledged, and DOD Comptroller and military service financial management and comptroller officials concurred, that they were not dedicated to the ADA investigation full time but were often required to complete the investigation in addition to their regularly assigned duties. Without effective oversight and monitoring, neither DOD nor the military services can be certain why a preliminary review or formal investigation is not completed within the allotted time frame and what actions, if any, need to be taken to ensure timely completion.

The military services provided OSD the required monthly investigation summary information for the 54 ADA cases reviewed.
Additionally, for the 34 cases in which DOD concluded that ADA violations had occurred, the violations were reported to the President and the Congress, and copies of the reports were provided to the Comptroller General, as required by the ADA. The remaining 20 closed ADA cases were deemed not to involve ADA violations and therefore did not require reporting outside DOD. Additionally, the DOD Comptroller has taken steps in recent years to improve visibility within the department over the ADA investigation process, including preliminary reviews and formal investigations. Further, our analysis of the 34 ADA cases with confirmed ADA violations found that the disciplinary actions taken by the military services were in accordance with the criteria set forth in the DOD FMR. The ADA requires that employees who are responsible for an ADA violation be subject to appropriate administrative action. Within DOD, the FMR specifies that such administrative discipline can range in severity from no action to the termination of the individual's federal employment. Responsibility for determining what disciplinary action is warranted, once it has been determined that an ADA violation has occurred, resides with the military service, within established procedures for disciplining civilian and military personnel.

DOD reported the results of the 34 ADA cases that it concluded involved an ADA violation to the President and the Congress, with a copy of each report to the Comptroller General, as required by the ADA and OMB guidance. The act states that reports of violations should include all relevant facts and a statement of actions taken. OMB guidance notes that the report should include:

the title and Department of the Treasury (Treasury) symbol (including the fiscal year) of the appropriation or fund account, the amount involved, and the date on which the violation occurred;

the name(s) and position(s) of the individual(s) responsible for the violation;

all facts pertaining to the violation, including the type of violation, the primary reason or cause, and any statement from the responsible individual;

the disciplinary action taken;

a statement confirming that all information has been submitted to the Department of Justice if it is deemed that the violation was knowing and willful;

a statement regarding the adequacy of the system of administrative control prescribed by the head of the agency and approved by OMB, and a proposal for a regulation change if the head of the agency determines a change is needed;

a statement of any additional action taken by, or at the discretion of, the head of the agency; and

a statement concerning the steps taken to coordinate the report with the other agency, if another agency is involved.

As noted above, each ADA violation case file should contain a statement regarding additional action(s) taken as a result of the violation. Of the 34 ADA violations, DOD reported taking corrective action in 33 cases. In 11 of the 33 cases, DOD indicated that improved training of key funds control personnel was needed. In the one case in which no corrective action was identified, the responsible individual was relieved of command. For the 20 ADA cases in which DOD concluded that no ADA violation occurred, we found that 13 of the cases identified corrective actions to be taken by the DOD component. Nine of the 13 cases recommended training of key funds control personnel as a corrective action.
Although the ADA reviews and investigations of potential ADA violations recognized the need for the military services to provide key funds control personnel with proper training, as we stated earlier, other than actions initiated by the Army, the military services have not established processes and procedures for ensuring that these important personnel are identified and properly trained.

Required internal reporting for formal investigations is more detailed than the reporting requirements outlined by OMB guidance for reporting ADA violations. The DOD FMR requires DOD components to provide, on a monthly basis, specific status information to the DOD Comptroller regarding their ongoing formal ADA investigations. This status information includes (1) case number, (2) status, (3) amount, (4) appropriation and Treasury account symbol, (5) U.S. Code reference, (6) organization where the potential violation occurred, (7) location where the potential violation occurred, (8) nature of the potential violation, (9) date the potential violation occurred, (10) date the potential violation was discovered, (11) date the investigation began, (12) source of the potential violation, (13) brief description of the potential violation(s), and (14) progress of the investigation and other comments. (The sketch following this passage illustrates one such status record.) Our review of the 54 military service ADA cases closed by DOD in fiscal years 2006 and 2007 found that the military services had complied with DOD's internal reporting requirements.

To enhance DOD's ability to oversee the investigation process, the DOD Comptroller implemented an electronic "dashboard" in 2006 that contains key metrics derived from the monthly status information reported by the DOD components for use in monitoring the status of ongoing formal investigations within the department. According to DOD Comptroller and military service personnel, memorandums are issued to DOD components to follow up on overdue investigations identified through the "dashboard" metrics. The February 2008 update to the FMR also calls for status tracking of preliminary reviews. DOD components are now required to report information regarding the status of preliminary reviews of potential ADA violations to the DOD Comptroller on the fifth day of each month. While the military services have begun reporting information regarding their preliminary reviews of potential ADA violations to the DOD Comptroller in response to the updated FMR, as of June 17, 2008, none of the military services had reported the full scope of information required by the FMR. Examples of information missing from the military services' reports regarding their preliminary reviews include (1) the means by which the violation was discovered, (2) anticipated dates of completion, and (3) the names and contact information for members of the preliminary review team. The lack of complete reporting information on preliminary reviews hinders DOD's ability to monitor and oversee the progress of a specific review.

Our analysis of the 34 ADA violations reported by DOD as being closed in fiscal years 2006 and 2007 found that disciplinary actions taken were in accordance with the criteria set forth in the DOD FMR and were reported to the President and the Congress, with a copy to the Comptroller General, as required by the ADA. The ADA requires that employees who are responsible for an ADA violation be subject to appropriate administrative discipline. Within DOD, the FMR specifies that such administrative discipline can range in severity from no action to the termination of the individual's federal employment.
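As a rough illustration of the monthly status information listed above, the following minimal sketch models a single status record as a fixed data structure; the field names paraphrase the 14 FMR items, and every value shown is hypothetical rather than drawn from an actual case file:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AdaInvestigationStatus:
    """One monthly status record for an ongoing formal ADA investigation,
    mirroring the 14 items the DOD FMR requires components to report."""
    case_number: str
    status: str
    amount: float
    appropriation_symbol: str       # appropriation and Treasury account symbol
    us_code_reference: str
    organization: str               # organization where the potential violation occurred
    location: str
    nature: str
    date_occurred: date
    date_discovered: date
    date_investigation_began: date
    source: str
    description: str
    progress_comments: str

# Hypothetical example record -- all values are illustrative.
example = AdaInvestigationStatus(
    case_number="07-01",
    status="Formal investigation ongoing",
    amount=1_250_000.00,
    appropriation_symbol="97X0100 (example)",
    us_code_reference="31 U.S.C. 1341(a)",
    organization="Example Command",
    location="Example Base",
    nature="Obligation in excess of available appropriation",
    date_occurred=date(2006, 9, 15),
    date_discovered=date(2006, 11, 2),
    date_investigation_began=date(2007, 1, 10),
    source="Internal review",
    description="Potential overobligation identified during year-end review.",
    progress_comments="Investigating officer appointed; interviews under way.",
)
```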
Additionally, as established by laws and regulations addressing employee discipline within DOD generally, the specific action that is taken in each case is determined by the employee's commander or supervisor with the assistance of legal counsel. Table 3 illustrates the disciplinary actions taken by the military services. An explanation of each type of discipline is provided below.

No discipline: The individual named responsible received no discipline in any form. Based on our case file analysis, an individual who was identified as responsible for the ADA violation typically did not receive any discipline if (1) the individual had retired from federal service or (2) the investigation concluded that, while the individual was responsible for the violation, he/she had acted in good faith and followed what he/she believed to have been the correct policies and procedures when the violation occurred.

Verbal discipline: The individual named responsible received verbal discipline from a supervisor. An example of this form of discipline could include a one-on-one conversation between the individual named as responsible and his/her immediate supervisor, including a discussion of how to prevent future occurrences.

Nonpunitive discipline: The individual named responsible received either a memorandum of concern or a letter of counseling. An example of this form of discipline could include a written letter of counseling by the named individual's immediate supervisor in conjunction with completion of required additional training.

Formal discipline: The individual named responsible received a written reprimand, reassignment or removal, suspension, or an unfavorable evaluation. Examples of formal discipline, based on our review of case files, include dismissal from federal service and removal from a current position.

We did not assess the appropriateness of disciplinary actions imposed in any of the cases reviewed.

Given the numerous documented weaknesses in funds control, DOD does not have reasonable assurance that it has prevented, identified, and investigated all potential ADA violations. DOD's successful completion of the modernization of its business operations, including systems, processes, policies, and controls, is critical to reducing the department's risk of ADA violations. DOD's and the military services' stated intention to rely on better training of key funds control personnel as an interim measure for preventing and detecting ADA violations was not supported by actions taken at either the department or military service level. Specifically, other than an effort by the Army to identify funds certifying officials, the military services had not identified key individuals within their funds control processes and ensured that they had received the training needed to fulfill their responsibilities in preventing, identifying, and reporting potential ADA violations. The lack of adherence at the departmental and military service levels to the department's established qualifications, training, and independence requirements for investigating officers undermines the reliability of the investigation process. Although DOD has taken steps to improve visibility over the investigation process, additional actions are needed to improve its ability to prevent, detect, investigate, and report on ADA violations.
To improve management and oversight of preliminary reviews and formal investigations of potential ADA violations, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take the following two actions to update the department's FMR: (1) require that ADA case files document that the investigating officer(s) selected to conduct a preliminary review or formal investigation is (are) free of personal, external, and organizational impairments and (2) require that DOD components maintain documentation of the date by which an investigating officer must receive refresher training in order to remain qualified to perform ADA reviews and investigations.

Additionally, we recommend that the Secretary of Defense direct the Secretary of the Army, the Secretary of the Navy, and the Secretary of the Air Force to take the following four actions: (1) implement and document processes, procedures, and controls to identify and help ensure that key funds control personnel, including funds certifying officials, are properly trained so that they can fulfill their responsibilities to prevent, identify, and report potential ADA violations; (2) implement and document processes, procedures, and controls to oversee and monitor compliance with DOD FMR provisions requiring the maintenance and use of a roster for selecting qualified ADA investigating officers; (3) develop, implement, and document policies and procedures to help ensure compliance with the DOD FMR requirements for investigating officer training; and (4) develop, implement, and document policies and procedures to help ensure compliance with the DOD FMR requirements for investigating officer independence.

We received written comments from the Acting Deputy Chief Financial Officer, which are reprinted in appendix II. DOD concurred with our recommendations and identified specific actions it has taken to implement them. On September 5, 2008, the department issued a memorandum to the Assistant Secretaries (Financial Management and Comptroller) of the Army, the Navy, and the Air Force, as well as other activities within the department, that detailed new requirements in the areas we recommended. The memorandum noted that the policy changes identified, which were effective immediately, would be included in the next update to the department's Financial Management Regulation. More specifically, the memorandum requires DOD components to document the processes, procedures, and controls used to identify key funds control personnel, including funds certifying officials; to train those individuals in appropriations law; to validate that the individuals have received appropriations law training within the last 5 years; or to take a combination of these actions. The memorandum also notes that DOD components must require these individuals to attend a refresher appropriations law course every 5 years. In addition, the memorandum directs DOD components to retain documentation in each ADA case file supporting that ADA investigators are qualified, trained, and free of personal, external, and organizational impairments. Furthermore, these documents must be provided to the Office of the Under Secretary of Defense (Comptroller) and the Deputy Chief Financial Officer when a formal investigation is initiated.
Each DOD component must also implement and document processes, procedures, and controls to oversee and monitor the maintenance and use of a roster for selecting qualified ADA investigators and establish a date by which each investigator must receive required refresher training.

We will send copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, and the Under Secretary of Defense (Comptroller). We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9095 or rasconap@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To determine whether existing Department of Defense (DOD) funds control systems, processes, and internal controls provide reasonable assurance that potential Antideficiency Act (ADA) violations will be prevented and detected, we interviewed DOD Comptroller officials and reviewed GAO and DOD Inspector General audit reports, including financial-related reports; performance reports; and reports specifically related to the ADA, compliance with laws and regulations, or both. Specifically, we reviewed prior GAO reports related to DOD business transformation; High-Risk Series reports; and business modernization, financial management, and problem disbursements reports. These reports document long-standing weaknesses related to funds control, a key element in being able to prevent or detect an ADA violation. DOD has acknowledged the financial management weaknesses reported by GAO and DOD auditors and the impact these weaknesses have on the reliability of the department's financial information. As a result, we did not perform additional work to substantiate the condition of DOD's financial management environment and internal controls.

We reviewed the DOD Financial Management Regulation (FMR), Volume 14, Administrative Control of Funds and Antideficiency Act Violations, to determine what controls and procedures had been established to help preclude ADA violations and prevent future occurrences of violations. In addition, we reviewed Office of Management and Budget (OMB) Circular No. A-11, Preparation, Submission, and Execution of the Budget, section 145, "Requirements for Reporting Antideficiency Act Violations." We interviewed DOD Comptroller and military service officials responsible for ADA programs at DOD or the military services to identify appropriate training for key funds control personnel and to obtain an understanding of the processes, procedures, and controls in place to ensure that key funds control personnel receive training.

To determine whether preliminary reviews and formal investigations of ADA violations are processed in accordance with applicable DOD regulations and criteria related to qualifications, training, independence, and timeliness of investigations, we reviewed applicable policies, procedures, and guidance contained in the DOD FMR. Additionally, we reviewed the President's Council on Integrity and Efficiency/Executive Council on Integrity and Efficiency, Quality Standards for Investigations, to obtain an understanding of the qualification, independence, and due professional care standards and criteria applicable to investigations.
We also reviewed all 54 ADA cases for the military services that were closed by DOD in fiscal years 2006 and 2007, along with the available military service rosters from which the investigating officers assigned to these case files were or should have been chosen, and we interviewed appropriate officials from the Army, the Navy, the Air Force, and the DOD Comptroller's Office responsible for ADA programs to determine how qualifications, training, and independence are ensured and documented. To assess investigating officers' qualifications, we focused our review on whether the investigating officers had received training and whether an internal control was in place to ensure that the investigating officers were free of personal, external, or organizational impairments to their independence in conducting an investigation. We did not verify their fields of specialty or areas of expertise.

To determine whether DOD tracks and reports metrics on preliminary reviews and formal investigations of ADA violations, we reviewed applicable policies, procedures, and program guidance contained in the DOD FMR. We also obtained and analyzed military service metrics for preliminary reviews and formal ADA investigations to ascertain whether the 54 ADA cases were completed within the time frames established by the DOD FMR. We also obtained and reviewed the DOD FMR, Volume 14, Administrative Control of Funds and Antideficiency Act Violations, chapter 6, "Status Reports on Investigations," to determine the reporting criteria.

With respect to disciplinary actions taken, we analyzed the 34 ADA case files in which DOD had concluded that an ADA violation had occurred to identify the disciplinary action taken. We compared the disciplinary action documented in each case file to the criteria set forth in the DOD FMR. We did not assess the appropriateness of the conclusions reached by DOD for the 54 closed ADA cases or the disciplinary actions taken in the 34 cases for which DOD concluded that an ADA violation had occurred.

The listing of 54 closed ADA cases for fiscal years 2006 and 2007 was obtained from the DOD Comptroller. We compared the listing of 54 closed ADA cases to information maintained by our Office of General Counsel on ADA cases reported to the President and the Congress, as filed with the Comptroller General, to ascertain the completeness and accuracy of the DOD listing. Our comparison, and subsequent follow-up with the DOD Comptroller's Office, found that in 34 of the 54 ADA cases investigated, DOD concluded that an ADA violation had occurred, and these cases were reported as required by law. For the remaining 20 ADA cases, DOD concluded that no ADA violation had occurred, and therefore these cases were not reported externally. We reviewed each case file to ensure that it contained the information set forth in the OMB circular and the DOD FMR.

We conducted our work at the Office of the Under Secretary of Defense (Comptroller); the Financial Management and Comptroller Offices of the Army, the Navy, and the Air Force; and 12 military service major commands. We conducted this performance audit from July 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We received written comments from the Acting Deputy Chief Financial Officer, which are reprinted in appendix II.

In addition to the contact named above, the following individuals made key contributions to this report: Darby Smith, Assistant Director; Evelyn Logue, Assistant Director; F. Abe Dymond, Assistant General Counsel; Lauren Catchpole; Francine DelVecchio; Jamie Haynes; Wil Holloway; Kristi Karls; Jason Kelly; Jason Kirwan; and Sandra Lord-Drakes.
Senate Report No. 110-77 directed GAO to review the Department of Defense's (DOD) procedures for Antideficiency Act (ADA) violations. GAO focused on whether (1) existing DOD funds control systems, processes, and internal controls provide reasonable assurance that ADA violations will be prevented or detected and whether key funds control personnel are trained; (2) investigations of ADA violations are processed in accordance with applicable DOD regulations; and (3) DOD tracks and reports metrics pertaining to its ADA investigations and what disciplinary actions are taken when ADA violations have occurred. GAO's review included all 54 ADA military service case files closed in fiscal years 2006 and 2007. GAO did not assess the appropriateness of the conclusions reached or of the disciplinary actions taken for the ADA cases.

DOD's complex and inefficient payment processes, nonintegrated business systems, and weak internal controls impair its ability to maintain proper funds control, leaving the department at risk of overobligating or overspending its appropriations in violation of the ADA. DOD Comptroller and military service financial management and comptroller officials responsible for the department's ADA programs have stated that because of weaknesses in DOD's business operations, knowledgeable personnel are critical to improving the department's funds control, and these officials have developed or are developing training courses. However, only the Army has attempted to identify key funds control personnel and determine whether they have received the training required by DOD regulations to provide them with the knowledge and skills to fulfill their responsibilities, including those under the ADA.

GAO's analysis of the 54 ADA cases and other documentation provided by the military services disclosed that the military services did not fully comply with DOD regulations intended to ensure that ADA reviews and investigations were conducted by qualified and independent personnel and were completed in a timely manner. More specifically, GAO found the following: (1) Only 6 of the 66 investigating officers assigned to the 54 ADA cases reviewed had received all of the required training. (2) Nineteen of the 54 ADA cases lacked documentation needed to determine whether the investigating officer was organizationally independent. Further, because the military services focused on organizational independence, they could not be assured that investigating officers were free of personal or external impairments to independence. (3) ADA investigations were generally not completed within the 15 months and 25 days set forth by DOD. Of the 54 ADA cases reviewed, 22 cases took over 30 months to complete and only 16 were completed on time.

GAO also noted that DOD, as required, reported the 34 cases in which it had concluded that an ADA violation had occurred to the President and the Congress, with a copy to GAO. For the remaining 20 cases, DOD concluded that an ADA violation had not occurred and therefore external reporting was not required. Further, DOD has taken steps to improve transparency over the ADA investigation process by requiring DOD components to report status information when an ADA investigation is initiated. Additionally, for the 34 ADA cases in which DOD concluded that an ADA violation had occurred, the nature of the disciplinary actions taken and reported to the President and the Congress was consistent with the criteria set forth in the DOD regulations.
The ADA requires that employees who are responsible for ADA violations be subject to appropriate administrative discipline. The DOD regulations specify that administrative discipline can range from no action to the termination of the individual's employment.
CDC estimates that contaminated food causes 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths in the United States each year. On the basis of the number of confirmed outbreaks of foodborne disease in 1997, the latest year for which CDC's data are available, seafood is one of the leading causes of foodborne illness outbreaks in the United States. Seafood products represented about 15 percent, or 26, of the 169 foodborne illness outbreaks from a confirmed source, a level greater than that associated with meat or poultry products, which are consumed at 8 and 6 times the rate of seafood, respectively. However, as we reported in 2001, CDC officials said that foodborne illness outbreaks are generally underreported and that it is easier to identify the source of some diagnosable illnesses, such as scombroid poisoning from seafood, than illnesses that result from nonspecific gastrointestinal symptoms caused by other foods. Moreover, the actual number of individual cases of illness resulting from traced outbreaks was higher for meat and poultry (619 and 353 cases, respectively) than the 108 cases for seafood. FDA stated that seafood outbreaks may have involved fewer individual cases of illness because seafood has much lower consumption rates than meat and poultry. FDA also noted that some seafood-related illnesses may be caused by recreational or subsistence fishing, over which the federal government has little or no control. The Center for Science in the Public Interest, a consumer interest group that works on nutrition and food safety issues, has used CDC data and other sources to track the number of reported food-poisoning outbreaks in the United States and estimates that seafood was responsible for 18 percent of the outbreaks of foodborne illnesses that the center tracked between 1990 and 2002.

Several types of hazards can cause seafood-related illnesses. Specifically:

Biological hazards include pathogens, such as Clostridium botulinum, Listeria monocytogenes, Salmonella species, and Staphylococcus aureus, and parasites, such as roundworms and tapeworms.

Chemical hazards include compounds such as methylmercury, which can cause illness from long-term exposure; residues from drugs unapproved for use in food animals, such as chloramphenicol and nitrofurans, or overuse of approved drugs that are sometimes used in aquaculture production, which may be carcinogenic or allergenic or may cause antibiotic resistance in humans; and marine toxins. According to FDA officials, two marine toxins with potentially serious health effects, scombrotoxin and ciguatoxin, cause most of the reported seafood-related illnesses, including gastrointestinal and neurological problems. These toxins are heat resistant and cannot be inactivated by cooking.

Physical hazards include foreign objects in food that can cause harm when eaten, such as glass or metal fragments.

Figure 1 shows the steady growth in U.S. consumption and imports of seafood between 1993 and 2002. According to data from NOAA's National Marine Fisheries Service, the United States imported about 4.2 billion pounds, or more than 80 percent, of its seafood in 2002, as shown in the figure. In addition, U.S. seafood consumption rose about 25 percent between 1980 and 2002, from 12.5 pounds per person to 15.6 pounds per person. Most seafood consumed in the United States is imported from an estimated 160 countries and 13,000 foreign processors.
In 2002, the top 6 seafood-exporting countries (Canada, China, Thailand, Chile, Ecuador, and Vietnam) accounted for approximately 63 percent of imported seafood. Imported products include fresh and frozen tuna and salmon as well as crustaceans, such as shrimp and lobsters. Figure 2 shows the proportion of imports to the United States from the 6 leading exporting countries.

A large and rapidly growing proportion of worldwide seafood production, including U.S. imports, is produced by aquaculture. In 2000, aquaculture represented about 27 percent of global seafood production, and aquaculture output has increased by an average of 9.2 percent annually since 1970, compared with an average increase of only 1.4 percent for captured seafood, according to the Food and Agriculture Organization of the United Nations. As in other animal production systems, aquaculture producers may use antibiotics and other chemicals to prevent or treat disease. Some producers have been found to misuse approved drugs or to use unapproved drugs or chemicals that pose potential human health hazards, such as antibiotic resistance, allergic reactions, or cancer. In recent years, food safety authorities in Europe, Canada, and the United States have begun to detect these substances and are taking steps to control their illegal use.

FDA is responsible for ensuring the safety of both domestic and imported seafood under the Federal Food, Drug, and Cosmetic Act. In 1997, following recommendations by the National Academy of Sciences and others, FDA adopted a program of preventive controls designed to identify hazards during the seafood-production process and minimize the risk of contamination. The HACCP regulations made seafood-processing firms responsible for identifying harmful microbiological, chemical, and physical hazards that are reasonably likely to occur and for establishing critical control points (CCP) to prevent and reduce contamination. The HACCP system is based on the following seven principles, each of which a seafood firm must address:

Conduct a hazard analysis: Identify hazards that are reasonably likely to occur.

Identify the CCP: Identify a point, step, or procedure in the production process where controls can be applied to prevent, eliminate, or reduce to an acceptable level a food safety hazard that is reasonably likely to occur.

Establish critical limits for each CCP: Set the maximum or minimum value at which parameters, such as cooking time and temperature, must be controlled at each CCP to prevent, eliminate, or reduce the hazard to an acceptable level.

Monitor each CCP: Establish monitoring activities that will ensure that the process is under control at each CCP.

Establish corrective actions: Define actions to be taken when monitoring discloses a deviation from established critical limits.

Establish verification procedures: Establish verification procedures to ensure that HACCP plans accomplish their intended goal of producing safe products.

Establish record-keeping and documentation procedures: Maintain documentation, including the HACCP plan, CCP monitoring, corrective actions, and verification activities.
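Taken together, the critical-limit, monitoring, corrective-action, and record-keeping principles amount to a simple control loop at each CCP. The following minimal sketch illustrates that loop for a hypothetical cooking-step CCP; the names, the 145-degree limit, and the corrective action shown are all illustrative assumptions, since actual limits and actions are defined in each firm's own HACCP plan:

```python
from dataclasses import dataclass

@dataclass
class CriticalControlPoint:
    """A CCP with one critical limit, per the HACCP principles above."""
    name: str
    parameter: str
    minimum: float  # critical limit: minimum acceptable value

# Hypothetical CCP -- real limits come from the firm's own HACCP plan.
cooking_step = CriticalControlPoint(
    name="Cooking", parameter="internal temperature (F)", minimum=145.0
)

def corrective_action(ccp: CriticalControlPoint, observed: float) -> None:
    # A real plan specifies the action (e.g., re-cook, hold, or destroy product).
    print(f"Deviation at {ccp.name}: {observed} {ccp.parameter} is below "
          f"the critical limit of {ccp.minimum}; apply corrective action.")

def monitor(ccp: CriticalControlPoint, observed: float, records: list) -> None:
    """Monitor the CCP, keep a record of the observation, and trigger
    a corrective action when the critical limit is not met."""
    in_control = observed >= ccp.minimum
    records.append((ccp.name, ccp.parameter, observed, in_control))  # record keeping
    if not in_control:
        corrective_action(ccp, observed)

log = []
monitor(cooking_step, observed=150.2, records=log)  # within the critical limit
monitor(cooking_step, observed=138.5, records=log)  # deviation triggers action
```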
Under the HACCP regulations, seafood-processing firms are responsible for conducting a hazard analysis and for developing and implementing HACCP plans for hazards that are determined to be reasonably likely to occur. These hazards may include marine toxins, microbiological contamination, chemical contamination, pesticides, drug residues, decomposition in certain species, parasites, the unapproved use of food or color additives, and physical hazards. For each hazard identified, the firms must establish CCPs to prevent or reduce contamination. Firms also must establish and monitor sanitation procedures to ensure, among other things, (1) the general cleanliness of food contact surfaces, including utensils, gloves, and outer garments, and (2) the control of employee health conditions.

As we reported in 2001, FDA has four approaches to verify compliance with HACCP regulations and ensure the safety of imported seafood. First, FDA has the authority to enter into voluntary agreements with individual countries on the basis of a determination that their seafood safety systems are equivalent to U.S. HACCP requirements. Under the provisions of the World Trade Organization Agreement on the Application of Sanitary and Phytosanitary Measures, to which the United States is a signatory, FDA is obligated to enter into consultations aimed at achieving equivalence agreements upon the request of other World Trade Organization member nations. FDA considers other systems to be equivalent when it finds one or more of an exporting country's food safety measures, such as laws, regulations, guidance, and procedures, to be equivalent to our own. U.S. importers can demonstrate HACCP compliance by acquiring seafood from countries with these agreements. Second, in the absence of such agreements, importers are responsible for demonstrating, through documentation, that the seafood they import into the United States is produced under systems that comply with U.S. HACCP requirements. During its periodic inspections, FDA reviews this documentation to determine whether importers have met their responsibilities under the HACCP regulations. Third, FDA inspects a limited number of foreign seafood firms to determine the firms' compliance with HACCP. Fourth, FDA selects a small number of individual shipments at U.S. ports of entry for visual examination and/or for collection and testing of samples to determine whether the seafood is misbranded or adulterated. FDA commented that detaining suspect imported seafood for physical or laboratory examination by the importer is also part of its import control strategy.

If FDA observes HACCP violations during its inspections and testing, it can take several regulatory actions. For example, FDA issues warning letters in cases where violations raise safety concerns that may lead to enforcement action, such as detention, seizure, or injunction, which is a court order to refrain from distributing a product. In the case of foreign firms, a warning letter could advise them of a forthcoming detention, the only enforcement action that is available. Firms that receive warning letters are asked to respond to FDA in writing to indicate what actions they will take to correct the identified problems.

To fund FDA's food safety programs, Congress provided $393 million for fiscal year 2002. This amount represents a $106 million increase over FDA's budget for fiscal year 2001 and includes a $93 million supplemental appropriation for counterterrorism activities, such as those under the Bioterrorism Act of 2002.
FDA used some of this increase to enhance its coverage of imported foods, including hiring over 600 new food safety investigators and laboratory personnel; increasing the number of port-of-entry examinations and the amount of laboratory testing; and conducting foreign inspections that focused on high-risk foods, including seafood.

Since our January 2001 report, FDA has made improvements to three of the four approaches it uses for ensuring the safety of imported seafood: importer inspections, foreign inspections, and port-of-entry inspections. FDA has not implemented either of the recommendations we made in our 2001 report regarding establishing equivalence agreements with exporting countries or communicating deficiencies found during inspections to FDA's port-of-entry personnel. Additionally, FDA continues to experience long delays in issuing warning letters or detaining imported seafood at U.S. ports of entry after investigators find serious deficiencies. By not taking timely regulatory action, FDA increases the likelihood that unsafe seafood will enter the U.S. market.

We found that FDA has made some progress in strengthening the efficacy of three approaches for ensuring the safety of imported seafood. However, the agency has made no progress regarding the development of equivalence agreements with seafood-exporting countries. Figure 3 summarizes the changes that have taken place in FDA's seafood safety program. As we reported in 2001, in the absence of equivalence agreements, U.S. seafood importers are required to maintain written product specifications and take at least one of six affirmative steps to document foreign firms' compliance with U.S. requirements. Figure 4 shows the regulatory requirements for importers and the documentation that importers can use to demonstrate compliance.

While importers have made some progress in maintaining the required documentation, they are still far from full compliance, according to our analysis of FDA's inspection forms for fiscal year 2002. Specifically, on the basis of our random sample of inspections, we estimate that importers had the required documentation for 48 percent of the products they imported, up from the 27 percent noted in our 2001 review. That is, an estimated 48 percent of imported seafood products listed in the FDA inspection forms had both (1) a written product specification document and (2) documentation for at least one of the six possible affirmative steps required by the regulations. In fiscal year 2002, FDA inspected fewer domestic importers, 402 of an estimated 8,500, compared with the 644 that the agency reports it inspected in fiscal year 1999.

Our analysis shows that FDA investigators made some errors when documenting these 2002 inspections. On the basis of our survey, we estimated that in about 4 percent of the inspection forms, FDA investigators erroneously indicated that the exporting country had an equivalence agreement in place for seafood. Therefore, they did not require the importer to produce the additional documentation required in the absence of an equivalence agreement (written product specifications and at least one affirmative step). FDA officials said the oversight occurred because the investigators had correctly determined that the importers received products from firms on a list of preferred providers developed by the Canadian Food Inspection Agency (CFIA), but the investigators erred in assuming that the preferred provider list meant that Canada had an equivalence agreement with the United States.
FDA officials said they will take steps to clarify the requirement with field personnel to avoid confusion in the future.

FDA also increased the number of foreign countries visited and seafood firms inspected since we last reported in 2001. FDA visited 13 of an estimated 160 countries in fiscal year 2002 to provide education on the U.S. HACCP requirements and to inspect 108 of about 13,000 seafood firms, compared with the 4 countries and 37 firms inspected in fiscal year 1999. FDA selects the countries for inspection on the basis of previous compliance problems, the volume of seafood exported to the United States, and the type of product and associated risk. Once it selects a country, FDA selects foreign firms that have a problematic compliance history and works with the country's inspection authority to identify other firms for inspection. According to the Director of FDA's Office of Seafood, FDA plans to inspect about 100 seafood firms in 10 or more foreign countries annually in the future. Although this represents fewer firms and countries than FDA inspected in 2002, it is more than FDA inspected in fiscal year 1999. These inspections tend to be targeted at developing countries that are major exporters to the United States.

FDA officials also said they have begun to increase laboratory testing of imported seafood, in particular for aquaculture drug residues, as a result of the increase in staff resources the agency received under the Bioterrorism Act of 2002. According to these officials, in fiscal year 2002, FDA had 310 full-time-equivalent positions for inspections and laboratory testing of all food, with 70 allocated for imported seafood; by fiscal year 2004, FDA estimates that it will have 681 positions, with at least 103 allocated for imported seafood. Furthermore, the proportion of foreign seafood products detained for laboratory testing increased slightly, from less than 1.0 percent in fiscal year 1999 to about 1.2 percent in fiscal year 2002, while imported seafood products increased by about 19 percent (from 3.7 billion to 4.4 billion pounds) over the same period. FDA officials expect laboratory testing to increase to about 1.4 percent of imported seafood products in fiscal year 2004, after the newly hired investigators and laboratory personnel are fully trained.

Although FDA stated in January 2001 that it planned to make progress toward accomplishing foreign equivalence assessments and had listed this goal as one of its priorities, the agency has not made progress in this regard. As a result, FDA still has no equivalence or other agreements with any seafood-exporting country. At the time of our 2001 report, FDA had not established any equivalence agreements with countries that export seafood to the United States. However, the agency was discussing equivalence agreements with Australia, Canada, and New Zealand and a compliance agreement with Japan. To expedite development of these agreements, we recommended that FDA develop specific goals and time frames for completing them. FDA did not agree with this recommendation, but it stated that accomplishing foreign equivalence assessments would be one of its priorities for fiscal year 2001. FDA officials now state that developing these agreements is no longer a priority because of several factors. First, they point out that equivalence agreements, as such, do not necessarily contribute to the enhanced safety of imported seafood.
Foreign producers are already required to produce seafood products under a HACCP-based system that provides a high level of assurance of safety; therefore, an FDA finding of equivalence of a foreign seafood regulatory program or individual seafood safety measures would be unlikely to substantially improve the safety of imported seafood. Second, FDA officials said that the United States does not require a finding of equivalence as a condition for exporting seafood to the United States. Third, the procedures and criteria necessary to conduct equivalence assessments have only recently been agreed upon at the international level by the Codex Alimentarius Commission. FDA is working with other U.S. agencies to consider how best to incorporate these international guidelines in situations where equivalence assessments might be helpful for either public health protection or trade facilitation. The Office of the U.S. Trade Representative (USTR), the agency responsible for developing and coordinating U.S. international trade policy, generally agreed with this view and also said that even with equivalence agreements, FDA would still be required to conduct compliance reviews and audits in these countries. Finally, both FDA and USTR said the time and resources required to develop equivalence agreements for seafood may outweigh the benefits.

We agree that establishing equivalence agreements would not automatically result in improved seafood safety. However, by establishing agreements with countries that are able to demonstrate that their safety systems are comparable to ours, FDA could free inspection resources and allow more extensive examination of seafood products from countries with less advanced systems. Because FDA does not have equivalence agreements with countries that export seafood to the United States, FDA principally relies on a review of documentation at importers' offices to attempt to determine whether importers have met their responsibilities and requirements under the seafood HACCP regulations. As we previously discussed in this report, FDA reported inspecting only about 8 percent of domestic importers in fiscal year 2003. Our panel of experts also concluded that equivalence agreements, or less comprehensive alternatives, represent an effective approach for ensuring the safety of imported seafood and would shift some of the burden for ensuring that imported seafood meets U.S. HACCP requirements to exporting countries. Furthermore, the panel suggested that FDA concentrate its efforts on first developing agreements with countries known to have high-quality food safety systems, thereby allowing FDA to focus its limited inspection resources on countries known to have lesser quality food safety systems.

We also acknowledge that time and resources are necessary factors in negotiating such agreements. However, we note that FDA has entered into similar agreements with several countries that export fresh and frozen shellfish products (oysters, clams, mussels, and whole or roe-on scallops) to the United States. By reaching agreements through individual memorandums of understanding with Canada, Chile, Mexico, New Zealand, and South Korea, FDA acknowledged that these countries' shellfish sanitation programs meet U.S. standards. If it chose to do so, FDA could enter into these types of agreements with countries that export other seafood products to the United States as well.
We also note that CFIA has established 14 agreements with foreign exporting countries, including agreements for seafood products. According to CFIA officials, these agreements allow CFIA to decrease the rate of inspection for products from participating countries and direct its resources to higher risk products from countries without such agreements. In addition, CFIA believes that such agreements provide a vehicle for increased communication, thereby allowing the exporting nation to take corrective actions at violating firms discovered during CFIA's verification inspections.

To ensure that FDA takes prompt regulatory action when its investigators find food safety violations during importer and foreign firm visits, we recommended in our 2001 report that FDA communicate deficiencies to port-of-entry personnel so that they can examine potentially contaminated imported seafood before it enters the United States. Although FDA agreed with this recommendation, we found that it continues to experience long delays between finding deficiencies and taking action, such as issuing a warning letter or detaining a product. As a result, potentially contaminated seafood could be entering the U.S. market.

Once FDA investigators complete an inspection of a U.S. importer's documentation or of a foreign firm's processing plant, they submit a recommendation and/or report to headquarters, which decides on regulatory action. As explained below, FDA issues either untitled letters or warning letters to inform responsible officials of violations found during the inspection and to afford the officials the opportunity to voluntarily take appropriate and prompt corrective action prior to the initiation of enforcement action. The use of these letters is based on the expectation that a majority of inspected firms will voluntarily comply. FDA issues untitled letters when the documented violations do not meet the criteria for detention. Untitled letters may address, for example, a foreign company's failure to have its HACCP plan list sulfites, an allergen; failure to monitor the safety of water; or failure to maintain the cleanliness of food contact surfaces. These letters do not set time frames for taking corrective action and do not require a response from the firm. FDA also issues warning letters when it finds violations that can directly affect product safety, such as the absence of controls for scombrotoxin, a toxin most commonly found in tuna, mahi-mahi, and bluefish that can cause severe allergic reactions and diarrhea. These letters could lead to enforcement action, such as product detention, if the company does not promptly and adequately correct the problem. To ensure prompt and adequate correction, FDA requires that warning letters be issued within 30 work days, or approximately 45 calendar days.

However, FDA is not required to issue letters to firms prior to taking enforcement action. The agency has the authority to take immediate enforcement action, such as detaining a firm's products. Under section 801(a) of the Federal Food, Drug, and Cosmetic Act, FDA can refuse admission of imported products on the basis of information indicating that the product "appears" to be in violation of food safety requirements. When violations remain uncorrected despite prior warnings, FDA headquarters notifies field offices by listing the firm and product on an Import Alert, which is ordinarily the next course of action.
According to FDA officials, now that the requirements of seafood HACCP are well established, the agency intends to use its refusal authority as the lead action, without prior warning, to prevent the products of problem foreign processors from entering the country. Our analysis of foreign firm inspections shows that the agency used this authority for one firm in fiscal year 2002.

According to our review of inspection records for 99 of the 108 foreign firms that the agency visited in fiscal year 2002, FDA is encountering significant delays in issuing warning letters when serious violations are identified. During its inspections, FDA found that of these 99 foreign firms, 40 had serious violations that warranted regulatory action. For 20 of these 40 firms, FDA decided to issue a warning letter. However, FDA took an average of 157 calendar days to issue these warning letters. As shown in figure 5, all 20 warning letters exceeded FDA's time frame requirement of approximately 45 calendar days. Fourteen of these 20 warning letters were issued to firms producing high-risk products, such as semipreserved fish products, including smoked, salted, and fermented fish that are susceptible to the growth of bacteria such as Clostridium botulinum. This bacterium produces a toxin that can cause gastroenteritis, vertigo, and respiratory failure. For the other 20 firms that did not receive warning letters, FDA issued untitled letters to 14 firms and is considering what action to take for the remaining 6 firms. Appendix III provides a more detailed analysis of FDA's foreign firm inspections in fiscal year 2002.

In addition to failing to issue warning letters in a timely manner, FDA encountered significant delays in alerting port-of-entry personnel to detain imported seafood shipments from firms identified with serious safety problems. On average, the agency took 348 calendar days to alert port-of-entry personnel about such products coming from 6 of the 99 foreign firms that the agency inspected in fiscal year 2002. Moreover, 4 of the 6 firms involved were processing high-risk products, which should have caused FDA to take more prompt enforcement action. By not taking timely enforcement actions and communicating these actions to U.S. port-of-entry personnel, FDA increases the likelihood that unsafe products will enter the U.S. market.

Similar delays occurred when FDA investigators found problems with U.S. importers' records. For the 96 inspection forms we reviewed, FDA found that 16 importers had serious violations, such as failure to have the required documentation. The agency issued warning letters to 8 of these importers. The average time elapsed between the date of the inspection and issuance of the warning letter was 103 calendar days; only 2 letters were issued within the required 45 calendar days. Furthermore, 5 of the warning letters covered high-risk products, including scombrotoxin-susceptible seafood, which, if not properly handled, could cause serious health problems requiring hospitalization, particularly for elderly individuals. FDA officials acknowledged that these delays are excessive and unacceptable and attributed them to a change in the personnel responsible for reviewing and issuing these letters. In addition, these officials stated that the time frames were exceeded because the agency has been compelled to give precedence to other public health concerns, such as developing programs to protect the food supply against terrorist threats.
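Surfacing delays of this kind requires little more than tracking inspection and issuance dates together, which is one reason the tracking system discussed below matters. The following minimal sketch, using entirely hypothetical firms and dates, shows how warning letters exceeding the approximately 45-calendar-day requirement could be flagged and the average elapsed time computed:

```python
from datetime import date

DEADLINE_DAYS = 45  # approximately 30 work days, per FDA's requirement

# Hypothetical (firm, inspection completed, warning letter issued) records.
letters = [
    ("Firm A", date(2002, 3, 1),  date(2002, 8, 5)),
    ("Firm B", date(2002, 5, 10), date(2002, 6, 20)),
    ("Firm C", date(2002, 7, 2),  date(2002, 8, 12)),
]

elapsed_days = []
for firm, inspected, issued in letters:
    days = (issued - inspected).days
    elapsed_days.append(days)
    status = "OVERDUE" if days > DEADLINE_DAYS else "on time"
    print(f"{firm}: {days} calendar days ({status})")

print(f"Average: {sum(elapsed_days) / len(elapsed_days):.0f} calendar days")
```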
Finally, we found that FDA does not give priority to enforcement actions when violations that pose the most serious public health risks occur, nor does it have an automated system for tracking the time involved in documenting, reviewing, and processing enforcement actions. As a result of increased funding, FDA recently increased the number of personnel responsible for reviewing and issuing these letters and expects to substantially improve its timeliness. Additionally, FDA is in the early stages of developing an automated system that will track the time involved in documenting, reviewing, and processing enforcement actions.

Several options could help FDA overcome some of the problems we identified with its current regulatory approach for ensuring the safety of imported seafood. These options could also help to augment FDA's inspections of foreign seafood firms, port-of-entry product examinations, and testing of imported seafood. However, each option presents certain challenges that FDA would need to address. First, NOAA could provide staff from its Seafood Inspection Program to augment FDA's inspection capabilities, and FDA is considering the advantages and disadvantages of doing so. However, some FDA officials are concerned about the cost of using NOAA and about a perceived conflict of interest because NOAA's inspections are fee-for-service. Second, FDA could contract with state regulatory laboratories to augment its current capacity to analyze imported seafood samples, but our expert panel and FDA officials said that most state laboratories might not have excess capacity to assist FDA. Third, FDA could use private laboratories to assist in screening seafood samples, provided that FDA first attests to the laboratories' capabilities to perform the work. Finally, if it has the authority, FDA could use third-party inspectors to conduct HACCP inspections of foreign processing firms and domestic importers; however, FDA would need to certify the inspectors' competency. FDA has not undertaken a comprehensive review of its legal authorities in this area.

NOAA officials said that they could assist FDA by providing various services to augment FDA's regulatory program for imported seafood. These services include HACCP training, port-of-entry inspection and product sampling, and assistance in developing and verifying equivalence or other types of agreements with seafood-exporting countries. NOAA officials also said that they could conduct some domestic seafood inspection services that FDA currently conducts, which would allow FDA to refocus some of its resources on imported seafood. For example, NOAA inspectors could certify domestic seafood products shipped to the European Union and other countries, a service that NOAA provided in the past on a fee-for-service basis. Also, FDA and NOAA could agree to recognize NOAA's current inspections of approximately 240 domestic processing firms and authorize NOAA to inspect other domestic firms for compliance with HACCP. NOAA officials estimate that they could provide FDA with up to 22 full-time-equivalent field inspectors as well as additional technical support staff in its headquarters office. In addition, NOAA and FDA officials are now negotiating the terms of an agreement to use two NOAA laboratories to screen imported shrimp samples for the antibiotic chloramphenicol. FDA is taking this action to increase its testing capacity in response to the detection of the drug in imported shrimp by food safety authorities in Europe, Canada, and some U.S. states.
Chloramphenicol is banned for use in food-producing animals because there is no known safe level for human ingestion of this substance. If the negotiations succeed, FDA would increase its screening capacity by 400 samples per year. FDA recognizes that it has the authority to use NOAA and is considering the advantages and disadvantages of doing so. While one official raised concerns about a public perception of potential conflicts of interest because NOAA inspections are fee-for-service, others said that this potential problem could be addressed in an agreement between the two agencies. Additionally, NOAA officials said that this concern could be alleviated, in whole or in part, through NOAA's receipt of direct appropriations to conduct these activities and/or through contracts with FDA that use appropriated funds. Also, FDA-sponsored inspector training and periodic audits of NOAA activities could further address such perceptions. FDA officials also pointed out that FDA would have to incur costs to train NOAA inspectors and would have to develop an agreement with NOAA specifying how NOAA would conduct inspections and investigations on FDA's behalf. We agree that FDA would need to incur additional costs to use NOAA inspectors and laboratories, but these costs may be less than those FDA would incur if the agency were to hire and train investigators and laboratory analysts without prior seafood experience.

FDA is testing only a small fraction of the seafood entering the United States, about 1.2 percent in fiscal year 2002. Our panelists and past GAO reports have stated that port-of-entry laboratory testing is an ineffective "overall" approach for ensuring the safety of imported seafood. Nevertheless, our panelists believed that increased testing is desirable as one approach for verifying the presence of biological, chemical, or drug residues. Therefore, they stated that using state regulatory laboratories, such as those of state departments of health or agriculture, to augment FDA's seafood testing would be beneficial because (1) state laboratories are well equipped for food testing and provide reliable results, (2) these laboratories have procedures in place that could meet FDA's standards for compliance testing, and (3) FDA's use of state laboratories could improve coordination and information exchange regarding seafood-testing results between state laboratories and FDA. However, the panelists noted a disadvantage to using state regulatory laboratories: many states are financially constrained and therefore may not have the excess capacity, equipment, time, or qualified analysts to assist FDA. Furthermore, if FDA were to consider using state laboratories to assist with port-of-entry testing, it would have to ensure that all laboratories are using appropriate sampling and testing methodology.

While FDA laboratory officials agreed that using state regulatory laboratories could be beneficial, they expressed some concerns regarding using the laboratories to support FDA regulatory action. FDA officials agreed that state regulatory laboratories are likely to have established chain-of-custody procedures; that is, state laboratories control a sample from the time they receive it through the sample analysis so that the sample is not inappropriately altered. Additionally, FDA officials said state laboratories would be required to meet all FDA analysis and data requirements.
However, using these laboratories may be a costly alternative because FDA would have to provide training and oversight in addition to paying for the analyses themselves. Furthermore, FDA officials noted that states may not have excess capacity to assist FDA. Despite these concerns, FDA is considering a pilot program with Florida to determine how it could use state laboratory results. This pilot program is similar to FDA's proposed agreement with NOAA for testing imported shrimp for the drug chloramphenicol. Under the proposed pilot program with Florida, FDA would collect the samples, and the state laboratory would screen them for traces of chloramphenicol residues. The state laboratory would also perform the more sophisticated confirmation testing on the positive screens, which FDA could then use to take regulatory action. According to FDA officials, the agency must first determine the level of seafood sampling to perform given its other competing public health priorities. They said that considerable funding would be required to establish a meaningful laboratory assistance program with outside sources.

Currently, FDA does not accredit or use any private laboratories to collect or analyze seafood samples. However, for some seafood violations, it does allow seafood firms to use private laboratories to provide evidence that imported seafood previously detained because of safety concerns is now safe and can be removed from the detention list at the port of entry. To assist FDA in analyzing more imported seafood, our panel recommended that FDA accredit private laboratories that comply with FDA's testing methodologies. This option would also provide FDA with greater assurance about the quality of the laboratories importers use to demonstrate that their detained products are safe and can be released into commerce. FDA officials said that using private laboratories to conduct screenings could result in increased analytical capacity, but this option would require more agency oversight, thereby making it a costly alternative. We note, however, that FDA currently accepts the results from private laboratories that importers provide to the agency to demonstrate that products detained at ports of entry are safe and can be released into commerce. FDA also noted that these private laboratories generally follow the appropriate methodology for sampling, documentation, chain of custody, and analysis. The agency performs a detailed review of a laboratory's sampling and testing methodology for each individual submission to FDA, but this review is not an overall quality assurance review of the entire laboratory and should not be taken as a general endorsement of the submitting laboratory.

As with state laboratories, if FDA were to use private laboratory results to take regulatory action, it would be required to provide training and oversight in addition to funding. However, FDA officials stated that, in their view, these laboratories are generally not equipped to perform confirmation testing because of the expense and expertise required. Furthermore, since private laboratories would continue to provide laboratory analysis to the industry that FDA regulates, the agency would have additional responsibilities to eliminate conflicts of interest and protect any regulatory testing from bias.

In the absence of equivalence agreements, FDA could consider developing a program that uses certified third-party firms to conduct HACCP inspections on its behalf, both at foreign processing firms and at domestic importers.
The Department of Health and Human Services has begun to take this approach by accrediting third parties to inspect manufacturers of medical devices, as authorized by Congress. However, no similar specific legislation exists permitting third-party inspection of seafood firms, and FDA has not undertaken a comprehensive review of its authorities to accredit private third parties to inspect seafood firms. Our expert panel believes that industry should pay for the services of these third parties, thereby shifting some of the cost burden away from FDA. Following this approach, FDA could inspect more foreign firms and importers without incurring substantial additional costs. However, FDA is concerned that a fee-for-service arrangement for these services would create a public perception of a conflict of interest. According to our panel, to combat this potential problem, FDA would have to implement a system of oversight to ensure that the third parties are adequately performing their duties. Finally, domestic importers could use the accredited third-party firms to demonstrate that their seafood products were processed in accordance with HACCP requirements. Since FDA first issued the HACCP regulations for seafood safety in 1997, U.S. seafood importers and foreign firms have made some progress in implementing and demonstrating compliance with FDA's seafood safety requirements. However, FDA is still verifying compliance at only a small number of seafood importers and foreign firms. Similarly, FDA's port-of-entry product examination and testing is, and will continue to be, limited. In addition, FDA is no longer making it a priority to negotiate equivalence agreements with seafood-exporting countries, which remains one of the most effective methods for ensuring the safety of imports. Indeed, our panel of seafood safety experts believes that these agreements would help FDA reduce its reliance on importer and port-of-entry inspections and would enable the agency to leverage its staff resources by sharing the responsibility for seafood safety with exporting countries, especially those that are known to produce safe seafood. Given the lack of timely compliance and enforcement action, FDA's efforts to ensure the safety of imported seafood continue to provide insufficient protection to consumers. Unless other options for strengthening these efforts are explored, the risk of unsafe products being released into the U.S. market will continue. To more efficiently and effectively monitor the safety of imported seafood, we recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to work toward developing a memorandum of understanding with NOAA that leverages NOAA's Seafood Inspection Program's resources. The memorandum of understanding should address mutually agreeable protocols and training programs that are necessary to begin using NOAA employees to provide various services. Those services could include inspections of foreign firms, importer inspections, port-of-entry examinations and sample collections, and laboratory analyses.
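The screening-and-confirmation workflow that runs through the laboratory options above (inexpensive screens on every collected sample, with more sophisticated confirmation testing reserved for positive screens, and regulatory action based only on confirmed results) can be sketched as a toy pipeline. The thresholds, sample data, and function names are invented for illustration and do not reflect FDA's actual methods or action levels.

    # Toy two-stage testing pipeline: cheap screening on every sample,
    # costly confirmation only on positive screens, and referral for
    # regulatory action only on confirmed positives. All values invented.
    SCREEN_LIMIT_PPB = 0.3   # hypothetical screening threshold
    CONFIRM_LIMIT_PPB = 0.3  # hypothetical confirmation threshold

    samples = [
        {"id": "S-001", "screen_ppb": 0.0},
        {"id": "S-002", "screen_ppb": 0.4},  # screens positive
        {"id": "S-003", "screen_ppb": 0.1},
    ]

    def confirmation_test(sample_id: str) -> float:
        # Stand-in for a more sophisticated (and more expensive) assay.
        return 0.5 if sample_id == "S-002" else 0.0

    for sample in samples:
        if sample["screen_ppb"] < SCREEN_LIMIT_PPB:
            continue  # negative screen: no further testing needed
        if confirmation_test(sample["id"]) >= CONFIRM_LIMIT_PPB:
            print(sample["id"], "confirmed residue; refer for regulatory action")
        else:
            print(sample["id"], "screen not confirmed; release")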
To strengthen FDA's current imported seafood program and ensure the safety of seafood consumed in the United States, the Commissioner of FDA should take the following five actions: (1) make it a priority to establish equivalence or other similar types of agreements with seafood-exporting countries, starting first with countries that have high-quality food safety systems; (2) develop and implement a system to track the time involved in documenting, reviewing, and processing regulatory and enforcement actions, such as issuing warning letters and detaining unsafe products, so that FDA can identify the reasons for the delays and take actions to address them; (3) give priority to taking enforcement actions when violations that pose the most serious public health risk occur; (4) consider the costs and benefits of implementing an accreditation program for private laboratories; and (5) explore the potential of implementing a certification program for third-party inspectors, which would involve reviewing FDA's legal authorities and considering the costs and benefits, including developing and implementing the standards, controls, and oversight necessary to provide FDA with reasonable assurance that third-party inspectors are qualified and independent. We provided FDA and NOAA with a draft of this report for review and comment. We received written comments from the Commissioner, FDA, which are presented in appendix IV. FDA also provided technical corrections, which we have incorporated into the report as appropriate. We received a letter from the Chief Administrative Officer, NOAA, stating that the agency did not have any comments. The letter is presented in appendix V. Regarding the six specific recommendations we made in this report, FDA generally concurred with five and disagreed with one. FDA generally concurred that it should (1) work toward developing a memorandum of understanding with NOAA that leverages NOAA's Seafood Inspection Program's resources; (2) develop and implement a system to track the time involved in documenting, reviewing, and processing regulatory and enforcement actions so that FDA can identify the reasons for the delays and take actions to address them; (3) give priority to taking enforcement actions when violations that pose the most serious public health risk occur; (4) consider the costs and benefits of implementing an accreditation program for private laboratories; and (5) explore the potential of implementing a certification program for third-party inspectors. Since we will be reviewing FDA's implementation of third-party inspections under the Medical Device User Fee and Modernization Act of 2002, FDA could use the results of this review in assessing the potential to use third-party inspectors for imported seafood. FDA did not concur with our recommendation to make it a priority to establish equivalence or other similar types of agreements with seafood-exporting countries, starting first with countries that have high-quality food safety systems. In commenting on this recommendation, FDA said the agency is not currently positioned to assign high priority to negotiating equivalence or other types of agreements with numerous countries that export seafood to the United States in light of the pressing priorities associated with implementation of the Bioterrorism Act. FDA also said that establishing these agreements is extraordinarily resource intensive.
We agree that the process for creating these agreements is complex and resource intensive; however, we continue to believe that it should be a priority for FDA to negotiate equivalence or other less comprehensive agreements with seafood-exporting countries to leverage its limited inspection resources. Additionally, FDA should view the creation of these agreements as a long-term investment in improving imported seafood safety. In the absence of equivalence or other agreements such as memorandums of understanding with seafood-exporting countries, FDA must continue to rely principally on reviews of importer records to determine whether imported seafood is produced under acceptable food safety systems. FDA also raised some concerns about inferences that could be drawn from the report. For example, FDA said that our draft report implied that seafood has a higher likelihood of causing foodborne illness than other foods on the basis of a comparison of the number of foodborne illness outbreaks in the United States from seafood-related causes with the number from meat and poultry. FDA also said that our draft report did not acknowledge that foodborne illness outbreaks associated with seafood also include those from recreational and subsistence fishing, over which the federal government has little or no control. We modified this report to include the actual number of cases associated with seafood and meat and poultry outbreaks. We also added CDC's observation that foodborne illness outbreaks are generally underreported and that it is easier to identify the source of some diagnosable illnesses, such as scombroid poisoning from seafood, than illnesses that result from nonspecific gastrointestinal symptoms caused by other foods. Additionally, we added FDA's comment that some seafood-related illnesses may be caused by recreational or subsistence fishing, over which the federal government has little or no control. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to interested congressional committees; the Secretary of Health and Human Services; the NOAA Administrator; the United States Trade Representative; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VI. To reevaluate the Food and Drug Administration's (FDA) program for ensuring the safety of imported seafood and determine the status of efforts to implement our previous recommendations, we interviewed cognizant government and industry officials. Specifically, we interviewed officials and/or reviewed documents from the following FDA units: Center for Food Safety and Applied Nutrition's Office of Seafood, Office of Compliance, and Office of Constituent Operations; Office of Regulatory Affairs' Office of Enforcement, Office of Regional Operations, and Office of Resource Management; Office of Chief Counsel; and Office of International Programs. We also visited the FDA district office in Bothell, Washington, where large volumes of seafood are processed, and we met with FDA officials to discuss relevant regulations, policies, and procedures. We also visited two U.S.
importers to observe FDA's importer inspection process firsthand and to discuss their views. To assess the progress that FDA has made since our 2001 report, we analyzed the agency's inspection records of U.S. importers. Specifically, we randomly selected a probability sample of 117 inspections from a list of 415 importer inspections that nominally represented all importer inspections conducted by FDA for fiscal year 2002. From this sample, 13 inspections were outside the scope of this assignment—for example, they were for molluscan shellfish or the seafood actually was a domestic product. We also could not use 8 additional in-scope inspections, either because FDA could not locate complete documentation (6 inspections) or because FDA did not complete a standardized inspection form (Form 3502) at the time of the inspection (2 inspections). For the 96 in-scope inspections for which documentation was found, we analyzed the Form 3502 that investigators completed for each imported seafood product during fiscal year 2002. The 96 inspections were associated with a total of 112 Forms 3502. Because we followed a probability procedure based on a random selection of inspections (and thereby products), our sample is only one of a large number of samples we might have drawn. Since each sample could have provided different estimates, we express the confidence in the precision of our particular sample's results as 95 percent confidence intervals (e.g., ± 7 percentage points). These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. The estimate that 48 percent of U.S. importers' products had the required documentation is surrounded by a 95 percent confidence interval that ranges from 36 percent to 60 percent. We estimated that 4 percent of the FDA inspection forms erroneously indicate that the United States has an equivalence agreement with the exporting country. This estimate is surrounded by a 95 percent confidence interval that ranges from 1 percent to 10 percent. To assess FDA's progress with regard to inspections of foreign firms, we obtained 107 of 108 foreign inspection reports for fiscal year 2002 for the 13 countries that FDA visited—Brazil, China, Costa Rica, Honduras, Iceland, Jamaica, Mexico, Poland, Taiwan, Thailand, Trinidad and Tobago, Uruguay, and Vietnam. Of these 107 inspection reports, we removed 8 because they covered shellfish, which was outside the scope of our review. We compared FDA's findings for the remaining 99 inspections with FDA's actions at U.S. ports of entry. For the sample of importer inspections and the entire set of the foreign firm inspections, FDA provided inspection results in hard copy because FDA investigators do not transmit information electronically. FDA also provided us with summary data from the system used to maintain inspection results for our analyses of foreign firm and importer inspections. We conducted a data reliability assessment of the importer and foreign firm inspection information, which indicated that the data and data systems used by FDA were sufficiently reliable and complete to perform our analyses. To assess the time frames for issuing warning letters and other pertinent information, we analyzed the 20 warning letters FDA issued following its foreign firm inspections and the 8 warning letters FDA issued following its U.S.
importer inspections conducted during fiscal year 2002 that FDA determined warranted enforcement action. Recognizing FDA's time frame of 30 work days to process a warning letter, we did not consider any warning letter issued within 45 calendar days after the date of inspection as having exceeded FDA's issuance time frame. In addition, we interviewed and/or received documents from the National Oceanic and Atmospheric Administration's (NOAA) National Marine Fisheries Service, Seafood Inspection Program, and National Sea Grant Program. To obtain industry's views on the Hazard Analysis and Critical Control Point (HACCP) system for seafood and FDA's oversight of seafood firms, we also met with the National Fisheries Institute—a seafood trade association whose membership includes domestic and international firms. We also met with the Center for Science in the Public Interest—a consumer organization focusing on nutrition and food safety—which investigates and reports on outbreaks of foodborne illnesses. Finally, we spoke with officials from the Canadian Food Inspection Agency to discuss their regulations for ensuring the safety of imported seafood and to gain insight on agreements that Canada established with other foreign countries' food inspection authorities. We also received information from the Department of Agriculture about its program for requiring equivalence determinations before allowing exported meat and poultry products to enter the United States. However, the scope of this review did not include exploring whether Agriculture could make inspection or other resources available to augment FDA's seafood inspection program. To explore other options for enhancing FDA's existing imported seafood safety program, we assembled a panel of recognized experts on the following seafood-related areas: seafood policy, laws, and regulations (including HACCP); public health, epidemiology, and microbiology; risk management and assessment; and international trade policy. With advice from the National Academies, we selected 63 seafood safety experts as potential panelists. From these 63 contacts, we chose the final nine panelists on the basis of the following criteria: (1) recommendations we received from the National Academies and participation on previous academy panels; (2) recommendations from others knowledgeable in the field of seafood safety; (3) the individual's area of expertise and experience; (4) the type of organization represented, including academic institutions, seafood industry, trade groups, and consumer groups; and (5) geographic representation. (The names and affiliations of the panel members are listed in app. II.) On July 2, 2003, we held an all-day meeting with the nine panelists at our office in Washington, D.C. Before the meeting, we provided each panel member with a set of four general discussion questions. At the end of each discussion, we asked the panelists to respond, using an anonymous ballot, to a set of questions that were based on the general discussion topics. We recorded and transcribed the meeting to ensure that we accurately captured the panel members' statements. We conducted our review from February 2003 through November 2003 in accordance with generally accepted government auditing standards.
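The 95 percent confidence intervals reported earlier in this appendix can be approximated with the standard normal-approximation formula for a proportion. The sketch below is illustrative only; the intervals actually reported also reflect the sampling design (for example, finite-population corrections), so the figures it prints are somewhat narrower than the reported 36 to 60 percent range.

    import math

    def proportion_ci(p_hat: float, n: int, z: float = 1.96):
        """Normal-approximation confidence interval for a proportion.

        p_hat: sample proportion; n: sample size; z: critical value
        (1.96 corresponds to a 95 percent confidence level).
        """
        se = math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - z * se, p_hat + z * se

    # Roughly mirroring the estimate that 48 percent of importers' products
    # had required documentation, based on the 112 product forms reviewed.
    low, high = proportion_ci(0.48, 112)
    print(f"approximate 95% CI: {low:.0%} to {high:.0%}")  # about 39% to 57%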
This appendix provides the names and affiliations of our expert panel members and summarizes the discussions held at the all-day meeting. The information presented in this appendix may not represent the views of every member of the panel, nor should it be considered to represent the views of GAO. The following individuals were members of our expert panel on the safety of imported seafood: Haejung An, Associate Professor, Department of Nutrition and Food Science, Auburn University; Tom Chestnut, Vice President, Total Quality, Darden Restaurants; Bob Collette, Vice President, Science and Technology, National Fisheries Institute; Cameron Hackney, Dean, Davis College of Agriculture, Forestry and Consumer Sciences, West Virginia University; Michael Jahncke, Director, Virginia Seafood Agricultural Research and Extension Center, Virginia Polytechnic Institute and State University; Michael Moody, Professor and Head, Department of Food Science; W. Steven Otwell, Professor, Seafood Technology, Department of Food Science and Human Nutrition, University of Florida; Barbara Rasco, Associate Professor, Department of Food Science and Human Nutrition, Washington State University; and Caroline Smith DeWaal, Director, Food Safety Program, Center for Science in the Public Interest. The panelists discussed two overarching themes: (1) changes that FDA has made to improve its ability to ensure imported seafood safety and (2) options for improving FDA's current regulatory approach. Since our last report on this matter in 2001, FDA has made changes to its approach for ensuring the safety of imported seafood. Panelists specifically discussed these changes, including (1) a shift in focus from inspecting foreign countries' entire food safety systems for equivalence to inspecting more foreign firms for HACCP compliance, (2) a slight increase in the number of port-of-entry examinations and laboratory testing of imported seafood, and (3) an increase in testing for aquaculture drug residues. Specifically: Panelists suggested that inspecting a small number of foreign firms for HACCP compliance, rather than inspecting foreign countries' entire food safety systems for equivalence, is ineffective because FDA only inspects about 100 seafood firms in 10 countries annually, out of a universe of an estimated 13,000 firms in about 160 countries. Panelists believed that increasing the number of port-of-entry examinations and laboratory testing for imported seafood, while desirable, would be ineffective because this approach is not consistent with the preventative HACCP approach. Because regulatory authorities around the world are increasingly finding aquaculture drug residues, the panelists believed that more testing for drug residues would be a valuable verification step in an effective HACCP system. Furthermore, panelists believed that FDA should shift its focus to the source of production to prevent the abuse of legal substances or the use of banned aquaculture drugs. Panelists recommended that FDA establish equivalence agreements in order to more efficiently utilize its limited resources.
They believed that equivalence agreements would be more effective than FDA's direct inspection of foreign firms for ensuring HACCP compliance and would also allow the agency to focus resources on the countries, firms, and products that pose the greatest risk, thereby shifting the burden for HACCP compliance from FDA to foreign governments and foreign firms. Panelists stated that such agreements should not imply that FDA must find a foreign government's seafood safety system "equal" to that of the U.S. system. For example, panelists said that FDA should have flexibility in terms of what it considers equivalent and should also consider alternatives to country-to-country agreements (e.g., product-to-country, company-to-country, and hazard-specific agreements). The panel recommended that FDA first consider one-way equivalence agreements, with countries from which the United States imports large quantities of seafood but to which it does not export significant quantities. Although panelists noted that two-way agreements are preferred, they believed that using one-way equivalence agreements initially would better ensure that foreign firms are meeting U.S. standards. However, U.S. seafood exporters may object to one-way agreements, arguing that these would favor the foreign countries, which may have barriers to U.S. exports. Panelists recommended that FDA establish a timeline for agreements, although there was no consensus on the best way to develop this timeline. Possible suggestions included a phased-in process, based on the quantity of exports to the United States, and the establishment of agreements based on the willingness of participants. Panelists believed that Congress should mandate that FDA establish equivalence agreements; however, FDA should be allowed to determine how the agreements are structured and implemented. The panel also expressed concern that our trading partners could view mandating equivalence as protectionist. Additionally, panelists said FDA should still implement third-party certification and auditing if equivalence is mandated. The panel believed that FDA should provide additional training and education to foreign governments and foreign firms on HACCP requirements, and that industry should pay for this training. Panelists recommended that FDA identify competent inspection authorities to establish lists of preferred suppliers, in which the foreign government inspects firms wishing to export to the United States, to assure the agency that these firms meet HACCP requirements. By adopting this approach, FDA could then target inspection and testing resources to nonpreferred suppliers. Panelists recommended that FDA develop an accreditation program for private laboratories that demonstrate compliance with FDA's testing methodologies. FDA could then establish a list of approved, accredited domestic laboratories to augment its port-of-entry testing for compliance and enforcement. Additionally, domestic importers could use the accredited foreign and domestic laboratories to demonstrate, through testing, that their seafood products were processed in accordance with HACCP requirements. Panelists believed that most domestic private laboratories are capable of meeting FDA's standards, such as sample chain-of-custody, laboratory procedure, and qualified analysts, and could provide timely results. Panelists recommended that FDA establish a standardized program to certify private, third-party inspectors to conduct HACCP inspections of foreign processing firms and domestic importers.
The third-party inspectors would be paid for by industry and monitored by FDA, thereby allowing for more foreign firm and importer inspections at little additional cost to FDA. Panelists recommended that FDA place more responsibility on foreign governments to ensure that foreign firms are aware of, and are meeting, their responsibilities under HACCP. Under an effective HACCP system, the panelists felt that FDA’s emphasis should be on inspection and testing in the foreign country where the seafood is harvested and processed and where hazards are introduced. Panelists recommended that when problems are discovered as a result of inspections of foreign firms or importers, FDA should discuss with the exporting countries how to prevent these problems from reoccurring. Panelists suggested that state regulatory laboratories (e.g., those operated by the state Department of Health or Agriculture) may be a good option for assisting FDA in testing imported seafood products, particularly in those states with ports and seafood industries. State laboratories provide comparable testing for state regulatory authorities and have procedures in place that could meet FDA’s standards for compliance testing. State laboratories are also well equipped for food testing and provide reliable results. Panelists did note, however, that most states are financially constrained, and therefore state laboratories may not have any excess capacity (e.g., qualified analysts, equipment time, or laboratory space) to analyze additional samples for FDA. Furthermore, in order to use the facilities, FDA would need to harmonize testing methodologies. Panelists suggested that FDA use the National Marine Fisheries Service laboratories in Pascagoula, Mississippi, and Seattle, Washington, to augment testing at ports of entry. Panel members believed that this was a good option for FDA because these laboratories currently conduct seafood research and testing. Panelists did not recommend that FDA use academic laboratories for testing at ports of entry. They stated that most academic laboratories are not structured to do compliance testing and would not meet FDA’s standards for chain of custody of the samples or acceptable documentation for compliance or enforcement actions. The following are GAO’s comments on the Food and Drug Administration’s letter dated January 8, 2004. 1. We modified our report to state that although FDA does not have an automated system for computing the time it takes to review warning letter and untitled letter recommendations, it is in the early stages of developing such a system. This system will enable FDA to track the time involved in documenting, reviewing, and processing enforcement actions. 2. We modified this report to include the actual number of cases associated with seafood and meat and poultry outbreaks. We also added the Centers for Disease Control and Prevention’s observation that foodborne illness outbreaks are generally underreported and that it is easier to identify the source of some diagnosable illnesses, such as scombroid poisoning from seafood, than illnesses that result from nonspecific gastrointestinal symptoms caused by other foods. Additionally, we added FDA’s comment that some seafood-related illnesses may be caused by recreational or subsistence fishing, over which the federal government has little or no control. 3. As shown in our report, FDA inspects only a small percentage of U.S. importers, examines and samples a very small amount of imported seafood at U.S. 
ports of entry, and inspects few seafood firms in foreign countries each year. In the absence of equivalence or other agreements such as memorandums of understanding with seafood-exporting countries, FDA must continue to rely principally on reviews of importer records to determine whether imported seafood is produced under acceptable food safety systems. For these reasons, we continue to believe that FDA should develop such agreements as quickly as possible. Moreover, FDA acknowledged in its final HACCP rule, issued in December 1995, that in the absence of significant numbers of agency inspections of foreign processing facilities, a memorandum of understanding can be the most efficient and effective mechanism for ensuring that foreign processing plants are operating in compliance with the requirements of the regulations. 4. We modified this report to include FDA's basis for issuing these letters. 5. We acknowledge that establishing equivalence or other agreements is complex and resource intensive. However, we continue to believe, as supported by our panel of nationally recognized food safety experts, that equivalence agreements or less comprehensive alternatives, such as compliance agreements or memorandums of understanding, represent a more effective long-term approach for ensuring the safety of imported seafood and would allow FDA to leverage its staff resources by shifting some of its regulatory burden to exporting countries. Also, U.S. importers would be able to rely on the foreign regulatory authority to ensure compliance with HACCP requirements by foreign processors. Also see comment 3. 6. Our report recognizes that FDA is beginning to take action to develop an automated system to track the time involved in documenting, reviewing, and processing regulatory actions. Also see comment 1. In addition to the individuals named above, John C. Smith, Kenya Jones, and Lisa Vojta made key contributions. Other contributors included Aldo Benejam, Oliver Easterwood, Lynn Musser, Cynthia Norris, Paul Pansini, Katherine Raheb, Carol Herrnstadt Shulman, Sidney Schwartz, and Kathy Summers.
More than 80 percent of the seafood that Americans consume is imported. The Food and Drug Administration (FDA) is responsible for ensuring that imported seafood is safe and produced under sanitation and safety systems comparable to those of the United States. Since GAO reported in 2001 that FDA's seafood inspection program did not sufficiently protect consumers, additional concerns have arisen about imported seafood containing banned substances, such as certain antibiotics. In this review, GAO was asked to evaluate (1) FDA's progress in implementing the recommendations in the 2001 report and (2) other options to enhance FDA's oversight. Since GAO's January 2001 report, FDA's imported seafood safety program has shown some improvement. FDA inspects more foreign firms, and its inspections show that more U.S. seafood importers are complying with its requirements. FDA also slightly increased its testing of seafood products at U.S. ports of entry, to just over 1 percent. However, FDA still has not established equivalence agreements with seafood-exporting countries as GAO recommended in its 2001 report. Equivalence agreements that commit U.S. trading partners to maintain comparable food safety systems are an efficient way to ensure imported seafood safety. Unlike the U.S. Department of Agriculture, FDA is not legally required to certify that countries exporting food products to the United States have equivalent food safety systems. According to a panel of nationally recognized experts that GAO convened to address this and other issues, establishing these types of agreements would shift some of FDA's burden for ensuring seafood safety to foreign governments. This shift, in turn, would allow FDA to focus its limited resources on seafood products from countries with less advanced food safety systems. FDA also made little progress regarding the recommendation GAO made in 2001 that FDA communicate to U.S. port-of-entry personnel serious deficiencies identified during inspections so that potentially contaminated imported seafood is examined before it enters the United States. GAO found that FDA continues to experience long delays between finding deficiencies and taking action. For example, GAO's review of foreign firm inspection records found that it took an average of 348 days for FDA to alert port-of-entry personnel about serious safety problems identified at six foreign firms. Moreover, GAO found that FDA does not prioritize enforcement actions when violations that pose the most serious public health risk occur, nor does it have an automated system to track the time involved in documenting, reviewing, and processing enforcement actions. FDA officials acknowledged some of the problems that GAO identified regarding FDA's current imported seafood inspection program, but they also raised concerns about limited inspection resources and competing priorities, such as the recent need to implement provisions of the Bioterrorism Act of 2002. GAO identified several options that FDA could consider to augment its resources and enhance its current program, including (1) commissioning seafood inspectors from the National Oceanic and Atmospheric Administration's (NOAA) Seafood Inspection Program, (2) using state regulatory laboratories and/or private laboratories to augment FDA's testing of imported seafood, and (3) developing a program to use third-party inspectors to augment its program.
Critical infrastructures are physical or virtual systems and assets so vital to the nation that their incapacitation or destruction would have a debilitating impact on national and economic security, public health, and safety. These systems and assets—such as the electric power grid, chemical plants, and water treatment facilities—are essential to the operations of the economy and the government. Recent terrorist attacks and threats have underscored the need to protect our nation’s critical infrastructures. If vulnerabilities in these infrastructures are exploited, our nation’s critical infrastructures could be disrupted or disabled, possibly causing loss of life, physical damage, and economic losses. Although the vast majority of our nation’s critical infrastructures are owned by the private sector, the federal government owns and operates key facilities that use control systems, including oil, gas, water, energy, and nuclear facilities (see fig. 1). Control systems are computer-based systems that are used within many infrastructures and industries to monitor and control sensitive processes and physical functions. Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. Control systems perform functions that range from simple to complex. They can be used to simply monitor processes—for example, the environmental conditions in a small office building—or to manage the complex activities of a municipal water system or a nuclear power plant. In the electric power industry, control systems can be used to manage and control the generation, transmission, and distribution of electric power (see fig. 2). For example, control systems can open and close circuit breakers and set thresholds for preventive shutdowns. The oil and gas industry uses integrated control systems to manage refining operations at plant sites, remotely monitor the pressure and flow of gas pipelines, and control the flow and pathways of gas transmission. Water utilities can remotely monitor well levels and control the wells’ pumps; monitor flows, tank levels, or pressure in storage tanks; monitor water quality characteristics such as pH, turbidity, and chlorine residual; and control the addition of chemicals to the water. Control systems are also used in manufacturing and chemical processing. Chemical reactors may use control systems to produce chemicals or regulate temperatures within the production process. Installing and maintaining control systems requires a substantial financial investment. DOE cites research estimating the value of the control systems used to monitor and control the electric grid and the oil and natural gas infrastructure at $3 billion to $4 billion. The thousands of remote field devices represent an additional investment of $1.5 billion to $2.5 billion. Each year, the energy sector alone spends over $200 million for control systems, networks, equipment, and related components and at least that amount in personnel costs. There are two primary types of control systems: distributed control systems and supervisory control and data acquisition (SCADA) systems. Distributed control systems typically are used within a single processing or generating plant or over a small geographic area, while SCADA systems typically are used for large, geographically dispersed operations. 
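The monitor-process-command cycle just described can be pictured with a minimal sketch of a control loop. Everything in it is hypothetical: the sensor reading, thresholds, and actuator call are invented for illustration and do not correspond to any real product or facility.

    import random
    import time

    HIGH_LEVEL_FT = 12.0  # shut the pump off above this tank level
    LOW_LEVEL_FT = 9.0    # turn the pump on below this tank level

    def read_tank_level_ft() -> float:
        # Stand-in for polling a remote field sensor.
        return random.uniform(8.0, 14.0)

    def set_pump(on: bool) -> None:
        # Stand-in for energizing an actuator or relay in the field.
        print("pump", "ON" if on else "OFF")

    def scan_cycle() -> None:
        """One monitor-process-command pass of a simple control loop."""
        level = read_tank_level_ft()          # collect a sensor measurement
        print(f"tank level: {level:.1f} ft")  # display operational data
        if level >= HIGH_LEVEL_FT:            # process against thresholds
            set_pump(False)                   # relay a control command
        elif level <= LOW_LEVEL_FT:
            set_pump(True)
        # Between the thresholds the pump keeps its last state (a deadband).

    for _ in range(3):
        scan_cycle()
        time.sleep(0.1)  # real controllers run on fixed scan intervals

Both distributed control systems and SCADA systems implement variations of this same cycle, differing mainly in scale and geographic reach.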
For example, a utility company may use a distributed control system to manage power generation and a SCADA system to manage its distribution. A SCADA system is generally composed of six components: instruments, operating equipment, local processors, short-range communication, host computers, and long-range communications. Instruments sense conditions such as pH, temperature, pressure, power level, and flow rate. Operating equipment includes pumps, valves, conveyors, and substation breakers that can be controlled by energizing actuators or relays. Local processors communicate with the site's instruments and operating equipment. Local processors go by several different names, including programmable logic controller, remote terminal unit, intelligent electronic device, and process automation controller. A single local processor may be responsible for dozens of inputs from instruments and outputs to operating equipment. Local processors can collect instrument data; turn on and off operating equipment; translate protocols so different controllers, instruments, and equipment can communicate; and identify alarm conditions. Short-range communication consists of the relatively short cables or wireless connections that carry analog and discrete signals between the local processors and the instruments and operating equipment. The communication uses electrical characteristics such as voltage and current or other established industrial communications protocols. Host computers are the central point of monitoring and control. The host computer is where a human operator can supervise the process, receive alarms, review data, and exercise control. In some cases the host computer has logic programmed into it to provide control over the local processors. The host computer may be called the master terminal unit, the SCADA server, or a personal computer. Long-range communication links the local processors and host computers, typically covering miles using methods such as leased phone lines, satellite, microwave, and cellular packet data. Figure 3 illustrates the major components of a SCADA system, and figure 4 illustrates how these components would be distributed in a typical water utility, such as a water treatment and distribution system; the components can be adapted to perform specific functions in many industrial sectors. Federal law and policies call for critical infrastructure protection activities to enhance the cyber and physical security of both public and private infrastructures that are essential to national security, national economic security, and national public health and safety. Federal policy designates certain federal agencies as lead points of contact for each key critical infrastructure sector (see table 1). Further, it assigns agencies responsibility for infrastructure protection activities in their assigned sectors and for coordination with other relevant federal agencies, state and local governments, and the private sector. In addition, federal policy establishes DHS as the focal point for the security of cyberspace—including analysis, warning, information sharing, vulnerability reduction, mitigation, and recovery efforts for public and private critical infrastructure information systems. To accomplish this mission, DHS is to work with other federal agencies, state and local governments, and the private sector.
Several key federal plans focus on securing critical infrastructure control systems. The cyberspace strategy calls for DHS and DOE to work in partnership with industry to develop best practices and new technology to increase the security of critical infrastructure control systems, to determine the most critical control systems-related sites, and to develop a prioritized plan for short-term cybersecurity improvements for those sites. In addition, DHS's National Infrastructure Protection Plan specifically identifies control systems as part of the cyber infrastructure, establishes an objective of reducing vulnerabilities and minimizing severity of attacks on these systems, and identifies programs directed at protecting control systems. Further, in May 2007, the critical infrastructure sectors issued sector-specific plans to supplement the National Infrastructure Protection Plan. Twelve sectors, including the chemical, energy, water, information technology, postal, emergency services, and telecommunications sectors, identified control systems within their respective sectors. Of these, most identified control systems as critical to their sector and listed efforts under way to help secure them. Critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the potentially serious impact of an attack as demonstrated by reported incidents. Cyber threats can be unintentional or intentional, targeted or nontargeted, and can come from a foreign, domestic, or inside source. Control systems can have vulnerabilities that make them susceptible to cyber attacks, including the increased connectivity of control systems to other systems and the Internet. Further, based on past events, the impact of a control systems incident on a critical infrastructure could be substantial. Cyber threats can be unintentional or intentional, targeted or nontargeted, and can come from a variety of sources. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and nontargeted attacks. A targeted attack occurs when a group or individual specifically attacks a critical infrastructure system. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or malware is released on the Internet with no specific target. There is increasing concern among both government officials and industry experts regarding the potential for a cyber attack on a national critical infrastructure, including the infrastructure's control systems. The Federal Bureau of Investigation has identified multiple sources of threats to our nation's critical infrastructures, including foreign nation states engaged in information warfare, domestic criminals and hackers, and disgruntled employees working within an organization. Table 2 summarizes those groups or individuals that are considered to be key sources of threats to our nation's infrastructures. Control systems are vulnerable to flaws or weaknesses in system security procedures, design, implementation, and internal controls. When these weaknesses are accidentally triggered or intentionally exploited, they could result in a security breach. Vulnerabilities could occur in control systems' policies, platform (including hardware, operating systems, and control system applications), or networks.
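The three vulnerability categories just named (policies, platform, and networks) can be made concrete with a toy configuration audit. The configuration schema and the checks below are invented for illustration; real assessments use far more extensive criteria.

    # Toy audit of a controller configuration against the three vulnerability
    # categories named above. The schema and rules are illustrative only.
    DEFAULT_PASSWORDS = {"admin", "password", "1234", ""}

    config = {
        "password": "admin",                # platform: default credential
        "patch_age_days": 400,              # platform: stale software
        "remote_access": "dial-up",         # network: remote entry point
        "protocol_encrypted": False,        # network: plaintext commands
        "security_policy_reviewed": False,  # policy: no documented review
    }

    def audit(cfg: dict) -> list[str]:
        findings = []
        if cfg["password"] in DEFAULT_PASSWORDS:
            findings.append("platform: default or blank password in use")
        if cfg["patch_age_days"] > 90:
            findings.append("platform: software patches out of date")
        if cfg["remote_access"] in {"dial-up", "internet"} and not cfg["protocol_encrypted"]:
            findings.append("network: remote access without encryption")
        if not cfg["security_policy_reviewed"]:
            findings.append("policy: security policy never reviewed")
        return findings

    for finding in audit(config):
        print(finding)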
Federal and industry experts believe that critical infrastructure control systems are more vulnerable today than in the past. Reasons include the increased standardization of technologies, the increased connectivity of control systems to other computer networks and the Internet, insecure connections, and the widespread availability of technical information about control systems. Further, it is not uncommon for control systems to be configured with remote access through either a dial-up modem or over the Internet to allow remote maintenance or around-the-clock monitoring. If control systems are not properly secured, individuals and organizations may eavesdrop on or interfere with these operations from remote locations. Reported attacks and unintentional incidents involving critical infrastructure control systems demonstrate that a serious attack could be devastating. Although there is not a comprehensive source for incident reporting, the following attacks, reported in government and media sources, demonstrate the potential impact of an attack. Worcester air traffic communications. In March 1997, a teenager in Worcester, Massachusetts, disabled part of the telephone network using a dial-up modem connected to the system. This disabled phone service to the airport control tower, airport security, the airport fire department, the weather service, and the carriers that use the airport. Also, the tower's main radio transmitter and another transmitter that activates runway lights were shut down, as well as a printer that controllers use to monitor flight progress. The attack also disrupted phone service to 600 homes in a nearby town. Maroochy Shire sewage spill. In the spring of 2000, a former employee of an Australian organization that develops manufacturing software applied for a job with the local government, but was rejected. Over a 2-month period, this individual reportedly used a radio transmitter on as many as 46 occasions to remotely break into the controls of a sewage treatment system. He altered electronic data for particular sewerage pumping stations and caused malfunctions in their operations, ultimately releasing about 264,000 gallons of raw sewage into nearby rivers and parks. Los Angeles traffic lights. According to several published reports, in August 2006, two Los Angeles city employees hacked into computers controlling the city's traffic lights and disrupted signal lights at four intersections, causing substantial backups and delays. The attacks were launched prior to an anticipated labor protest by the employees. In addition, the following incidents illustrate the consequences of nontargeted attacks and unintentional incidents on critical infrastructure control systems. According to experts, incidents such as these could also be triggered by a targeted attack. CSX train signaling system. In August 2003, the Sobig computer virus was blamed for shutting down train signaling systems throughout the East Coast of the United States. The virus infected the computer system at CSX Corporation's Jacksonville, Florida, headquarters, shutting down signaling, dispatching, and other systems. According to an Amtrak spokesman, 10 Amtrak trains were affected. Train service was either shut down or delayed up to 6 hours. Davis-Besse power plant.
The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as Slammer infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant's process computer failed, and it took about 6 hours for it to become available again. Northeast power blackout. In August 2003, failure of the alarm processor in the control system of FirstEnergy, an Ohio-based electric utility, prevented control room operators from having adequate situational awareness of critical operational changes to the electrical grid. This problem was compounded when the state estimating program at the Midwest Independent System Operator failed due to incomplete information on the electric grid. When several key transmission lines in northern Ohio tripped due to contact with trees, they initiated a cascading failure of 508 generating units at 265 power plants across eight states and a Canadian province. Zotob worm. In August 2005, a round of Internet worm infections knocked 13 of DaimlerChrysler's U.S. automobile manufacturing plants offline for almost an hour, leaving workers idle as infected Microsoft Windows systems were patched. Zotob and its variations also caused computer outages at heavy-equipment maker Caterpillar Inc., aircraft maker Boeing, and several large U.S. news organizations. Taum Sauk Water Storage Dam failure. In December 2005, the Taum Sauk Water Storage Dam, approximately 100 miles south of St. Louis, Missouri, suffered a catastrophic failure, releasing a billion gallons of water. According to the dam's operator, the incident may have occurred because the gauges at the dam read differently than the gauges at the dam's remote monitoring station. Bellingham, Washington, gasoline pipeline failure. In June 1999, 237,000 gallons of gasoline leaked from a 16-inch pipeline and ignited an hour and a half later, causing three deaths, eight injuries, and extensive property damage. The pipeline failure was exacerbated by poorly performing control systems that limited the ability of the pipeline controllers to see and react to the situation. Harrisburg, Pennsylvania, water system. In October 2006, a foreign hacker penetrated security at a water filtering plant. The intruder planted malicious software that was capable of affecting the plant's water treatment operations. The infection occurred through the Internet and did not seem to be an attack that directly targeted the control system. Browns Ferry power plant. In August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device. As control systems become increasingly interconnected with other networks and the Internet, and as the system capabilities continue to increase, so do the threats, potential vulnerabilities, types of attacks, and consequences of compromising these critical systems. Critical infrastructure owners face both technical and organizational challenges in securing their control systems. Technical challenges—including control systems' limited processing capabilities and their real-time operations—hinder infrastructure owners' ability to implement traditional information security technologies and practices.
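The tension between real-time operation and security retrofits, which the next paragraphs describe in more detail, can be illustrated with a toy timing experiment: a scan cycle that meets its deadline with bare control logic but misses it once per-message cryptographic work is added. The deadline, workload, and iteration counts are invented, and actual behavior depends on the processor; the point is qualitative.

    import hashlib
    import os
    import time

    SCAN_DEADLINE_MS = 10.0  # hypothetical hard deadline for one scan cycle

    def scan_cycle(crypto_rounds: int = 0) -> float:
        """Return elapsed milliseconds for one simulated scan cycle."""
        start = time.perf_counter()
        data = os.urandom(64)   # stand-in for polling field inputs
        _ = sum(data) > 8000    # stand-in for simple control logic
        for _ in range(crypto_rounds):
            # Stand-in for per-message cryptographic work added by a retrofit.
            data = hashlib.pbkdf2_hmac("sha256", data, b"salt", 10_000)
        return (time.perf_counter() - start) * 1000

    for rounds in (0, 5, 50):
        elapsed = scan_cycle(rounds)
        verdict = "meets" if elapsed <= SCAN_DEADLINE_MS else "misses"
        print(f"{rounds:>3} crypto rounds: {elapsed:8.2f} ms ({verdict} deadline)")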
Organizational challenges include the lack of a compelling business case to improve security and a reluctance to share information regarding incidents. According to industry experts, existing information security technologies and practices—such as strong user authentication and patch management—are generally not implemented in control systems due to several technical issues, including limited computational processing capabilities, the need for real-time operation, and the lack of consideration of cybersecurity in the original design of the system. These challenges are described here in more detail. Limited computational capabilities. Existing security technologies—such as authorization, authentication, encryption, intrusion detection, and filtering of network traffic and communications—require more bandwidth, processing power, and memory than control system components typically have. Controller stations are generally designed to do specific tasks, and they often use low-cost, resource-constrained microprocessors. In addition, passwords and other data from control systems are often transmitted in a plain, unencrypted format. Encrypting this data could overload the processing abilities of the control system. Need for real-time operations. Complex passwords and other strong password practices are not always used to prevent unauthorized access to control systems, in part because they could hinder the operator's ability to respond rapidly during an emergency. As a result, according to security experts, weak passwords that are easy to guess, widely shared, and infrequently changed are common in control systems. Some even use default passwords or no password at all. Design limitations. Historically, control systems vendors did not design their products with security in mind, although recently vendors have begun including more security-related features in their products. In addition, although modern control systems are based on standard operating systems, they are typically customized to support control system applications. Consequently, software patches may either be incompatible with the customized version of the operating system or difficult to implement without compromising service by shutting down "always-on" systems or affecting interdependent operations. Table 3 illustrates the technical challenges in securing control systems by contrasting them with conventional information technology (IT) systems. In addition to the technical challenges of securing control systems, critical infrastructure owners face organizational challenges in securing control systems, including difficulty in developing a compelling business case for improving control systems security, a reluctance to share information on control system incidents (which could help build a business case), and the division of technical responsibilities within an organization. Experts and industry representatives reported that organizations may be reluctant to devote resources to securing control systems. These resources include money, personnel, training, and the early replacement of equipment that may have been originally designed to last 20 years or more. Until industry users of control systems have a business case to justify why additional security is needed, there may be little market incentive for the private sector to develop and implement more secure control systems. Another challenge is the reluctance to share information on control systems incidents and the resulting lack of attention to this risk.
While incidents and attacks on critical infrastructure control systems have occurred, to date there is no authoritative, centralized process for collecting and analyzing information about control systems incidents. Experts we interviewed stated that companies are reluctant to share details of incidents due to factors such as legal liability and impact on their reputation. Several experts stated that they believed incidents were occurring but were not being reported by industry. One expert suggested that since there have been no reports of significant disruptions caused by cyber attacks on U.S. control systems, industry representatives may believe the threat of such an attack is low. We have previously recommended that the government work with the private sector to improve the quality and quantity of information being shared among industries and government about attacks on the nation's critical infrastructures. Another challenge involves the way security responsibilities are structured within organizations that use control systems. Several experts and industry representatives stated that two separate groups often have responsibility for securing control systems: (1) IT security personnel and (2) control system engineers and operators. IT security personnel focus on securing enterprise systems, while control system engineers and operators focus on the reliable performance of their control systems. Because each has a different focus, the two groups face challenges in collaborating to implement secure control systems. For example, IT security personnel may be unaware of the special requirements of a control system, and control systems personnel may be unaware of the full range of security technologies that may be available. Certain challenges are inherent to control systems. However, according to experts, many of these challenges can be addressed by both the private and public sectors through proper implementation of existing technology, development of new technologies, and adoption of organizational policies, procedures, and training. Industry-specific organizations in various sectors, including the electricity, chemical, oil and gas, and water sectors, have initiatives under way to help improve control system security. These initiatives include developing standards, publishing guidance, and hosting workshops. The electricity system of the United States and Canada has more than $1 trillion in asset value, more than 200,000 miles of transmission lines, and more than 800,000 megawatts of generating capability serving over 300 million people. The effective functioning of this infrastructure is highly dependent on control systems. As a result, private sector organizations in the electricity sector have several activities under way related to control systems security, including establishing mandatory reliability standards, developing guidelines for compliance with these standards, hosting workshops, and other activities. See table 4 for a description of key control systems security initiatives in the electricity sector.
The Electric Power Research Institute performs research on policies and procedures for securing control systems, although it has not been able to develop security technology for control systems given current funding levels. The institute's security research has included reviews of SCADA systems, work on securing certain products used by the electric power industry, analysis of how a facility could recognize and recover from a control systems attack, and studies of the use of wireless technology for SCADA systems and its inherent security risks. The institute has also worked on control systems-related projects with the national laboratories and has collaborated with DOE. For example, in 2006, the institute worked with the Pacific Northwest National Laboratory to identify the risks and vulnerabilities associated with using broadband communications for control systems and to develop mitigation strategies. According to a laboratory official, the institute and the laboratory are currently working on a project on electric power utilities' use of wireless technologies. The project is to produce two papers addressing best practices for wireless deployment in the electric sector, and guidelines for securing wireless networks, training personnel, and securely integrating wireless and wired networks. The Institute of Electrical and Electronics Engineers, which develops international standards for telecommunications, IT, and power generation products and services, has several working groups that address issues related to control systems security in the electric power industry. Some of these working groups are developing standards for defining, specifying, and analyzing control systems. For example, the institute is developing P1689, a standard for retrofitting cybersecurity to various communications links in a control system, and P1711, a cryptographic standard for the same links. The institute is also developing P1686, which will define the functions and features to be provided in substation intelligent electronic devices to accommodate critical infrastructure protection programs. The International Electrotechnical Commission prepares and publishes international standards for all electrical, electronic, and related technologies, and World Trade Organization agreements permit the use of these standards in international trade. The commission's Technical Committee 57 is working to develop standards for control systems and control system components of power transmission and distribution systems, including communications and end devices called remote terminal units. It is also establishing data and communications security and communications standards for substations. The commission's Technical Committee 65 is chartered to produce standards in the area of industrial process measurement and control. Working Group 10 of that committee is developing commission standard 62443, a three-part standard that will address network and system cybersecurity of industrial process measurement and control systems. Control systems are used to monitor and control processes within the chemical industry. A $460 billion critical infrastructure sector, the chemical industry contributes nearly 3 percent of the U.S. gross domestic product and generates 6.2 million jobs. Chemical reactors may use control systems to produce chemicals or regulate temperatures within the production process. The American Chemistry Council is a trade association that represents major companies in the U.S. chemical manufacturing sector.
The council supports research and initiatives related to federal regulation on health, safety, security, and the environment. The council established a Chemical Sector Cyber Security Program in 2002 to facilitate implementation of the Chemical Sector Cyber Security Strategy. The strategy, updated in 2006, and the related Guidance for Addressing Cyber Security in the Chemical Industry address manufacturing and control systems security efforts and provide guidance on how to secure these systems. Further, within the cybersecurity program, the Manufacturing and Control Systems Security Work Team was established to collect, identify, and facilitate the use of practices for securing manufacturing and control systems and to establish a network of manufacturing and control systems subject matter experts. The United States has more than 2 million miles of pipelines delivering oil and natural gas. In 2005, U.S. natural gas consumption totaled about 22,000 billion cubic feet, and U.S. petroleum consumption averaged 20,802,000 barrels per day. Both the gas and oil industries use control systems for process management and monitoring purposes. Employing integrated control systems, these industries can control refining operations at a plant site, remotely monitor the pressure and flow of gas pipelines, and control the flow and pathways of gas transmission. The sector-specific plan for the energy sector (which includes oil and gas) includes a discussion of selected control systems security efforts within the sector. The oil and gas sector has multiple control systems security activities under way, in particular, standards relating to the security of control systems. See table 6 for a description of key control systems security efforts in the oil and gas sector. The water sector includes drinking water and water treatment systems. The sector's infrastructures are diverse, complex, and distributed, ranging from systems that serve a few customers to those that serve millions. The sector includes about 150,000 water, wastewater, and storm water organizations; federal water offices at the national, regional, and state levels belonging to several agencies; some 100 state water agency organizations; and many other local government water organizations. Members of the water sector have worked with the Environmental Protection Agency on development of the Water Sector-Specific Plan, which includes some efforts on control systems security. Members of the water sector are also participating in the Process Control Systems Forum's activities. See table 7 for a list of key control system security initiatives by various organizations in the water sector. Other organizations are working on efforts to improve control systems security that are not sector-specific. The organization formerly known as the Instrumentation, Systems, and Automation Society, now called ISA, is working on control systems security efforts, and InfraGard, a nonprofit organization associated with the Federal Bureau of Investigation, has recently started a control systems-related effort. See table 8 for a description of these initiatives. Over the past few years, federal agencies—including DHS, DOE, NIST, FERC, and others—have initiated efforts to improve the security of critical infrastructure control systems. However, DHS has not yet established a strategy to coordinate the various control systems activities across federal agencies and the private sector.
Further, more can be done to address specific weaknesses in DHS's ability to share information on control systems vulnerabilities. Until DHS develops an overarching strategy, there is an increased risk that the federal government and private sector will invest in duplicative initiatives and miss opportunities to learn from other organizations' activities. Further, until DHS addresses specific weaknesses in sharing information, there is an increased risk that the agency will not be able to effectively carry out its responsibility for sharing information on vulnerabilities and that there could be a disruption to our nation's critical infrastructures. There are many federal efforts under way to help improve the security of critical infrastructure control systems. For example, DHS is sponsoring multiple control systems security initiatives across critical infrastructure sectors, including a program to improve control systems cybersecurity that provides vulnerability reporting and response, activities to promote security awareness within the control systems community, and efforts to build relationships with control systems vendors and infrastructure asset owners. See appendix II for a detailed description of DHS's key initiatives and projects involving control systems security. Additionally, DOE sponsors control systems security efforts within the electric, oil, and natural gas industries. These efforts include the National SCADA Test Bed Program, which funds testing, assessments, and training in control systems security, as well as the development of a road map for securing control systems in the energy sector. Also, several of DOE's national laboratories play an important role in implementing many DHS and DOE efforts and provide support directly to asset owners and vendors. For example, the national laboratories perform site assessments, test vendor equipment, and conduct outreach and awareness activities for infrastructure asset owners and vendors. See appendix III for more information on DOE's initiatives. Other federal agencies, such as NIST and FERC, have also undertaken efforts to help secure control systems. For example, NIST is working with federal and industry stakeholders to develop standards, guidelines, checklists, and test methods to help secure critical control systems, while FERC is working to implement electricity reliability standards that address control systems. See appendix IV for more information on these and other initiatives. Several industry experts we spoke with stated that many federal programs in control systems security have been helpful. For example, experts stated that developing the road map was a positive step for the energy sector. An official who participated in the development of DOE's road map stated that the process succeeded in identifying industry needs, that it served as a catalyst for bringing agencies and government coordinating councils together, and that it would be a good idea for other industries to develop similar plans. In addition, experts we interviewed said the testing and site assessments conducted by the national laboratories for DHS and DOE made individual products more secure and helped improve overall attention to control systems security. However, the federal government does not yet have an overall strategy for guiding and coordinating control systems security efforts across the multiple agencies and sectors.
To evaluate activities related to critical infrastructure protection, we developed a risk management framework for protecting critical infrastructures based on the standards and practices of leading organizations. The first phase of this framework is the development of a strategy that includes the goals, objectives, constraints, specific activities, milestones, and performance measures needed to achieve a particular end result. In 2004, we reported that federal agencies, standards organizations, and the private sector were leading various initiatives on control systems security but lacked the coordination and oversight needed to effectively improve the cybersecurity of the nation's control systems. We recommended that DHS develop and implement a strategy for coordinating control systems security efforts among government agencies and the private sector. DHS agreed with our recommendation to develop a control systems security strategy and, in 2004, issued a strategy that focuses primarily on DHS's initiatives. However, the strategy does not include ongoing work by DOE, FERC, NIST, and others. Further, it does not include the various agencies' responsibilities, goals, milestones, or performance measures. Agency officials stated they have convened a federal working group that will develop a list of control systems security activities across the government. Further, in commenting on a draft of this report, DHS officials stated that this baseline list of activities will serve as the foundation for a comprehensive strategy across the public and private sectors. However, they did not provide a date for when the baseline and the comprehensive strategy would be completed. In addition, they did not state whether the list or the strategy would include responsibilities, goals, milestones, or performance measures. Until DHS develops an overarching strategy that delineates the various public and private entities' roles and responsibilities and uses it to guide and coordinate control systems security activities, the federal government and private sector risk investing in duplicative activities and missing opportunities to learn from other organizations' activities. DHS is responsible for sharing information with critical infrastructure owners on control systems vulnerabilities but faces challenges in doing so. In 2006, DHS developed a formal process for managing control systems vulnerabilities reported to the U.S. Computer Emergency Readiness Team (US-CERT). DHS gathers this information and works with vendors and others to identify mitigation strategies. It then releases this information to critical infrastructure owners and operators, control systems vendors, and the public. However, DHS's sharing of sensitive information on control systems has to date been limited. As of June 2007, US-CERT had issued only nine notices related to control systems security since the inception of the control systems security program in 2003. DHS's information sharing is limited in part because of reluctance by those in the private sector to inform the agency of vulnerabilities they have identified and in part because of weaknesses in DHS's ability to disseminate potentially sensitive information to the private sector. We previously reported on difficulties DHS has had in collecting information from, and sharing it with, the private sector. Industry officials stated that they are reluctant to share information about incidents because of uncertainties about how the information will be used and the value of reporting such incidents.
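The vulnerability-handling process described above, from a report's arrival at US-CERT through validation, mitigation, and release, can be thought of as a staged pipeline. The following short Python sketch models such a pipeline; it is purely illustrative, and the stage names, fields, and example entries are our assumptions rather than DHS's actual process or data.

    from dataclasses import dataclass, field
    from enum import Enum

    class Stage(Enum):
        REPORTED = 1    # vulnerability reported to the coordination center
        VALIDATED = 2   # reproduced and confirmed with the vendor
        MITIGATION = 3  # mitigation strategy developed with vendor and owners
        RELEASED = 4    # notice released to owners, operators, and the public

    @dataclass
    class VulnerabilityReport:
        identifier: str
        affected_product: str
        stage: Stage = Stage.REPORTED
        history: list = field(default_factory=list)

        def advance(self, note: str) -> None:
            # Record what happened at the current stage, then move to the
            # next one, preserving an auditable trail of each handling step.
            self.history.append((self.stage.name, note))
            if self.stage is not Stage.RELEASED:
                self.stage = Stage(self.stage.value + 1)

    report = VulnerabilityReport("VU-0001", "example SCADA historian")
    report.advance("reproduced in test bed")
    report.advance("patch and workaround identified with vendor")
    report.advance("notice issued to owners and operators")
    print(report.stage.name)   # RELEASED
    print(report.history)

Tracking each report's stage this way would also make it straightforward to measure how long sensitive information waits before release, the kind of delay discussed next.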
In addition, DHS lacks a rapid, efficient process for disseminating sensitive information to private industry owners and operators of critical infrastructures. An agency official noted that sharing information with the private sector can be slowed by staff turnover and vacancies at DHS, the need to brief agency and executive branch officials and congressional staff before briefing the private sector, and difficulties in determining the appropriate classification level for the information. DHS's control systems security program manager acknowledged the need to share information more quickly. In commenting on a draft of this report, DHS officials stated that after the start of our review, the agency began developing a process to formalize and improve information sharing. However, this process was not evident during our review, and DHS did not provide evidence of the process or examples of how it had actually been used to share information. Until DHS establishes an approach for rapidly assessing the sensitivity of vulnerability information and disseminating it—and thereby demonstrates the value it can provide to critical infrastructure owners—the agency's ability to effectively serve as a focal point for the collection and dissemination of sensitive vulnerability information will continue to be limited. Without a trusted focal point for sharing sensitive information on vulnerabilities, there is an increased risk that attacks on control systems could cause a significant disruption to our nation's critical infrastructures. Control systems are an essential component of our nation's critical infrastructure. Past incidents involving control systems, system vulnerabilities, and growing threats from a wide variety of sources highlight the risks facing these systems. The public and private sectors have begun numerous activities to improve the cybersecurity of these systems. However, the federal government lacks an overall strategy for coordinating public and private sector efforts, and DHS lacks an efficient process for sharing sensitive information on vulnerabilities with private sector critical infrastructure owners. Until an overarching strategy is in place, the public and private sectors risk undertaking duplicative efforts. Also, without a streamlined process for advising private sector infrastructure owners of vulnerabilities, DHS is unable to fulfill its responsibility as a focal point for disseminating this information. If key vulnerability information is not in the hands of those who can mitigate its potentially severe consequences, there is an increased risk that attacks on control systems could cause a significant disruption to our nation's critical infrastructures. To improve federal government efforts to secure control systems governing critical infrastructure, we recommend that the Secretary of the Department of Homeland Security implement the following two actions: (1) develop a strategy to guide efforts for securing control systems, including agencies' responsibilities, as well as overall goals, milestones, and performance measures; and (2) establish a rapid and secure process for sharing sensitive control system vulnerability information with critical infrastructure control system stakeholders, including vendors, owners, and operators. We received comments via e-mail on a draft of this report from DHS officials, including the Deputy Director of the National Cyber Security Division. In the comments, agency officials neither agreed nor disagreed with our recommendations.
Instead, they stated that DHS would take the recommendations under advisement. Additionally, officials stated that the agency has recently begun working with its partners in the Federal Control System Security Working Group to establish a baseline of ongoing activities. This baseline is to serve as a foundation for developing a comprehensive strategy that will encompass the public and private sectors, set a vision for securing control systems, describe roles and responsibilities, and identify future requirements for resources and action. Moreover, officials stated that the agency has recently developed a process to formalize the sharing of sensitive information related to control systems vulnerabilities. The officials reported that this process describes the flow of information from vulnerability discovery through validation, public and private coordination, and outreach and awareness, and that it identifies the deliverables and outcomes expected at each step. While DHS's intention to develop a comprehensive public/private strategy is consistent with our recommendation, the agency did not provide a date by which this strategy will be completed. Until DHS completes the comprehensive strategy, the public and private sectors risk undertaking duplicative efforts. Additionally, while DHS officials stated that the agency had developed a process for sharing sensitive information on control system vulnerabilities, it did not have such a process in place during our review, nor has the agency provided evidence of the process or of its use to share information. Until such a process is formalized and implemented, key vulnerability information may not be available to those who can mitigate its potentially severe consequences, thereby increasing the risk that attacks on control systems could cause a significant disruption to our nation's critical infrastructures. DHS officials and officials from other agencies who contributed to this report provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, this report will be available at no charge on GAO's Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact Dave Powner at (202) 512-9286 or Keith Rhodes at (202) 512-6412, or by e-mail at pownerd@gao.gov and rhodesk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to (1) determine cyber threats, vulnerabilities, and the potential impact of attacks on critical infrastructure control systems; (2) determine the challenges to securing critical infrastructure control systems; (3) identify private sector initiatives to strengthen the cybersecurity of control systems; and (4) assess the adequacy of public sector initiatives to strengthen the cybersecurity of control systems.
To determine the cyber threats, vulnerabilities, and the potential impact of attacks on critical infrastructure control systems, we reviewed prior GAO reports on control systems as well as reports prepared by other government agencies and private organizations, including documentation of prior control system security incidents. We conducted interviews with individuals in the private sector, including representatives of private companies that operate control systems. These individuals were selected based on their knowledge of and participation in both private and public sector control system security activities. We also met with representatives from trade associations and federal agencies. On the basis of the information and documentation we received from these individuals, and information we collected during site visits to three of the national laboratories, we were able to compile information on the cyber threats, vulnerabilities, and the potential impact of attacks on critical infrastructure control systems. To determine the challenges to securing critical infrastructure control systems, we reviewed prior GAO reports and testimonies and materials written by other public and private organizations on control systems security, critical infrastructure protection, and national preparedness. We conducted interviews with experts and industry representatives, including managers of federal control systems programs at the Department of Homeland Security (DHS) and Department of Energy (DOE), experts from the national laboratories, vendors, owners and operators, and standards and trade associations. To identify the private sector initiatives to strengthen cybersecurity of control systems, we researched current standards and accepted trade practices and analyzed current efforts to better secure control systems. We spoke to private sector owners and operators, vendors, trade associations, industry experts, and standards associations. These organizations included the North American Electric Reliability Corporation (NERC), the American Gas Association, and ISA. To assess the adequacy of public sector initiatives to strengthen the cybersecurity of control systems, we researched relevant federal laws and regulations and initiatives by federal agencies to better secure control systems, and reviewed documentation and project plans on federal control systems efforts. We also reviewed GAO’s prior work analyzing best practices from leading organizations and interviewed private sector and other experts in control systems security for their perspectives on federal efforts. We interviewed officials from federal agencies including DHS, DOE, the National Institute of Standards and Technology (NIST), and the Federal Energy Regulatory Commission (FERC). In addition, we visited three of the national laboratories that are leading control systems security research and outreach efforts. These labs were selected because of their extensive participation in DOE and DHS control systems security programs. We then compared the activities of federal agencies with best practices and the perspectives of experts. Our work was conducted from March 2007 to July 2007 at agencies’ headquarters in Washington, D.C., and at national laboratories in Idaho, New Mexico, and Washington state in accordance with generally accepted government auditing standards. DHS supports multiple control systems security initiatives across government and the private sector. 
Table 9 lists key initiatives and projects conducted by DHS in control system security. Since 2003, the Department of Energy’s Office of Electricity Delivery and Energy Reliability has led control systems security efforts within the electric, oil, and natural gas industries by establishing the National SCADA Test Bed Program and developing a 10-year strategic framework for securing control systems in the energy sector. DOE’s national laboratory facilities also play an important role in control systems security research. In particular, the Idaho National Laboratory, Sandia National Laboratories, and the Pacific Northwest National Laboratory lead key efforts in control systems security research for DOE, DHS, and other public and private organizations. In 2004, DOE launched the National SCADA Test Bed Program, a multilaboratory effort to identify control systems vulnerabilities, conduct control systems research and development, and provide cybersecurity training and outreach to industry. The test bed program includes five DOE laboratories and has a budget of $10 million for fiscal year 2007. To date, the test bed program has completed 12 control systems vulnerability assessments in cooperation with control systems vendors and energy sector owners and operators. As a result of these assessments, the test bed team has provided vendors with recommendations to improve control systems security, and owners and operators with strategies for mitigating existing system security risks. The test bed program also has 10 ongoing control systems research and development projects that are peer-reviewed biannually to ensure they meet the needs of the government and the end users. In addition to its testing and research efforts, the program has led training workshops on control systems security for over 1,500 industry personnel, and has established a working group to evaluate control systems security standards in the energy sector. In January 2006, DOE released the Roadmap to Secure Control Systems in the Energy Sector, a collaborative public-private strategy for securing control systems infrastructures over the next 10 years. Developed jointly by energy owners and operators, researchers, vendors, and the government, the road map links near-, mid-, and long-term security needs with four main goals: (1) measure and assess the current security posture; (2) develop and integrate protective measures; (3) detect intrusion and implement response strategies; and (4) sustain security improvements. The road map outlines the energy sector’s top control systems security concerns and existing mitigation efforts, and is serving as a model for other sectors to develop similar plans. For example, in January 2007, DHS’s National Infrastructure Advisory Council recommended that DHS and the sector-specific agencies develop plans using DOE’s road map as a model. DOE has used the road map to align its test bed projects with strategic goals. In addition, DOE has created an online road map that uses the strategic framework to track public and private sector control systems security projects. DOE owns 17 laboratories and research facilities around the country that play an important role in control systems security research. In particular, the Idaho National Laboratory, the Sandia National Laboratories, and the Pacific Northwest National Laboratory manage and conduct key efforts in control systems security research for DOE, DHS, and other public and private organizations. 
Using a number of unique research facilities, the laboratories test control systems equipment and conduct work for DHS, DOE, and other organizations. For example, the Idaho National Laboratory operates its own electrical power transmission facility, which consists of 61 miles of high-voltage transmission lines, feeders, transformers, and independent substations (see fig. 5). According to laboratory officials, because portions of the transmission facility are easy to separate from the overall power grid, control systems equipment can be tested on the facility without fear of affecting the larger power grid. The Pacific Northwest National Laboratory has the Electricity Infrastructure Operations Center, a replica of a typical operations center used in the electric industry, with consoles, displays, hardware, and software that can be used for control of electricity transmission (see fig. 6). The center receives live transmission data from actual utility control systems and is used as a platform for research, development, and demonstration. The national laboratories manage key efforts for DHS related to control systems security. For example, the Idaho National Laboratory is the lead laboratory supporting and executing the DHS Control Systems Security Program. According to laboratory officials, the laboratories coordinate activities funded through DHS with those funded through DOE's National SCADA Test Bed. For example, the Idaho National Laboratory has conducted five vendor assessments and six site assessments using DHS funds and eight vendor assessments and four site assessments using DOE funds. Additionally, the Idaho, Pacific Northwest, and Sandia National Laboratories have developed training for asset owners and operators. The Idaho National Laboratory has developed 4- and 8-hour classes on control systems security that it has given to approximately 1,500 industry personnel since 2005. In 2006, the Pacific Northwest National Laboratory developed online control systems security awareness training that has been published on US-CERT's Web site. In 2007, Sandia National Laboratories developed training to educate owners and operators on how to effectively use red teaming to improve the security posture of their control systems. Further, the Idaho National Laboratory has worked with George Mason University and New York University to develop a draft master's-level course curriculum on critical infrastructure and control systems security. Under DOE's National SCADA Test Bed Program, the national laboratories have worked both independently and collaboratively on performing vendor vulnerability assessments, conducting control systems research and development, and leading industry training and outreach. For example, between 2004 and 2007, the Idaho National Laboratory conducted assessments of eight different control systems for the electricity sector. According to laboratory officials, vendors provide the laboratory with the hardware, software, and training necessary to run the control system; this represents a $1 million to $1.5 million investment by the vendor. Largely on the basis of the results of these assessments, vendors have chosen to develop system patches, reconfigure system architectures, and build enhanced systems, which have been retested by the laboratory.
Furthermore, according to an agency official, the results of the vendor assessments have helped inform other federal control systems efforts, such as the development of the control systems self-assessment tool. In addition, the Idaho National Laboratory has conducted four on-site control system assessments for electricity sector owners and operators. Beyond the vendor assessments, the laboratories are engaged in 10 research projects intended to help industry stakeholders analyze control systems operations and improve the security and reliability of control systems architectures. For example, the Pacific Northwest National Laboratory has developed a technology that encapsulates control systems communications between two devices with a unique identifier and an authenticator, enabling the devices to verify that the communication has not been tampered with. Unlike comparable technologies for standard information technology (IT) systems, this authentication technology does not require substantial amounts of bandwidth or processing power, so it has the potential to be applied to both new and older control systems. In addition, the Idaho, Pacific Northwest, and Sandia National Laboratories are working on identifying vulnerabilities in the current communications protocol used between control centers, testing mitigation techniques, and, ultimately, assisting industry in implementing a secure version of the protocol.
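To illustrate the general idea behind such an encapsulation approach, the following Python fragment wraps each control message with a device identifier, a sequence number, and a short keyed authenticator so the receiver can detect both tampering and replayed frames at little computational cost. This is a simplified sketch of the concept only, not the laboratory's actual design; the framing, field sizes, and pre-shared key are our assumptions.

    import hashlib
    import hmac
    import struct

    # Hypothetical per-link pre-shared key; key provisioning is out of scope.
    KEY = b"per-link-preshared-key"

    def encapsulate(device_id: int, seq: int, payload: bytes) -> bytes:
        # Prefix a 2-byte device identifier and 4-byte sequence number, then
        # append a truncated HMAC tag computed over the header and payload.
        header = struct.pack(">HI", device_id, seq)
        tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
        return header + payload + tag

    def decapsulate(frame: bytes, last_seq: int) -> bytes:
        header, payload, tag = frame[:6], frame[6:-8], frame[-8:]
        expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("authentication failed: frame altered in transit")
        device_id, seq = struct.unpack(">HI", header)
        if seq <= last_seq:
            raise ValueError("stale sequence number: replayed frame rejected")
        return payload

    frame = encapsulate(device_id=7, seq=42, payload=b"READ register 1001")
    print(decapsulate(frame, last_seq=41))   # b'READ register 1001'

Because the payload itself is left untouched, a wrapper of this kind can ride along existing links and legacy equipment, which is consistent with the report's point that the technique suits both new and older control systems.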
In addition to their work for DHS and DOE, the laboratories have conducted control systems security work for other public and private organizations, including research, security assessments, and training. For example, the laboratories have performed security assessments of control systems for federal operators of critical infrastructure, including the Bureau of Reclamation, the Tennessee Valley Authority, the Bonneville Power Administration, and the Strategic Petroleum Reserve, as well as for private sector utility companies. Moreover, the Pacific Northwest National Laboratory worked with the Nuclear Regulatory Commission and the Nuclear Energy Institute to develop a self-assessment methodology for nuclear plants to determine compliance with standards. In addition to DHS and DOE, multiple other federal agencies and entities are working to help secure critical infrastructure control systems. Initiatives undertaken by the Federal Energy Regulatory Commission, the National Institute of Standards and Technology, the Environmental Protection Agency, and others are described here. Under the Energy Policy Act of 2005, the Federal Energy Regulatory Commission (FERC) was authorized to (1) appoint an electricity reliability organization to develop and enforce mandatory electricity reliability standards, including cybersecurity standards, and (2) approve, remand, or require modification to each proposed standard. The agency may also direct the reliability organization to develop a new standard or modify existing standards. Both the agency and the reliability organization have the authority to enforce approved standards, investigate incidents, and impose penalties (up to $1 million a day) on noncompliant electricity asset users, owners, or operators. FERC has conducted several activities to begin implementing the requirements of the act. In July 2006, FERC certified the North American Electric Reliability Corporation (NERC) as the electric reliability organization. In December 2006, FERC released a staff assessment of NERC's eight Critical Infrastructure Protection (CIP) reliability standards, which include standards for control systems security. FERC found that while the standards were a good start, a number of items required improvement, including ambiguous language for standards requirements, measurability, and degrees of compliance; insufficient technical requirements to ensure grid reliability; and the use of "fill-in-the-blank" standards, which are not enforceable. NERC agreed that the standards represented a starting point and has proposed a work plan to address the deficiencies. In July 2007, FERC issued a notice of proposed rulemaking in which it proposed to approve the eight CIP reliability standards while directing NERC to modify the areas of these standards that require improvement. After considering public comments on the notice, which are due in late September 2007, FERC plans to issue its final rule on the CIP reliability standards. The National Institute of Standards and Technology (NIST) is working with federal and industry stakeholders to develop standards, guidelines, checklists, and test methods to help secure critical control systems. For example, NIST is currently developing guidance for federal agencies that own or operate control systems to help them comply with federal information system security standards and guidelines. The guidance identifies issues and modifications to consider in applying information security standards and guidelines to control systems. Table 10 lists key NIST efforts. The Environmental Protection Agency (EPA) assisted DHS in developing a control systems self-assessment tool, a software program that helps owners and operators identify control systems vulnerabilities and mitigation strategies for addressing them. EPA began work on a water security assessment tool in response to the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, which required the agency to conduct vulnerability assessments of community water systems serving more than 3,300 individuals. EPA's preliminary work in this area served as the foundation for DHS's Control Systems Cyber Security Self Assessment Tool project. The agency initially launched the tool within the water sector in July 2007. In addition, EPA actively participates in control systems security information sharing through the Water Information Sharing and Analysis Center and DHS's Homeland Infrastructure Threat and Risk Analysis Center and has been involved with control systems standards development efforts. The Federal Bureau of Investigation's Cyber Crime division participates in DHS's US-CERT program and coordinates with DHS's National Cyber Security Division on general cybersecurity issues. According to an agency official, the Cyber Crime division is in the process of establishing a control systems work group within its Intelligence and Information Sharing group. In addition, since 1996, the bureau's cyber division has sponsored InfraGard, a cooperative government and private sector program to exchange information about infrastructure threats and vulnerabilities. As previously mentioned, SCADAGard, a special interest group within InfraGard, is to be used to share information with control systems owners and operators who have been vetted by the bureau.
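At a high level, a self-assessment tool of the kind EPA and DHS developed, described above, walks an owner or operator through a weighted checklist and flags gaps for mitigation. The following minimal Python sketch conveys the idea only; the questions, weights, and scoring are invented for illustration and are not drawn from the EPA or DHS tool.

    # Invented checklist of (question, weight) pairs; a higher weight means
    # a greater security risk if the practice is absent.
    CHECKLIST = [
        ("Is the control network separated from the corporate network by a firewall?", 3),
        ("Have vendor default passwords been changed on all controller stations?", 3),
        ("Is remote access to the control system limited, authenticated, and logged?", 2),
        ("Is there a tested procedure for applying security patches to control systems?", 2),
        ("Are control systems security incidents recorded and reviewed internally?", 1),
    ]

    def assess(answers):
        """answers: one boolean per checklist question, in order."""
        total = sum(weight for _, weight in CHECKLIST)
        score = sum(w for (_, w), ok in zip(CHECKLIST, answers) if ok)
        gaps = [q for (q, _), ok in zip(CHECKLIST, answers) if not ok]
        return score, total, gaps

    score, total, gaps = assess([True, False, True, False, True])
    print(f"Security posture score: {score} of {total}")
    for question in gaps:
        print("Mitigation needed:", question)

A real tool would, of course, cover far more questions and tailor its mitigation guidance to the system being assessed; the value of the approach is that it gives owners and operators a repeatable way to find and prioritize vulnerabilities.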
The Nuclear Regulatory Commission has also conducted several activities related to enhancing the cybersecurity of control systems. The commission, which has regulatory authority over nuclear power plant safety control systems, completed a cybersecurity self-assessment project with technical assistance from the Pacific Northwest National Laboratory in October 2004 and documented the results in two technical reports published in 2004 and 2005. According to agency officials, on the basis of the information in these reports, a nuclear industry task force developed NEI 04-04, Cyber Security Program for Power Reactors, to provide nuclear power reactor licensees a means for developing and maintaining effective cybersecurity programs at their sites. In December 2005, the commission's staff accepted this document as an acceptable method for establishing and maintaining cybersecurity programs at nuclear power plants. In January 2006, the commission issued a revision to Regulatory Guide 1.152, Criteria for Use of Computers in Safety Systems of Nuclear Power Plants, which provides cybersecurity-related guidance for the design of nuclear power plant safety systems. In addition, the commission has initiated a rulemaking process to establish security requirements for digital computer and communication networks, including systems that are needed for safety, security, or emergency response. The public comment period for this rulemaking closed in March 2007. According to agency officials, by May 2007 all nuclear plants had completed an inventory and assessment of their critical digital systems. Agency officials stated that the commission staff plans to conduct oversight inspections after completion of the ongoing security-related rulemaking, which will clearly establish the requirements for nuclear power plant cybersecurity programs. In addition to those named above, Scott Borre, Heather A. Collins, Neil J. Doherty, Vijay D'Souza, Nancy Glover, Sairah Ijaz, Patrick Morton, and Colleen M. Phillips (Assistant Director) made key contributions to this report.
Control systems—computer-based systems that monitor and control sensitive processes and physical functions—perform vital functions in many of our nation's critical infrastructures, including electric power, oil and gas, water treatment, and chemical production. The disruption of control systems could have a significant impact on public health and safety, which makes securing them a national priority. GAO was asked to (1) determine cyber threats, vulnerabilities, and the potential impact of attacks on critical infrastructure control systems; (2) determine the challenges to securing these systems; (3) identify private sector initiatives to strengthen the cybersecurity of control systems; and (4) assess the adequacy of public sector initiatives to strengthen the cybersecurity of control systems. To address these objectives, we met with federal and private sector officials to identify risks, initiatives, and challenges. We also compared agency plans to best practices for securing critical infrastructures. Critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the serious potential impact of attacks, as demonstrated by reported incidents. Threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. Control systems are more vulnerable to cyber attacks than in the past for several reasons, including their increased connectivity to other systems and the Internet. Further, as demonstrated by past attacks and incidents involving control systems, the impact on a critical infrastructure could be substantial. For example, in 2003, a computer virus was blamed for shutting down train signaling systems throughout the East Coast, and in 2006, a foreign hacker was reported to have planted malicious software capable of affecting a water filtering plant's treatment operations. Critical infrastructure owners face both technical and organizational challenges to securing control systems. Technical challenges—including control systems' limited processing capabilities, real-time operations, and design constraints—hinder an infrastructure owner's ability to implement traditional information technology security processes, such as strong user authentication and patch management. Organizational challenges include difficulty in developing a compelling business case for investing in control systems security and the differing priorities of information security personnel and control systems engineers. Multiple private sector entities, such as trade associations and standards-setting organizations, are working to help secure control systems. Their efforts include developing standards, providing guidance to members, and hosting workshops on control systems security. For example, the electricity industry has recently developed standards for cybersecurity of control systems, and a gas trade association is developing guidance for members on using encryption to secure control systems. Federal agencies also have multiple initiatives under way to help secure critical infrastructure control systems, but more remains to be done to coordinate these efforts and to address specific shortfalls. Over the past few years, federal agencies—including the Department of Homeland Security, the Department of Energy, and the Federal Energy Regulatory Commission (FERC)—have initiated efforts to improve the security of critical infrastructure control systems.
However, there is as yet no overall strategy to coordinate the various activities across federal agencies and the private sector. Further, DHS lacks processes needed to address specific weaknesses in sharing information on control system vulnerabilities. Until public and private sector security efforts are coordinated by an overarching strategy and specific information sharing shortfalls are addressed, there is an increased risk that multiple organizations will conduct duplicative work and miss opportunities to fulfill their critical missions.
Under its constitutional authority to regulate commerce with foreign nations, the Congress has enacted laws authorizing the President to enter into trade agreements with other countries to reduce tariff and nontariff barriers. One major recent law providing this authority is the Bipartisan Trade Promotion Authority Act of 2002 (TPA). The TPA legislation sets forth U.S. trade negotiating objectives that apply to negotiating FTAs. However, the TPA legislation does not impose any specific criteria on the President for choosing FTA partners, except that the President must take into account the extent to which the negotiating partner has implemented, or has accelerated implementation of, its WTO obligations. Other trade legislation encourages the pursuit of FTA negotiations. For example, in the 2000 African Growth and Opportunity Act, the Congress declared that FTAs should be negotiated with interested sub-Saharan African countries. Furthermore, in the United States-Caribbean Basin Trade Partnership Act, the Congress declared that it was the policy of the United States to seek the participation of Caribbean Basin beneficiary countries in the FTAA or another FTA, with the goal of achieving full participation in any such agreement by 2005. USTR, the President's principal trade policy advisor and coordinator, has the lead responsibility for the formulation and coordination of trade policy; the negotiation of trade agreements, including FTAs; and the enforcement of trade agreements. Under the Trade Expansion Act of 1962, President John F. Kennedy established an interagency trade policy organization, chaired by USTR, to assist with these and other trade responsibilities. Currently, this organization consists of three tiers of committees, which from the lowest tier to the highest are the Trade Policy Staff Committee (TPSC), the Trade Policy Review Group (TPRG), and the National Security Council/National Economic Council (NSC/NEC). Within this framework, USTR coordinates with Commerce, Agriculture, State, Treasury, and other U.S. agencies as issues needing their expertise arise. The United States currently has five FTAs covering six nations: Israel (1985), Canada (1989), Mexico (1994), Jordan (2001), Singapore (2003), and Chile (2003). The United States has already begun negotiating four more bilateral or subregional FTAs with Central America, the Southern Africa Customs Union (SACU), Australia, and Morocco. USTR has announced that it plans to negotiate FTAs with the Dominican Republic; Bahrain; Panama; and the Andean countries of Colombia, Peru, Ecuador, and Bolivia. In addition, in October 2003, the President announced the U.S.'s intent to negotiate an FTA with Thailand. Other countries are under consideration as FTA partners. For a general time line of U.S. FTAs since 1985, see figure 1. The factors used to select FTA negotiating partners have evolved since 2002. According to the Trade Representative and other U.S. officials, the Trade Representative chose the first four FTA partners on the basis of his own evaluation of factors and after consulting with the President and certain other high-level officials in several other agencies. Subsequently, the NSC coordinated the views of key trade agencies, which decided to use six factors in a revised interagency process to recommend proposed FTA partners to the President.
The Trade Representative told us that his early FTA proposals emerged from his evaluation of 13 factors he developed over time—the same factors that the Trade Representative and other USTR officials continue to use. However, he cautioned that these factors "carry no coefficients"—that is, they do not have relative weights. The Trade Representative described the factors in some detail, with examples. Congressional guidance. According to the Trade Representative, his office consults with the Congress before and after FTA selection to ensure support and eventual congressional approval. USTR officials also examine public support, including the ethnic components of such support. Business and agricultural interest. The Trade Representative considers the views of business and agriculture and evaluates both the current and future economic benefits of a potential FTA. Special product sensitivities. The Trade Representative assesses whether an FTA would adversely affect certain sensitive sectors and products, such as textiles and sugar. Serious political will of the prospective partner to undertake needed trade reforms. The Trade Representative considers the political will in the foreign country to enact and implement trade reforms. He also assesses the country's trade capabilities and the candidate's track record in meeting current trade obligations. Willingness to implement other reforms. The Trade Representative stated that FTAs are a development tool that may help promote other economic reforms. The United States views these reforms as links to market-oriented economic development and future growth. Prospective FTA partners are expected to show serious intention in this regard to ensure that they understand (1) how important it is to make this commitment to reform and (2) the extent of the obligations that a comprehensive FTA with the United States involves. Commitment to WTO and other trade agreements. USTR considers a potential FTA partner's commitment to the trade disciplines in the WTO and the commitments being discussed at the ongoing FTAA negotiations. Contribution to regional integration. The United States has put in place initiatives to advance U.S. goals on a regional basis and foster regional economic integration. The Trade Representative told us that the Central American Free Trade Agreement (CAFTA)—covering Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua—and Chile FTAs have the potential to help integrate the whole region by helping to enact and implement the FTAA. Similarly, the SACU FTA may help integrate SACU's five member countries (South Africa, Botswana, Lesotho, Namibia, and Swaziland). Support of civil society groups. The Trade Representative highlighted the views of labor and environmental groups as important components of FTA selections because these views affect the prospects of congressional passage. Cooperation in security and foreign policy. The Trade Representative considers the extent to which potential partners are willing to support U.S. security and foreign policy objectives. For example, Jordan, Morocco, and Bahrain support U.S. objectives in the Middle East, and the CAFTA nations supported U.S. objectives in Iraq. Need to counter FTAs that place U.S. commercial interests at a disadvantage. The Trade Representative is interested in negotiating FTAs that will offer U.S. commercial interests opportunities on a par with those of other countries that already have FTAs. (See app. II for a list of European Union and U.S. FTAs.) Need to negotiate FTAs in each of the world's major regions.
The Trade Representative prefers to negotiate FTAs in each of the major regions of the world: Asia (Singapore, Australia, and Thailand); the Middle East (Jordan, Morocco, and Bahrain); Africa (SACU); and the Americas (CAFTA and the Dominican Republic). Need to ensure a mix of developed and developing countries. The Trade Representative also seeks FTAs with both developed and developing countries—for example, Australia and SACU. Developing countries are a key to trade growth because they account for a significant share of the world's population and represent an important negotiating bloc in the WTO. Demand on USTR resources. The Trade Representative recognizes that the resources needed for FTA negotiations are not unlimited. As a result of discussions among the relevant agencies, six factors now guide the discussions in selecting future FTA partners. Country readiness. Country readiness involves the country's political will, trade capabilities, and rule of law systems. U.S. agencies involved in FTA partner selection discussions may interpret this factor somewhat differently, since each agency filters the information through the lens of its specific mission. For example, USTR may review a prospective candidate's adherence to trade obligations and its leaders' commitment to negotiating all the trade issues currently included in the comprehensive FTAs that the United States seeks to negotiate. However, Treasury may look at the candidate's overall macroeconomic stability and the strength of its financial and banking system. Economic/Commercial benefit. According to U.S. officials, the interagency group reviews the likely economic benefit to the United States. It assesses macroeconomic benefits (trade and investment potential) and the likely effects on specific products and sectors. (See app. III for potential and existing FTA partners' share of total U.S. trade.) Benefits to the broader trade liberalization strategy. This factor relates to the prospective FTA partner's overall support for U.S. trade goals. Other elements considered within this category are the potential FTA partner's willingness to resolve trade problems through its participation in a Trade and Investment Framework Agreement with the United States, its success in meeting its WTO obligations, and its support of key U.S. positions in FTAA and WTO negotiations. Compatibility with U.S. interests. A potential FTA partner is examined for its compatibility with broad U.S. interests, including its support for U.S. foreign policy positions. One USTR official stated that sometimes a foreign leader's visit can prompt serious discussions that lead to that country's consideration as a future FTA partner. Likewise, the Trade Representative's foreign travels are also important in bringing attention to a possible FTA with a particular country. However, other requirements, including but not limited to WTO membership and a Trade and Investment Framework Agreement, must still be met. Congressional/Private-sector support. Agencies also review the extent to which a particular FTA selection has garnered support from the Congress, business groups, and civil society. U.S. government resource constraints. This factor focuses primarily on constraints at USTR—what regional office is available to lead the negotiation, what staff are available, and how the timing may affect meeting postnegotiation TPA requirements. Other agencies' resources also play a role in this discussion.
In terms of how the six selection factors are applied, according to the officials we interviewed, the broad factors guide the discussion, but they are not hard-and-fast decision rules. Moreover, administration decision makers have not set thresholds for eligibility determinations. Key officials told us that USTR's views are central but that the now-standard discussion of the factors permits each participating executive agency to contribute its perspective, thus potentially adding to the issues that USTR needs to address in future negotiations. For example, other agencies may be aware that a prospective partner has engaged in money laundering or human rights abuses or has been slow to resolve intellectual property disputes. As illustrated below, the FTA selections made to date in 2002-03 primarily reflect U.S. trade strategy, foreign policy, and foreign economic goals. (See app. IV for more details on specific FTA partners.) According to USTR, the administration is working aggressively on its "competitive liberalization" strategy because it seeks to spur progress by creating a positive dynamic to liberalize trade on multiple levels: bilaterally, regionally, and multilaterally. USTR also reports that the U.S.'s willingness to pursue bilateral FTAs has bolstered countries' interest and encouraged them to make the changes necessary to enter into FTA negotiations with the United States. Australia. This FTA negotiation represents the greatest immediate commercial benefit of any single ongoing FTA, with 1.2 percent of total U.S. trade in 2002. A U.S.-Australia FTA would add to the regional distribution of FTAs for the United States and would strengthen U.S. ties to a valued ally. The increased U.S. access to Australia's market would likely increase trade in goods and services, enhance employment opportunities, and encourage additional two-way investment. Bahrain. Although Bahrain represents a small share of U.S. trade, an FTA with this U.S. ally and moderate Muslim nation would support U.S. security and political goals by fostering prosperity in the region. As a stepping-stone to an eventual Middle East Free Trade Area, Bahrain could become the hub of a subregional bloc of countries with closer trading relationships with the United States. An FTA with Bahrain might be completed relatively quickly because of Bahrain's reform-minded outlook. Central American Free Trade Agreement. The commercial benefit of an FTA with the five Central American countries would be 0.95 percent of total U.S. trade. In the United States-Caribbean Basin Trade Partnership Act, the Congress declared that it was the policy of the United States to seek the participation of Caribbean Basin beneficiary countries in the FTAA or another FTA, with the goal of achieving full participation in any such agreement by 2005. CAFTA would provide regional balance among FTAs and add to the momentum for the hemispherewide FTAA, a major U.S. trade priority. It would also help lock in and broaden reforms such as anticorruption and government accountability measures, support economic integration within the region, and enable the United States to increase exports and gain access to more affordable goods. Dominican Republic. If the Dominican Republic is added to the overall CAFTA region, it would bring the CAFTA share of trade from 0.95 percent to 1.32 percent of total U.S. trade in 2002, slightly more than that of Australia.
The Dominican Republic had strong support in the Congress for its addition to the CAFTA negotiations, in part because excluding it from CAFTA could lead to adverse economic consequences in the Dominican Republic. However, according to a key participant in the discussion, the decision to add the Dominican Republic also included careful consideration of U.S. concerns about its protection of intellectual property rights and its status as one of the worst offenders on human trafficking.

Morocco. Although a U.S.-Morocco FTA would have minimal trade benefit to the United States, one USTR official stated that this FTA would further the administration's goal of promoting openness, tolerance, and economic growth across the volatile Middle East. Morocco, a moderate Muslim country, also signaled its readiness to enter into a comprehensive FTA by demonstrating its willingness to liberalize its economy and make domestic reforms.

Southern Africa Customs Union. Responding to congressional guidance in the 2000 African Growth and Opportunity Act, USTR initiated FTA negotiations with SACU in November 2002. This FTA advances the U.S. goal of regional balance among FTAs, creates an opportunity for the United States to build trade capacity in the region, and strengthens SACU's role as a negotiating partner in other trade forums, such as the WTO. The commercial benefit of this FTA represents 0.42 percent of total U.S. trade.

The selection of FTA partners has evolved from a limited high-level consultation to a more systematic and deliberative process involving more U.S. officials. USTR keeps the Congress apprised of potential FTA partners and routinely considers the Congress's views in making selections. Business and other nongovernmental groups have also provided their views to USTR on potential FTA partners and FTA negotiations.

In February 2002, the Trade Representative made recommendations for potential FTA partners to a cabinet-level interagency group under the leadership of the NSC/NEC. According to agency officials, this interagency group informally assessed the proposed countries and offered a consensus recommendation to the President, who named the four FTAs that are currently under negotiation (Australia, CAFTA, Morocco, and SACU). We found no evidence that this group used decision papers on the potential partners to guide its deliberations. Nevertheless, some high-level U.S. officials we interviewed confirmed that they provided USTR and other key trade agencies with input at the time and supported the final selections. Other officials, however, expressed concern that the discussions of the four FTAs had been ad hoc and that they had not been able to provide important input.

Also in February 2002, the cabinet-level interagency group directed its deputies to make the process more systematic by formalizing the factors that would be used for assessing future FTA partners. The desire for a more systematic interagency process for assessing partners was largely driven by the expected growth in the number of potential FTAs that would follow the enactment of the trade promotion authority legislation. In May 2003, the NSC/NEC issued guidelines on assessing potential FTA partners. In addition to identifying the factors to be used, the guidelines make the interagency process more inclusive by supporting the use of four standing interagency groups for in-depth deliberations.
Each group in turn is to use decision papers to assess potential FTA partners and make recommendations for consideration at the next level, all the way up to the President. After the President selects an FTA partner, he is to notify the Congress, through USTR, at least 90 days before he intends to start FTA negotiations with the selected partner. USTR consults with the Congressional Oversight Group before sending its notification letter about a prospective FTA negotiation to the Congress.

As shown in figure 2, the selection process is initiated by USTR and begins with the assessments of potential FTA partners by the Trade Policy Staff Committee (TPSC) and the Trade Policy Review Group (TPRG). The TPSC is composed of senior officials from more than 19 U.S. agencies and departments who bring specialized technical knowledge on trade issues to the deliberations. The TPRG is composed of under secretaries or assistant secretaries and other senior officials from all of these U.S. agencies and departments who contribute policy perspectives on trade to the discussions. Although USTR leads and coordinates the interagency discussions, other agencies are expected to play an important role in developing pertinent information and discussing the pros and cons of potential FTA partners.

The next level of the process consists of the Deputies Committee and the Principals Committee, two interagency groups that the NSC/NEC leads and coordinates. The Deputies Committee is composed of the deputies from all the cabinet agencies involved in trade. The Principals Committee is composed of the secretaries of these agencies, such as the Trade Representative and the Secretaries of State and the Treasury. Deputies and Principals meet and use decision papers as needed to assess potential FTA partners before forwarding their recommendations to the President.

USTR and other agencies used this new interagency process for the first time in assessing the Dominican Republic as a potential FTA partner in mid-2003. Agency officials with whom we spoke expressed satisfaction that this process enabled their agencies to contribute to the assessment of potential FTA partners and strengthen the content of the decision papers. Nevertheless, because the process is new, it remains to be seen how it will continue to perform.

Input from the Congress and the private sector is part of the process of selecting potential FTA partners and negotiating FTAs, according to USTR officials. Although the President is not specifically required to consult with the Congress before selecting potential FTA partners, USTR officials nevertheless stated that they keep the Congress apprised of the FTA partners under consideration through formal and informal means. According to these officials, the views of the Congress are very important to their agency and are seriously considered in FTA partner selections because the Congress must ultimately approve all FTAs. USTR gave us an extensive list of pertinent contacts between the agency and the Congress to confirm these discussions. As required by the TPA legislation, USTR has notified and consulted with the Congress about FTA negotiations. For instance, USTR has provided written notice to the Congress at least 90 days before initiating FTA negotiations since the passage of TPA.
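The layered review chain just described, running from working-level assessment through cabinet-level deliberation to a presidential decision, can be summarized schematically. The short Python sketch below is our own illustrative simplification, not an official workflow or tool; only the names and ordering of the review stages come from this report, and the data structure and function are hypothetical:

```python
# Illustrative outline of the interagency FTA partner review chain described
# above. Stage names and order come from the report; everything else is a
# hypothetical simplification for clarity.

from dataclasses import dataclass, field

@dataclass
class DecisionPaper:
    candidate: str                                   # potential FTA partner
    reviews: list[str] = field(default_factory=list)

# Review stages in order, from working level up to the President.
REVIEW_CHAIN = [
    "Trade Policy Staff Committee (TPSC)",  # senior officials, technical review
    "Trade Policy Review Group (TPRG)",     # under/assistant secretaries
    "NSC/NEC Deputies Committee",           # cabinet deputies
    "NSC/NEC Principals Committee",         # cabinet secretaries
    "President",                            # makes the final selection
]

def escalate(paper: DecisionPaper) -> DecisionPaper:
    """Record each stage's review as the decision paper moves up the chain."""
    for stage in REVIEW_CHAIN:
        paper.reviews.append(f"Assessed and forwarded by {stage}")
    return paper

paper = escalate(DecisionPaper(candidate="Dominican Republic"))
print("\n".join(paper.reviews))
# After selection, USTR must notify the Congress at least 90 days
# before FTA negotiations begin.
```

In practice, each level can also return or reject a candidate rather than forward it; the sketch omits that branching, as well as the parallel congressional consultation described above.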
Few Members of Congress have openly questioned choices of FTA partners to date, and those Members who have raised questions still expressed broad support for the "competitive liberalization" strategy. Nevertheless, certain Members of Congress have urged USTR to give greater priority to economic and commercial conditions in selecting future FTA partners. Also, business and nongovernmental groups have given USTR their views on potential FTA partners and FTA negotiations. In late 2002, for instance, a major U.S. business group provided USTR with its views on potential FTA partners and on the factors that USTR and other U.S. agencies involved in trade should consider during the assessment of potential partners. Nongovernmental groups have also provided input on FTA negotiations. However, representatives of some of these groups indicated that they were not sure whether USTR had seriously considered their comments.

Despite the administration's ambitious and growing FTA agenda, USTR and other agencies have made resource decisions without considering resource trade-offs among FTAs and other trade priorities. FTAs are resource intensive, and USTR has taken some measures to cope with resource constraints. Nevertheless, the administration continues to consider new FTAs. Present strategies for managing staff and other resources mean that newly announced FTA partners will have to wait to begin negotiations until other ongoing negotiations are concluded. Although resource constraints are now one of the factors taken into account when USTR and other agencies select FTA partners, these interagency discussions still leave gaps because they are not based on robust data and do not specify resource needs or commitments.

The administration's ambitious trade agenda has driven its resource decisions about FTAs and other trade priorities. Since the enactment of TPA in August 2002, the administration has stepped up its pursuit of bilateral and subregional FTAs as part of its overall strategy of competitive liberalization. As shown in figure 3, the United States now has numerous, simultaneous FTA negotiations under way, with ambitious target dates for completion. Although it took 2 years to negotiate two FTAs with relatively advanced partners (Chile and Singapore), USTR currently has FTAs under negotiation with four partners, three of which (Australia, Morocco, and CAFTA) are slated to be completed within 1 year. Negotiations for the fourth partner (SACU) will be conducted through 2004, as will negotiations for Bahrain. In addition, USTR officials hope to complete negotiations with the Dominican Republic in early 2004.

The administration's decisions to pursue these FTAs have been made with little formal consideration of potential resource trade-offs, even though the WTO and FTAA negotiations are scheduled to finish by January 1, 2005. As a result, USTR has had to deploy its resources in a reactive manner. According to agency officials, the four FTAs currently being negotiated were selected before any explicit resource decisions were made because USTR officials assumed that resources would be identified afterward to carry out these priority negotiations. According to USTR, in these cases the resources were "made to fit" the priorities.

FTA negotiations require intensive effort on the part of USTR and other trade agencies such as Agriculture, Commerce, State, and Treasury.
For example, our analysis of the U.S. negotiating teams suggests that, on average, each of the six FTAs under negotiation in 2003 involved 11 percent of USTR's 209 full-time staff. In addition, USTR estimates prepared for us show that the nonstaff costs of negotiating rounds in fiscal year 2003 were $1.7 million, of which approximately 68 percent were travel costs (see table 1). Moreover, FTA travel made up 37 percent of USTR's total travel costs in fiscal year 2003, and USTR estimates that it will constitute 42 percent of its total travel costs in fiscal year 2004.

Although USTR takes the lead for all negotiating groups except financial services, it relies on other agencies, such as Agriculture, Commerce, State, and Treasury, for analysis, expertise, and staff to support its negotiations. For example, other trade agencies regularly provide staff on a nonreimbursable "detail" (loan) basis to USTR; USTR currently has more than 30 such detailees. In addition, of the 134 U.S. officials present for the first five rounds of the Australia FTA negotiations, 22 were from USTR and the rest (112) came from other agencies. In fact, table 2 shows that officials from other agencies accounted for an average of 76 percent of all members of U.S. FTA negotiating teams. However, while table 2 conveys the wide range of officials who are part of an FTA negotiating team, it does not capture the "staff effort" needed to support the team, because none of the agencies involved routinely tracks staff time devoted to FTA negotiations, and only one agency was able to produce estimates for us. According to USTR officials, nearly all USTR staff are involved in each FTA before, during, or after negotiating sessions. One Agriculture official said the department's delegates to the negotiating team were just the tip of the iceberg because many other people at Agriculture were involved in providing complex analyses during the negotiations. Commerce data prepared for us (see table 3) show that a large number of staff support FTAs, but their total staff hours translate into fewer full-time equivalents.

The conclusion of negotiations does not mean that the work is completed on a given FTA. Additional demands, such as legal checks and translation activities, continue. For example, USTR officials reported that negotiations in the Americas have been slowed because of follow-up work after the signing of the Chile FTA. The increase in the number of FTAs is also likely to result in higher implementation-related needs, such as monitoring, enforcement, and dispute resolution. Our prior work has highlighted concerns about the increasing monitoring and enforcement workload at trade agencies, and USTR estimates that every three additional disputes require an additional legal specialist.

USTR's approach to dealing with resource constraints is to sequence negotiations, conducting one set per region at a given time in order to leverage the expertise of its negotiators. As a result, as depicted in figure 4, USTR's Office for the Americas will not start negotiating with the Dominican Republic until after the CAFTA negotiations have been completed. Similarly, although Bahrain was ready to begin negotiating immediately with the United States, USTR's Office of Europe and the Mediterranean will postpone those negotiations until the completion of negotiations with Morocco. USTR has indicated that it will continue to schedule negotiations in each region after the current set of FTAs is completed. Thus, regional negotiators will remain fully occupied, and the queue of countries waiting to negotiate with the United States will likely grow.
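A rough back-of-the-envelope calculation conveys the scale of the staffing and travel commitments cited above. The sketch below is illustrative only; it uses just the figures reported in this section and assumes, for simplicity, that the 11-percent average applies uniformly to each negotiation (staff can and do overlap across FTAs, so the nominal total double counts people):

```python
# Back-of-the-envelope estimate of USTR resource commitments to FTAs,
# using only figures cited in this report. Assumes the 11-percent average
# applies uniformly per FTA; staff overlap means the total double counts.

USTR_STAFF = 209            # USTR full-time staff
SHARE_PER_FTA = 0.11        # average share of USTR staff involved per FTA
FTAS_IN_2003 = 6            # FTAs completed or under negotiation in 2003

NONSTAFF_COSTS = 1_700_000  # nonstaff costs of FY2003 negotiating rounds ($)
TRAVEL_SHARE = 0.68         # portion of those costs that was travel

staff_per_fta = USTR_STAFF * SHARE_PER_FTA          # about 23 people
nominal_commitments = staff_per_fta * FTAS_IN_2003  # about 138 staff-slots

print(f"Staff involved per FTA: ~{staff_per_fta:.0f}")
print(f"Nominal staff commitments across six FTAs: ~{nominal_commitments:.0f} "
      f"({nominal_commitments / USTR_STAFF:.0%} of USTR staff, with overlap)")
print(f"FY2003 FTA travel spending: ~${NONSTAFF_COSTS * TRAVEL_SHARE:,.0f}")
```

Even with generous allowance for double counting, the arithmetic suggests that concurrent FTA negotiations nominally tie up well over half of USTR's staff at some point in the cycle, which is consistent with USTR officials' statement that nearly all staff touch each FTA at some stage.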
In addition, USTR officials reported that they are using past agreements as a template for the ongoing negotiations. This strategy has progressed to the point that USTR now believes it can save resources by having countries accede to already negotiated FTAs. This process, called "docking," means that negotiators will not have to spend time renegotiating every area. For example, USTR officials stated that the Dominican Republic will be integrated into the U.S.-CAFTA FTA and that only market access issues should require separate, detailed negotiations. Although CAFTA will require 1 year to complete, USTR expects that docking the Dominican Republic onto the agreement will take considerably less time. USTR is also considering how to integrate separate FTAs as it works toward a U.S.-Middle East Free Trade Area.

USTR is taking other measures to save resources. For example, USTR officials noted that they regularly combine various missions in one trip abroad and that they use extensive teleconferencing. In addition, USTR officials reported that they have cut costs by holding meetings in a central location and conducting negotiations in English when possible to avoid interpretation expenses. USTR is also improving its system for tracking TPA requirements for each FTA. To facilitate interagency collaboration, USTR developed a negotiations calendar listing the various bilateral, regional, and multilateral negotiating rounds so that negotiators may better identify competing demands.

Finally, concerns that the FTA agenda would continue to be busy led to the inclusion of resource constraints as a factor in FTA partner selection during the interagency process. This step represents an improvement over the past situation, in which no formal discussion of resource constraints or trade-offs preceded FTA partner selection even though USTR and other trade agencies already faced human capital challenges. As a result, resource constraints are now a standard part of interagency FTA partner selection discussions. One official welcomed this development because it allows assumptions about resource allocations to be made ahead of time and consideration to be given to how resources are already devoted to ongoing bilateral and regional efforts.

Despite USTR's efforts to better manage resource constraints, important gaps remain. For example, decisions about staffing and funds for FTA negotiations lack formal data and systematic consideration of their likely impact on other trade priorities. Moreover, USTR continues to make specific requests for resources from other agencies on a case-by-case basis, after FTA partners are selected, making it difficult for these agencies to do their own resource planning for FTAs.

USTR's resource data are not sufficiently robust for resource planning, and this limits USTR's flexibility in meeting its resource needs. When assigning resources for the current set of FTAs, USTR officials did not have clear data on hand regarding what was needed and what resources were available. We reported in 2002 that valid and reliable data are critical to assessing an agency's workforce requirements and to heightening an agency's ability to manage risk by allowing managers to (1) spotlight areas for attention before crises develop and (2) identify opportunities for enhancing agency results.
In 2003, we also noted the importance of considering human capital challenges by relying on valid and current data, and we reported that the absence of such data can seriously undermine efforts to respond to current and emerging challenges. USTR has indicated that it is developing a new system for tracking spending according to different trade priorities, including FTAs, but this system is not yet operational. In addition, although staff time is a major resource devoted to FTAs, USTR officials informed us that they have no plans to track the time staff spend working on FTAs.

The importance of systematic data and planning can be seen in the constraint imposed by the limited number of functional experts, who focus on areas such as intellectual property rights, agriculture, and market access. These experts are often needed to support multiple, concurrent negotiations. However, the USTR offices in which these staff work average only eight people each, so they often represent a limiting factor in completing FTA negotiations.

USTR officials reported that, in addition to decisions based on advance planning that takes the U.S.'s various trade priorities into account, they make many resource management decisions informally on an ongoing basis. For example, although regional assistant U.S. trade representatives provide staff and travel estimates as part of the annual budget cycle, they frequently bring specific resource requests to USTR management throughout the year. USTR officials, who must mediate among these often competing priorities, told us that they looked at several factors—for example, negotiating deadlines and the need for specific expertise—to make these resource decisions. If there were competing demands for staffing for the Morocco and SACU negotiations, for example, USTR management might consider the need more pressing for the Morocco negotiator because of that negotiation's shorter deadline for completion (the end of 2003, versus the end of 2004 for SACU). If USTR managers identify a lack of available staff to cover certain issues, they then turn to other agencies to supplement their own staff.

This informal, reactive approach may no longer be adequate to meet the needs of increasing numbers of negotiations, particularly if the U.S.'s trade strategy shifts to an emphasis on bilateral agreements in the wake of the failed Cancun ministerial of the WTO. Moreover, this approach also affects resource management at other trade agencies. The revised interagency process has not made requesting and securing staff from other agencies more systematic, because participants do not address specific staffing needs or other cost estimates in detail in the formal interagency meetings, such as the TPSC and the TPRG, that are used to select FTA partners. Instead, the discussion of resource constraints focuses more on matters like the timing of multiple FTAs in the same region, such as Morocco and Bahrain. Specific requests for and commitments of resources by other agencies still occur after FTA partners are selected. According to USTR, after an FTA partner country is selected, the Trade Representative's office asks the assistant U.S. trade representatives for a listing of officials at USTR and other agencies they propose to constitute the U.S. negotiating team. On the basis of these lists, USTR managers report that they talk to their respective counterparts at other agencies regarding USTR's needs.
These discussions generally begin just before USTR notifies the Congress about the FTA and are ongoing thereafter as negotiations get under way. According to USTR, the ad hoc nature of these requests is due in part to USTR's varying needs for different agencies' involvement, depending on the topics being negotiated and the changing requirements over time.

USTR's reactive method for requesting staff makes it difficult for other agencies to plan their own resources. Commerce officials, for example, noted that the department had much less notice about FTA staffing needs than it did about the need for staff support during the NAFTA negotiations. Agencies report that they were generally able to comply with USTR's requests but noted that the requests sometimes strained their resources. Agency officials told us that at times they had to make trade-offs when the same person was requested for concurrent negotiations. According to Treasury officials, they have had to "perform triage" on some operations due to the heavy FTA workload. Other agencies also noted the burden of travel costs. Although agencies continue to respond to USTR's informal method of requesting resources, it is unclear how well this system will continue to function in light of the intensifying FTA agenda.

After selecting the first several FTA partners with limited interagency consultation, the administration has adopted a more rigorous and inclusive process to implement its FTA agenda. This framework for interagency discussions appears to be promoting fuller deliberations and wider involvement in the FTA partner selections. However, other management challenges remain. In particular, USTR and other agencies have reported that FTA negotiations are already straining available resources. Several steps have since been taken to deal with resource constraints associated with FTAs. However, present mechanisms still leave important gaps because they do not involve systematic data or interagency resource planning. As the United States sets its sights on more bilateral agreements, especially in light of the breakdown of the September 2003 Cancun negotiations, the importance of managing trade priorities at USTR and other trade agencies becomes increasingly significant. Managing resources, especially across diverse agencies, is paramount in meeting the competing demands of a complex and intensifying U.S. trade agenda.

In light of USTR's limited resources and management systems to track those resources, we recommend that the Office of the U.S. Trade Representative work with other key trade agencies to develop more systematic data and plans for allocating staff and resources across the full U.S. trade agenda, including FTAs and other negotiating priorities.

We provided a draft of this report to the departments of State, Commerce, Agriculture, and the Treasury. State and Treasury did not provide comments. We received written comments on a draft of this report from the U.S. Trade Representative (see app. V). USTR, Commerce, and Agriculture also provided technical comments, which we incorporated in the report as appropriate.

In his response to our draft report, the Trade Representative emphasized the administration's competitive liberalization strategy and the role of FTAs in the strategy. He laid out the steps his office is taking to promote liberalized trade and described what the administration is doing in several regions throughout the world.
He referred to resource pressures when he noted that his office is pressing forward with global, hemispheric, and five subregional or bilateral FTA negotiations simultaneously, even as USTR litigation activities have soared, with WTO disputes doubling over the last 5 years. The Trade Representative agreed with us that the intensifying trade agenda requires continual management improvements at USTR and supporting agencies, and he acknowledged that increased pressures demand "nothing less than a transformation of USTR." However, he did not agree with our recommendation that USTR and other key trade agencies develop more systematic data and plans for allocating staff and resources across the full trade agenda. The Trade Representative wrote that our emphasis on a better allocation of staff and resources reflects an inaccurate assessment of how to allocate limited resources most effectively and efficiently. According to the Trade Representative, the main cause of strain at USTR is the amount of available resources, not their allocation. The Trade Representative maintained that USTR must be "agile, flexible and adaptable—not bureaucratic."

We believe that aligning goals and resources promotes the flexibility needed to respond to evolving circumstances. Our recommendation focuses on setting priorities among the multilateral, hemispheric, and FTA negotiations that take into account available staff. It also calls for coordinating those staff allocations with other agencies whose resources USTR routinely calls upon during the course of negotiations. Resource management fundamentally involves taking a given (and limited) amount of resources and deploying it over program objectives aligned with the agency's overall priorities. This approach frees managers to focus on their core programs, not on continually reacting to the daily fluctuations of resource needs.

The resources that USTR requested in fiscal year 2004 appear to have been justified based on its needs for completing the four ongoing FTAs (Australia, Morocco, CAFTA, and SACU). Since then, negotiations with the Dominican Republic, Bahrain, Panama, and the Andean countries have been announced. This increasing workload, with its related demand for staff and travel, can be better managed with (1) the collection of data to help managers understand what resources are linked to accomplishing agency objectives and (2) the use of these data in advance planning for future resource allocation, which can help USTR managers coordinate with other agencies whose own resources are affected by USTR negotiations.

The Trade Representative listed several steps that USTR has taken to address its resource limitations. Although we recognize and encourage the improvements that USTR has already made, we note that many of these efforts are already recognized in this report and are not sufficient to address our concerns about forward planning. The Trade Representative pointed to the fact that we did not identify any "misallocation of funds." However, only solid data would permit sound conclusions about how federal funds are managed at USTR. The limited information that USTR and Commerce ultimately provided us had to be specially tabulated for this report because it is not routinely tracked. Our data show that FTAs involved considerable resources at both USTR and other agencies.
Specifically, 37 percent of USTR travel funds were used for FTA-related travel in 2003, and 11 percent of USTR's staff were involved in each of the six FTAs completed or under negotiation in 2003. These data also show that other agencies account, on average, for more than three-fourths of the members of U.S. FTA negotiating teams, which averaged 106 members. Thus, USTR and other agencies commit significant resources to trade initiatives that cover 8 percent of total U.S. trade.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the U.S. Trade Representative, the Secretary of Commerce, the Secretary of the Treasury, the Secretary of State, and the Secretary of Agriculture. Copies will also be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VI.

Senator Max Baucus, Ranking Minority Member of the Senate Finance Committee, and Representative Calvin Dooley asked us to examine the factors and process used to make decisions regarding the selection of free trade agreement (FTA) negotiating partners and the allocation of negotiating resources. In response, we (1) provided information about the factors that influence the selection of FTA partners and described how they were developed; (2) analyzed the interagency process for selecting FTA partners, including how the Office of the U.S. Trade Representative (USTR) coordinates the views of key agencies and consults with the Congress and business and nongovernmental groups; and (3) assessed how the executive branch makes decisions regarding the availability and allocation of resources to FTAs and other trade priorities, such as the regional talks of the Free Trade Area of the Americas (FTAA) and the multilateral talks at the World Trade Organization (WTO).

To provide information about the factors that influence the selection of FTA partners and how they were developed, we reviewed pertinent documentation from key U.S. agencies involved in assessing potential FTA partners, such as USTR and the departments of State and Commerce. For example, we reviewed pertinent USTR documentation from 2000 to 2003 on FTAs, including public speeches, articles, and agency documentation on FTA partners. We also reviewed U.S. International Trade Commission documents on FTAs and Congressional Research Service reports on U.S. trade and FTAs. In addition, we interviewed knowledgeable officials at the key agencies involved in the process of assessing potential FTA partners. Specifically, we interviewed the U.S. Trade Representative, the Deputy U.S. Trade Representative, and several assistant U.S. Trade Representatives; the Director of the Office of International Economics at the National Security Council (NSC); the Under Secretary for Economics, Business, and Agricultural Affairs at the Department of State; the Assistant Secretary for International Affairs at the Department of the Treasury; the Under Secretary of Farm and Foreign Agricultural Services at the Department of Agriculture; and the Under Secretary for International Trade at the Department of Commerce.
To analyze the interagency process for selecting FTA partners, including how USTR coordinates the views of key trade agencies and consults with the Congress and business and nongovernmental groups, we reviewed pertinent documentation from key U.S. agencies involved in the process of selecting FTA partners. For example, we reviewed USTR documentation from 2000 to 2003 on FTAs, including public speeches, articles, agency documents, records of contacts with the U.S. Congress, records of public hearings, and papers on FTA partners prepared for the consideration of the Trade Policy Staff Committee and the Trade Policy Review Group. We also interviewed the same key agency officials identified above. In addition, we obtained information from business and nongovernmental organizations, including the U.S. Chamber of Commerce, the Washington Office on Latin America, Oxfam America, Public Citizen, World Vision, and the Center for International Environmental Law.

To assess how decisions are made regarding the availability and allocation of resources to FTAs and other trade priorities, we reviewed pertinent documentation from key U.S. agencies involved in assessing FTA partners, such as USTR and the departments of State and Commerce. For example, we reviewed USTR documentation from 2000 to 2003 on FTAs, including papers on potential FTA partners, lists of FTA negotiating teams, and budget and personnel-related data. Because negotiating lists were not complete for four of the negotiations, we asked USTR to provide summary numbers of the participating agencies. For the other two negotiations, we did our own analysis of agency staffing based on the negotiating lists provided by USTR. As noted in the text, these data merely identify the number of individuals involved and do not necessarily reflect staff effort. We determined that USTR data were sufficiently reliable for purposes of our assessment, even though, as our recommendation indicates, these data are not sufficiently robust for agency decision making and should be improved. Moreover, we again interviewed the key agency officials identified above.
Despite repeated requests to the NSC, we were unable to obtain key documents from the February 2002 and May 2003 meetings that provided guidance to the interagency efforts to formalize the criteria and enhance the process for developing recommendations to the President for selecting potential FTA partners. We conducted our review from June to November 2003 in accordance with generally accepted government auditing standards.

[An appendix table listing FTAs worldwide is omitted here; its legible entries include future European Union FTAs under negotiation with the Former Yugoslav Republic of Macedonia, the Gulf Cooperation Council (Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates), the Common Market of the South (Argentina, Brazil, Paraguay, and Uruguay), and the overseas countries and territories of the European Union.]

The following table presents trade data that describe the percentage and amount of total U.S. trade with current and potential FTA partners, as well as with non-FTA countries. Excluding the FTAA negotiation, the current FTA negotiating partners (CAFTA, SACU, Morocco, Australia, the Dominican Republic, Bahrain, Thailand, Panama, and the Andean countries) collectively account for about 4.7 percent of total U.S. trade. Of these nine partners, Australia contributes almost 1.2 percent, or about 25 percent of their combined trade. Chile and Singapore account for another 2.0 percent of U.S. trade. In contrast, NAFTA brought together the U.S.'s top two trading partners (Canada and Mexico), representing about 28 percent of total U.S. trade. Completing the FTAA negotiations would bring an additional 3.0 percent of total trade under FTA disciplines.

In this appendix, we describe the background, considerations in FTA partner selection, milestones, features, concerns, and FTA partner participation for six countries with which the United States has or intends to have FTAs. We also describe those components for two regional entities—CAFTA and SACU.

The United States and Australia are among the world's most open economies. Both countries are prominent supporters of trade liberalization and have maintained a stable commercial relationship, having brought only a few dispute resolution cases against each other in the WTO. In 2002, Australia accounted for more than $13 billion in U.S. exports, and total two-way trade between the United States and Australia was almost $20 billion that year. The United States and Australia have signed two bilateral agreements—the settlement on leather products trade in 1996 and the understanding on automotive leather subsidies in 2000. For several years, Australian officials expressed to U.S. policy makers Australia's interest in an FTA with the United States, and the current Prime Minister raised this matter in meetings with President Bush. Until recently, though, the Bush administration had expressed interest but had not committed to beginning negotiations. However, the FTA negotiations between the United States and Australia are starting from a strong base, given the similarity of the structure of their economies and the compatibility of their trade policies.

USTR highlighted several reasons why Australia was selected as an FTA partner in 2002. First, two-way trade between the United States and Australia grew significantly in the past decade. In 2002, the United States exported $13.1 billion to Australia, the 13th largest destination of U.S. exports, and imported $6.5 billion from Australia, the 28th largest source of U.S. imports.
Second, the increased U.S. access to Australia's market made possible by an FTA would further boost trade in both goods and services, enhancing employment opportunities in both countries. Third, an FTA would encourage additional foreign investment between the United States and Australia, adding to the many jobs that the significant investment flows between the two countries currently support. Fourth, an FTA would result in greater business integration, especially in the information technology sector, increasing efficiency and the competitiveness of U.S. industry. Overall, U.S. manufacturers and services providers support these FTA negotiations. Finally, an FTA would address barriers that U.S. exports to Australia face today, including Australia's use of sanitary and phytosanitary measures as a means of restricting agricultural trade.

In November 2002, USTR notified the Congress of the United States' intent to enter into FTA negotiations with Australia after the required 90-day notice period. In February 2003, the United States and Australia started the first of six planned negotiating rounds. The United States and Australia had intended to complete the negotiations by the end of 2003, but negotiations will continue into 2004.

The WTO requires that an FTA, at a minimum, substantially eliminate tariffs and other restrictions on mutually traded goods and services. However, the U.S.-Australia FTA is likely to be more comprehensive, given the broad negotiating objectives that the two governments have announced, which cover agriculture, industry, and services issues. The negotiations will address 20 broad trade-related issues: market access for goods, agriculture, textiles, rules of origin, customs administration, sanitary and phytosanitary measures, technical barriers to trade, trade remedies, services, investment, telecommunications, financial services, competition policy, government procurement, electronic commerce, intellectual property, labor, environment, transparency, and institutional arrangements and dispute settlement. USTR leads the U.S. delegation, whose other members include the NSC; the departments of State, Commerce, Agriculture, Labor, Justice, and the Treasury; the Environmental Protection Agency; and the Federal Trade Commission.

The United States and Australia have a firm trade relationship, and their tariffs on most products are already very low. Therefore, the critical issues in the FTA negotiations will involve nontariff barriers and related matters. According to trade policy experts, agricultural issues will be the greatest challenge during these negotiations. For example, agriculture accounted for only 2.2 percent of U.S. exports to Australia but for 29.2 percent of U.S. imports from Australia in 2002, and some in the U.S. agricultural community oppose the negotiations. The most recent round of negotiations took place in October 2003, and each side has presented its own proposals and raised concerns regarding agricultural issues. Australia takes issue with U.S. tariff-rate quotas on dairy products, sugar, beef, and many other products and has announced that it will seek the removal of these quotas during the FTA negotiations. Government-run commodity boards control Australian exports of wheat and rice; because these boards restrict U.S. exports, the United States has targeted them for removal during the FTA negotiations.
Separately, the United States has also targeted specific Australian sanitary and phytosanitary measures because they are highly restrictive and have adversely affected U.S. exports of citrus, apples, pears, corn, stone fruit, chicken, and pork. Bilateral discussions to resolve these sanitary and phytosanitary issues are proceeding on a track parallel to the FTA negotiations. Because all foreign investment in Australia is subject to government screening and approval, the United States has noted that Australia does not conform to the principle of national treatment—that is, treating foreign investors no less favorably than domestic investors. As a result, the United States will seek the elimination or reduction of these trade-distorting investment measures. U.S. officials are concerned that, even after these irritants are resolved and the FTA is implemented, the United States may face many disputes on agricultural matters and other issues with Australia, one of its closest allies.

Australia is a WTO member and has had FTAs with New Zealand since 1966 and with Singapore since 2003. Australia and the United States are founding members of the Asia-Pacific Economic Cooperation (APEC) forum, an organization of 21 member economies that has established the goal of free trade and investment in that region by 2020.

Bahrain is an emerging regional financial center in the Persian Gulf region. The United States has been holding talks on economic policy with the Gulf Cooperation Council (GCC), of which Bahrain is a member, through the U.S.-GCC Economic Dialogue. In 2001, the United States and Bahrain signed a bilateral investment treaty. On June 18, 2002, the two countries signed a Trade and Investment Framework Agreement, which enabled the United States to increase its engagement with Bahrain on economic reforms and on bilateral trade and investment issues.

USTR emphasized several reasons for selecting Bahrain as an FTA partner. First, an FTA with Bahrain would support U.S. security and political goals by increasing prosperity and globalization in the region. Second, the executive branch views the U.S.-Bahrain FTA as a stepping-stone to an eventual Middle East Free Trade Area (MEFTA); Bahrain could become the hub of a subregional bloc of countries that might develop closer and more open trading relationships with the United States. Third, Bahrain has been an important U.S. ally in the region. Fourth, USTR emphasized Bahrain's readiness to undertake an FTA with the United States, particularly in comparison with other states in the Persian Gulf region. U.S. officials emphasized the commitment at the highest levels of the Bahraini government to make strong economic and political reforms to facilitate trade; Bahrain has made economic reforms in areas such as property rights and copyright laws and political reforms such as strengthening its parliament. According to USTR officials, an FTA could be completed relatively quickly with Bahrain because of its small size and reform-minded outlook. Finally, USTR officials emphasized that an FTA with Bahrain would generate opportunities for U.S. business.

In January 2003, the King of Bahrain raised the idea of an FTA in a meeting with President Bush. In May 2003, the Trade Representative met with the Bahraini Crown Prince and announced the executive branch's plans for negotiating an FTA with Bahrain. On August 4, 2003, USTR notified the Congress of the administration's intent to initiate negotiations for an FTA with Bahrain. The target date for beginning negotiations is January 2004.
The U.S.-Bahrain FTA is expected to be a key part of the MEFTA that the United States is supporting to address the related problems of terrorism and poverty in the region. According to the World Bank, unemployment in the Middle East is estimated conservatively at around 15 percent, and the labor force could expand by as much as 40 percent in the next 10 years. In addition, USTR notes that the region has extremely low rates of internal trade. The United States hopes that MEFTA could encourage economic reforms that would spur investment and increase opportunities in the region. In Jordan, for example, which signed an FTA with the United States in 2001, exports to the United States grew by 72 percent in 2002, and the United States is now Jordan's biggest trading partner. USTR has outlined a step-by-step approach to building a MEFTA that takes into account the different developmental and economic levels of the countries in the region. These steps include supporting the potential partner country's membership in the WTO; expanding the Generalized System of Preferences (GSP) Program to increase U.S. trade with the Middle East; signing bilateral investment treaties, trade and investment framework agreements, and ultimately FTAs; and providing financial and technical assistance for trade capacity-building. The President's Middle East Partnership Initiative will help direct more than $1 billion per year from U.S. government agencies to support trade in the Middle East.

Despite Bahrain's and the U.S.'s interest in establishing an FTA, U.S. government officials with whom we spoke described regional influences that may serve as potential obstacles to countries in the Persian Gulf region that would like to make progress on trade with the United States. Bahrain is a member of the GCC customs union, which is still developing its trade rules. In 1989, the European Commission and the GCC signed a Cooperative Agreement that contains a commitment from both sides to enter into FTA negotiations, and the two entities are now actively pursuing FTA talks.

Preceding the U.S.-Chile FTA negotiations, which began in December 2000, Chile undertook political and economic reforms that positioned the country to implement a comprehensive trade agreement. Before the negotiations, Chile deregulated and restructured its economy and opened its trade ties to industrial countries. For example, in 1994 Chile reacted positively to the possibility of becoming a party to NAFTA, but negotiations ceased, due in part to the expiration of the U.S. President's fast-track negotiating authority, which the Congress did not renew for 8 years; with this delay, Chile's accession to NAFTA did not occur. Nevertheless, the U.S.-Chile Joint Commission on Trade and Investment was founded on the occasion of President Clinton's visit to Chile in 1998. The commission established a work program to address a variety of bilateral trade and investment issues and facilitated the exchange of trade information. Thus, both countries were prepared to negotiate a comprehensive FTA when the negotiations began.

A variety of factors may have contributed to the U.S.'s decision to initiate an FTA with Chile. First, U.S. exports faced a 6 percent Chilean tariff, while exports from Chile's existing FTA partners entered the Chilean market duty-free.
Partly as a result, Chile reduced its purchases of U.S. exports by almost one-third, from $4.38 billion in 1997 to $3.13 billion in 2001, in favor of relatively cheaper goods from its FTA partners. An FTA with Chile provided the opportunity to eliminate the tariff disadvantage facing U.S. exports, and USTR noted that the FTA would ensure that U.S. businesses and investors received treatment equal to or better than that accorded Chile's other FTA partners. Second, Chile had adopted economic reforms, such as the elimination of price controls and the privatization of state-owned enterprises, signaling that it was willing to solidify these reforms by implementing a mutually beneficial FTA. Finally, through FTA negotiations, the United States hoped to build Chile's support for important issues in the FTAA negotiations. For example, the U.S.-Chile FTA negotiations better defined key negotiating issues in areas such as labor and the environment and demonstrated to other countries participating in the FTAA negotiations the U.S.'s interest in furthering trade liberalization.

On November 29, 2000, the Clinton administration announced its intention to negotiate a comprehensive FTA with Chile. Negotiations began on December 6, after U.S. and Chilean officials agreed on the initial list of topics to be discussed and the organization of negotiating groups. During the negotiations, and following the change in U.S. administration, the U.S. and Chilean presidents declared their intention on April 16, 2001, to complete the agreement by the end of that year, with meetings scheduled to occur approximately once a month through the end of 2001. However, due to the complexities of some trade topics, the negotiations required an additional year and included 14 negotiating rounds. Following the completion of these rounds, USTR announced on December 11, 2002, that an agreement had been reached. On June 6, 2003, USTR and the Chilean Foreign Minister signed the agreement. USTR then submitted draft FTA implementing bills to the Congress on July 15, 2003. The House of Representatives and the Senate passed the U.S.-Chile Free Trade Implementation Act on July 24 and July 31, 2003, respectively. President Bush signed the act on September 3, 2003. USTR expects the FTA to be implemented on or after January 1, 2004.

The FTA is comprehensive in its treatment of industrial and agricultural products and, according to USTR, provides a template that demonstrates to other FTA partners the U.S.'s high expectations with regard to the scope of FTAs. For example, the negotiations encompassed trade in all goods, with approximately 85 percent of U.S.-Chilean trade in industrial and commercial goods becoming duty-free upon the agreement's implementation. In addition, 75 percent of trade in agricultural products will become duty-free during the first 4 years following implementation. The FTA will also increase each country's access to a wide range of the other's services markets.

Some Members of Congress and certain nongovernmental organizations have expressed concern about the use of the U.S.-Chile FTA as a model for negotiations with other FTA partners, particularly with regard to the agreement's provisions concerning labor standards and the temporary entry of professionals. For example, certain Members and labor interests have argued that the FTA's labor provisions may be adequate for countries, such as Chile, that maintain stringent labor standards but may not be as appropriate for other countries that have not maintained or enforced strong labor laws.
In addition, certain Members have raised concerns with regard to the FTA's provisions facilitating the entry of professionals, stating that such provisions touch upon immigration laws that are within the purview of the Congress and should not be amended through trade agreements.

Successive Chilean governments have pursued trade liberalization strategies and export-oriented development policies, resulting in FTAs with Canada in 1997; Mexico in 1999; Central America and the European Union in 2002; and South Korea in 2003. In addition, Chile signed economic complementation agreements with Argentina in 1992; Venezuela, Colombia, and Bolivia in 1993; Ecuador in 1994; and Peru in 1998. Chile also enacted an association agreement with the member countries of the Southern Common Market in October 1996. Finally, Chile joined APEC in 1994 to boost commercial ties to Asian markets and is currently involved in negotiations for an FTAA in the Western Hemisphere.

The Dominican Republic is the largest economy in the Caribbean Basin region. The trading relationship between the United States and the Dominican Republic has been shaped by the Caribbean Basin Initiative, a series of U.S. laws and programs beginning in 1983 that established unilateral U.S. trade preferences for goods from the Dominican Republic and 23 other countries in the region. In October 2002, the two countries held their first meeting under the U.S.-Dominican Republic Trade and Investment Council to deepen trade relations. When the United States began pursuing an FTA with the five Central American countries in 2002, the Dominican Republic expressed concern that it would suffer adverse economic consequences if it were not also included in the agreement. However, the United States did not support the request, in part because it did not believe the Dominican Republic had exhibited sufficient commitment to negotiating and implementing a comprehensive FTA. In response, the Dominican government took steps to address some problematic issues and aligned itself more closely with the United States in multilateral trade forums. According to USTR officials, the Dominican Republic was the first FTA partner selected under the new interagency process established in May 2003.

USTR has emphasized several reasons for the selection of the Dominican Republic as an FTA partner. First, according to USTR, an FTA with the Dominican Republic would help support the broader U.S. trade strategy of competitive liberalization because the Dominican Republic would continue to uphold U.S. positions in the WTO and FTAA negotiations. Second, the FTA could bring economic and commercial benefits to the United States by increasing market access and creating more jobs. The Dominican Republic is the largest U.S. trading partner in the Caribbean, and USTR has described the country as an economic engine in the region. The combined markets of the Dominican Republic and the CAFTA countries would be larger than Brazil's and would together constitute the second-largest U.S. trading partner in Latin America. Third, the Dominican Republic was selected because the FTA would support U.S. efforts to strengthen democracy and the rule of law. For example, the United States plans to push for the inclusion of strong anticorruption and transparency requirements in the agreement.
Fourth, the Congress has instructed the executive branch through the Caribbean Basin Initiative to enter into mutually advantageous FTAs with countries included in this initiative. Fifth, there appears to be broad bipartisan support in the Congress for this FTA. Sixth, the Dominican Republic has made clear progress in terms of its readiness to negotiate an FTA with the United States, according to USTR. For example, the Dominican government familiarized itself with the U.S.-Chile FTA and improved its protections of intellectual property rights, including satellite broadcast and antipiracy provisions, in response to U.S. concerns. According to USTR, there is a clear willingness at the highest levels of the Dominican government to meet U.S. requirements for FTA partners. Seventh, there is strong support for an FTA among U.S. industry and agricultural exporters, including such groups as the U.S. Chamber of Commerce.

The Dominican President met with President Bush in July 2002 to request an FTA with the United States. In a joint statement issued at a March 2003 meeting of the U.S.-Dominican Republic Trade and Investment Council, the United States acknowledged the steps that the Dominican Republic had taken so far to improve its trade policy and stated its willingness to consider adding the Dominican Republic to CAFTA. On August 4, 2003, following a meeting with the Congressional Oversight Group, the Trade Representative formally notified the Congress of the executive branch's intent to initiate FTA negotiations with the Dominican Republic. The target date for starting negotiations is January 2004, and USTR hopes to conclude them in March 2004.

USTR plans to integrate the Dominican Republic into the FTA it is already negotiating with the five Central American countries. Officials will propose that the Dominican Republic accede to the framework of CAFTA as it is being discussed, after which the talks will focus on market access issues. USTR hopes to present the Congress with one agreement covering the CAFTA countries and the Dominican Republic. Given the short time frame, integrating the Dominican Republic may be challenging. In fall 2003, USTR is to consult with the Dominicans about the Chile and Singapore FTAs to explore the extent of Dominican support for adopting similar provisions. Other concerns involve the State Department's identification of the Dominican Republic as a country that does not fully comply with minimum standards regarding trafficking in persons. The Dominican Republic has FTAs with the Caribbean Community (CARICOM) and the Central American countries.

Morocco is a U.S. ally in the war against terrorism and a long-time democratic partner in the Arab world. The U.S.-Morocco Bilateral Investment Treaty, signed in 1991, provided protections to U.S. investors in Morocco. In 1995, the United States signed a trade and investment framework agreement with Morocco to promote freer trade, increased investment, and stronger economic ties between the two countries. Moreover, the 2001 "open skies" agreement between the United States and Morocco supported increased air passenger and cargo links between the two countries. According to USTR, Moroccan supporters of an FTA with the United States cited the benefits that Jordan attained after it signed an FTA in 2001 as a reason for desiring a U.S.-Morocco FTA.

USTR emphasized several reasons for selecting Morocco as an FTA partner.
First, USTR officials noted that a trade agreement with Morocco would further the executive branch's goal of promoting openness, tolerance, and economic growth across the Muslim world. Second, Morocco has been a staunch ally in the war against terrorism. Third, the agreement would ensure stronger Moroccan support for U.S. positions in the WTO negotiations. Fourth, according to USTR, an FTA with the United States would enable Morocco to strengthen its economic and political reforms, such as its recent program to liberalize and privatize key sectors, and help promote sustainable development and environmental protection. The FTA would emphasize transparency, which would help make Morocco's government institutions more accountable. Fifth, the United States is expected to benefit economically from an FTA with Morocco because the agreement would eliminate tariffs and other unjustified barriers to trade between the two countries. Morocco currently taxes U.S. products at an average rate of 20 percent, while the United States imposes only a 4 percent tariff on Moroccan products. A U.S.-Morocco FTA would also help protect U.S. investments in Morocco and level the playing field with the European Union, with which Morocco has an association agreement. There are also growth prospects for U.S. products and services in sectors such as energy and tourism. Finally, USTR officials noted that Moroccan negotiators were well prepared to undertake FTA negotiations with the United States because they had studied the U.S.-Jordan FTA. On April 23, 2002, President Bush and the Moroccan King announced that their two countries would seek an FTA. USTR notified the Congress of its intent to negotiate an FTA with Morocco on October 1, 2002. On November 21, 2002, USTR convened a public hearing on the U.S.-Morocco FTA. Negotiations started on January 21, 2003. On July 22, 2003, four U.S. legislators announced the creation of the Moroccan Caucus, whose purpose is to support increased trade and stronger ties between the United States and Morocco. Because the December 2003 target date for completing negotiations was not met, negotiations will continue in 2004. The executive branch views the U.S.-Morocco FTA as key to underpinning the President's broader Middle East trade strategy. The agreement builds upon the FTAs with Jordan and Israel and might serve as a model for other North African and Middle Eastern countries interested in increased trade. U.S. executive branch officials hope that Morocco will become a hub for subregional integration and in turn serve as one of several subregional centers that could be built into a MEFTA. The U.S. Agency for International Development will provide assistance for trade capacity-building programs to help Morocco meet the obligations involved in signing and implementing an FTA with the United States. The United States will also provide technical assistance in areas, such as agricultural sector reform, that are likely to be sensitive. U.S. assistance will also focus on civil society and business groups in order to strengthen public input to the negotiating process and maximize the benefits of an FTA for Morocco. The U.S. Agency for International Development estimates that these activities will cost between $40 million and $48 million over 5 years. Morocco may face complex decisions in its agricultural sector, which employs 40 percent of the country's workforce. Morocco signed the Euro-Mediterranean Association Agreement with the European Union in 1996.
As part of the Barcelona Process, which envisions a free trade zone stretching across Europe and North Africa by 2010, Morocco has signed FTAs with several other North African countries. According to USTR, agriculture was generally excluded from the association agreement with the European Union, and U.S. exporters could gain significant advantages under an FTA with Morocco. Singapore has been a long-time proponent of trade liberalization. A U.S. trade official noted that the announcement of the intention to negotiate a U.S.-Singapore FTA at the APEC conference in November 2000 was unexpected; the selection, however, was based on the Clinton administration's interest in completing an FTA with a relatively large trading partner that maintained an open economy. In addition, because Singapore's economy did not include many sectors sensitive to U.S. producers, the Clinton administration hoped to conclude the FTA quickly, while establishing a model for future FTAs. The negotiation of a U.S.-Singapore FTA in 2000 may have been motivated by various factors. First, an FTA with Singapore furthered the Clinton administration's emphasis on access to big emerging markets. The year negotiations began, Singapore was the 10th largest U.S. trading partner, and the value of U.S.-Singapore trade had doubled since the early 1990s, according to Commerce. In addition, many U.S. corporations invest in Singapore as a regional base for exports and production, making the United States the largest foreign investor in Singapore. Second, an FTA with Singapore is the first such agreement between the United States and an Asian country, and it offered an opportunity to strengthen U.S. relations with a region experiencing economic integration and expanding trade. For example, as Singapore has undertaken efforts to liberalize trade and attract multinational corporations, USTR noted that this FTA may serve as a foundation for the Enterprise for ASEAN Initiative. Third, both countries maintain mutual security interests, and since 1992 the U.S. military has had access to facilities in Singapore, which facilitates military deployments to strategic locations. In addition, Singapore has supported the U.S. military's continued presence and opposes any ASEAN defense arrangements that might displace U.S. armed forces from Asia. Fourth, the Congress and the U.S. business community undertook efforts to support an FTA with Singapore. For example, before the negotiations, legislation was introduced in the Congress that would have authorized the President to enter into an FTA with Singapore and would have provided for expedited congressional consideration of the agreement. Business support included a 1999 visit to Singapore by 22 U.S. business executives to discuss with the Singaporean Prime Minister the possibility of establishing an FTA and strengthening U.S.-Singapore ties. President Clinton and the Prime Minister of Singapore announced an agreement to negotiate an FTA during the APEC conference in November 2000. Negotiations then began under the Clinton administration in December 2000 and concluded under the Bush administration in November 2002. Following 11 rounds, USTR announced on January 15, 2003, that agreement had been reached; on January 30, 2003, the executive branch notified the Congress of its intent to sign the FTA. President Bush and the Singaporean Prime Minister signed the agreement on May 6, 2003.
USTR sent the draft FTA implementing legislation to the Congress in June 2003, and the House and Senate passed the legislation on July 24 and July 31, 2003, respectively. President Bush signed the FTA implementing legislation on September 3, 2003. January 1, 2004, is the scheduled date for the FTA's implementation. Before the FTA negotiations, the United States and Singapore had signed a Trade and Investment Framework Agreement, and 99 percent of U.S. exports already entered Singapore duty-free. In addition, both countries have maintained relatively open investment regimes. Thus, the FTA is expected to have relatively little impact on U.S. exports, and the elimination of nontariff barriers will provide the majority of benefits. However, USTR has commented that the FTA serves as a model for future FTAs because of its comprehensive scope and its inclusion of commitments not covered in earlier FTAs. For example, according to USTR officials, the text of the U.S.-Singapore FTA has served as a template to demonstrate to future FTA partners the comprehensive scope that the United States expects in FTAs. Certain Members of Congress and some labor and environmental groups have expressed concern over (1) the possible impact of the U.S.-Singapore FTA and (2) the use of the FTA as a template for other agreements. Specific concerns include the potential threat to U.S. producers in import-competing sectors, such as U.S. manufacturers of electronic equipment and other machinery, and the possible negative environmental effects, such as increased pollution from industrialization. In addition, certain Members have also expressed concern about some of the agreement's provisions, including those relating to the temporary entry of professionals, which they say impinge on U.S. immigration law without congressional input, and the agreement's Integrated Sourcing Initiative, which some Members claimed expands trade benefits under the U.S.-Singapore FTA to territories outside of Singapore even though these territories have not assumed the key obligations that the Congress has insisted be included in FTAs. Singapore is party to many preferential trade agreements, most of them only recently implemented. For example, while Singapore has been a member of the ASEAN Free Trade Area since 1992, its FTA with New Zealand has been in force only since January 2001. In addition, in January 2002, Singapore concluded an FTA with Japan, which excludes agricultural products; effective January 2003, Singapore implemented an FTA with the European Free Trade Association. In February 2003, Singapore signed an FTA with Australia and has been negotiating FTAs with Mexico and Canada since 2000 and 2001, respectively. In addition, a study group was established in November 2002 to explore a possible FTA between Singapore and South Korea. Since the late 1980s, the countries of Central America have been moving from civil conflict toward peace and democracy. The U.S.-Central American trading relationship has been shaped by the Caribbean Basin Initiative, which promotes economic growth in the region through a series of unilateral U.S. trade preferences for 24 countries. President Clinton stressed the commitment of the United States to expanding trade between the United States and Central America at a 1997 summit with leaders from Central America and the Dominican Republic. President Bush has continued the push for increased free trade with Central America.
USTR emphasized several reasons why the CAFTA countries (Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua) were selected as FTA partners. First, CAFTA would help lock in and broaden the economic and political reforms made in these countries throughout the 1990s. For example, elements of the FTA that require increased transparency could help counter corruption and support government accountability in the CAFTA countries. Second, pursuing an FTA with the CAFTA countries would complement U.S. goals in the FTAA and the WTO, particularly given the support of CAFTA countries for U.S. negotiating positions. The agreement would also support the ongoing economic integration of the region. Third, an FTA would enable the United States to address market access barriers in the CAFTA countries and thus promote U.S. exports to the region and increase U.S. access to more affordable goods. Under the Caribbean Basin Initiative, U.S. tariffs on Central American goods are already low, with 74 percent of CAFTA country imports entering the United States duty-free in 2002. An FTA would enable the United States and the CAFTA countries to have reciprocal tariff levels and would remove the requirement that Caribbean Basin Initiative preferences be reviewed every year. A fourth reason for the selection is country readiness. The CAFTA countries are familiar with U.S. approaches to trade because they concluded a NAFTA-like agreement with Mexico in 2000. Fifth, the Congress has instructed the executive branch through the Caribbean Basin Initiative to enter into mutually advantageous FTAs with Central American countries. Finally, the U.S. business community is interested in the potential gains it could see from CAFTA. Some 40 percent of total goods imported by Latin America come from the United States, making the region an important market for some U.S. sectors. In September 2001, the Bush administration held talks on free trade with the CAFTA countries. In January 2002, President Bush announced that the United States would explore an FTA with these countries. Starting in February 2002, USTR held seven workshops with the CAFTA countries to ensure they would be able to develop and implement an FTA with the United States. In October 2002, following a meeting of the Congressional Oversight Group, President Bush formally notified the Congress of his intention to begin FTA negotiations with the CAFTA countries. USTR convened a public hearing on CAFTA in November 2002. Working-level negotiations started in January 2003 and concluded in December 2003, without Costa Rica. The United States hopes to sign the agreement—which could include a component with the Dominican Republic—by spring 2004. There are five negotiating groups for the CAFTA negotiations. The decision to establish only five negotiating groups reflects the CAFTA countries' interest in consolidating the negotiations, given their limited negotiating resources. In addition to these five working groups, there is also a nonnegotiating, multiagency effort responsible for trade capacity-building. This capacity-building effort includes projects to increase citizen access to trade negotiations, support the negotiating teams, strengthen food safety inspection systems, and enhance the implementation of labor laws. As part of these efforts, each country identified its needs in a National Trade Capacity Building Strategy. Other agencies involved in trade capacity-building include the U.S.
Agency for International Development and the Inter-American Development Bank. The executive branch made a $47 million budget request for U.S. capacity-building assistance in the region in 2003. Some civil society groups and Members of Congress are concerned that the CAFTA agreement will not adequately address their labor and environmental concerns in the CAFTA countries. There is concern that USTR may support language like that of the U.S.-Chile FTA, which calls for countries to enforce their own domestic labor laws. Some civil society groups and Members believe this approach is not appropriate for the CAFTA countries because their labor laws are not as stringent as Chile's laws. Similarly, some civil society groups claim that the environmental commitments stemming from the FTA may not build upon existing programs or may fail to preclude investor lawsuits that could undermine environmental laws. Finally, there is concern that there has not been a sufficient mechanism for public input. Market access for agricultural goods and textiles is another potential area of contention. Two Members have expressed concern that the CAFTA countries are reluctant to lower tariffs on U.S. agricultural products. The U.S. sugar industry and some U.S. textile and apparel producers have also expressed concern about heightened competition from CAFTA suppliers. The CAFTA countries are members of the Central American Common Market. In addition, these countries have negotiated more than 20 FTAs with such countries as Mexico, Canada, and several South American countries. The Southern African Customs Union (SACU), which comprises Botswana, Lesotho, Namibia, South Africa, and Swaziland, accounted for almost one-half of the gross domestic product in sub-Saharan Africa and for $2.5 billion in U.S. exports to the region in 2002. Total two-way trade between the United States and SACU was more than $7 billion that year. South Africa has the largest economy among the SACU countries, and the United States and South Africa have had a trade and investment framework agreement since 1999. The 2000 African Growth and Opportunity Act (AGOA) declares that FTAs should be negotiated with sub-Saharan African countries to serve as catalysts for trade and for U.S. private-sector investment in the region. As a result, by moving from one-way trade preferences to a reciprocal FTA with SACU, the United States expects to build on the success of AGOA and to deepen U.S. political and economic ties to sub-Saharan Africa. The United States also hopes to lend momentum to U.S. development efforts in the region by encouraging greater foreign direct investment and promoting regional integration and economic growth. USTR noted several reasons why the SACU countries were selected as FTA partners. For instance, in pursuing an FTA with SACU, the executive branch responded to Congress's direction, as expressed in AGOA, to negotiate FTAs with sub-Saharan countries. USTR emphasized that the SACU countries are ready, individually and collectively, to be free trade partners. An FTA with the SACU countries would strengthen growing bilateral commercial ties between the United States and these countries and address barriers in these countries to U.S. exports. These barriers include high tariffs on certain goods, overly restrictive product licensing measures, inadequate protection of intellectual property rights, and restrictions the SACU governments impose that make it difficult for U.S. service firms to do business in these countries.
An FTA would offer an opportunity to improve southern Africa's commercial competitiveness and to better position the region for success in the U.S. market and the global economy. In addition, an FTA would help the SACU countries attract much-needed new foreign direct investment because international investors prefer access to a large and integrated market. An FTA might also level the playing field in areas where U.S. exporters are disadvantaged by the European Union's FTA with South Africa. Finally, this FTA would reinforce the economic reforms that have taken place in the SACU countries and might encourage additional progress where needed. In November 2002, USTR notified the Congress that the United States intended to enter into FTA negotiations with SACU at least 90 days later. The United States and SACU intend to complete the negotiations by December 2004. A U.S.-SACU FTA is likely to be comprehensive because the governments have announced broad negotiating objectives that cover agriculture, industry, and services issues. The United States is committed to providing the technical assistance necessary for SACU countries to assume the responsibilities of full partnership and to share in the benefits of free trade. The United States and SACU have established a special cooperative group on trade capacity-building specifically for these negotiations, with $2 million in initial funding from the U.S. Agency for International Development. This group is to meet regularly during the negotiations to identify needs and swiftly direct technical assistance resources to help SACU countries better prepare for and participate in negotiations, implement agreed-upon commitments, and take advantage of free trade. Several groups representing U.S. retailers, food distributors, and metal importers have supported the reduction of U.S. tariffs on SACU goods. Groups representing the service and recycled-clothing industries have favored removing tariff and nontariff barriers in the SACU market. However, other groups have opposed the additional opening of U.S. markets to SACU goods. Agriculture, steel, and the textile and apparel industries are expected to monitor negotiations closely. The SACU countries are members of the WTO. South Africa has had an FTA with the European Union since 2000.
The following are GAO's comments on the U.S. Trade Representative's letter dated December 3, 2003.
1. As the Trade Representative states, if the 43 percent of U.S. trade that is accounted for by the EU-25, Japan, Korea, and China is excluded, then current and announced FTA negotiations account for 69 percent (according to our calculation) of the remainder of total U.S. trade. However, U.S. trade with existing FTA partners (Canada, Chile, Jordan, Mexico, Israel, and Singapore) accounts for the majority of this. The trade data can be segmented in several ways, but the data show that trade partners with which the United States has begun or has announced FTA negotiations account for $178 billion in two-way trade with the United States, or about 8 percent of the $2.3 trillion in total U.S. trade.
2. We believe that, given its admittedly limited resources, USTR needs to better manage its staffing and funds to implement its growing and complex trade negotiating agenda. As discussed in this report, USTR's main strategy for undertaking multiple FTA negotiations appears to be working on one FTA per region at a time. Assistant USTRs in four regional offices lead FTA negotiations in each of four regions.
With the announcement of three new FTA negotiations—the Dominican Republic, the Andean countries, and Panama—in Latin America alone, it is not clear how USTR will be able to meet its new and ongoing negotiating demands in a timely fashion. We have noted in this report that one factor that constrains negotiations is a limited number of regional and functional specialists. To address these challenges, USTR would do well to develop a resource strategy across its entire negotiating agenda that is based on solid data and planning.
3. While we appreciate USTR's efforts in pursuing intensive trade negotiations in an often unpredictable international environment, this situation makes it all the more important to base staffing and resource decisions on valid and reliable data and planning. Relying on informal, ad hoc decision making increases risk and reduces the chance that the agency will accomplish its goals. The human capital model that we developed calls for organizations, regardless of size, to use solid data to determine the current and future human capital required to support their mission and goals.
4. Like other federal agencies, USTR is responsible for standard accountability procedures to manage its programs and federal funds. Our recommendation calls for a result, not specific procedures or output measures. Since its own and other agencies' expert staff are the most substantial resources for FTA negotiations, improving upon the present lack of systematic data would better position USTR and other agencies to make decisions that involve staffing trade-offs among competing priorities. In addition, travel is an important resource component and must be programmed in advance. While we encourage the steps that USTR has already taken to make improvements, these efforts are acknowledged in this report and are not sufficient to address our concerns about forward planning.
In addition to those named above, Martin De Alteriis, Francisco Enriquez, Bradley Hunt, Rona Mendelsohn, Juan Tapia-Videla, Timothy Wedding, and Eve Weisberg made major contributions to this report.
Free trade agreements (FTA) involve trade liberalization between the United States and selected countries or regions and are also expected to provide economic and other benefits. GAO was asked to review how potential FTA partners are selected, in view of the increased number of FTAs and their growing importance to U.S. policy. Specifically, GAO (1) provided information about the factors influencing the selection of FTA partners, (2) analyzed the interagency process for selecting FTA partners, and (3) assessed how the executive branch makes decisions about the availability and allocation of resources to FTAs. The Trade Representative used 13 factors in selecting four potential FTA partners in 2002 (Australia; the Central American Free Trade Area, a subregional group of five Central American countries; the Southern African Customs Union of five countries; and Morocco). Subsequently, selected executive branch agencies decided to use six broad factors: country readiness, economic/commercial benefit, benefits to the broader trade liberalization strategy, compatibility with U.S. interests, congressional/private-sector support, and U.S. government resource constraints. These decisions are not mechanical, and the factors cited most often regarding the selected FTA partners primarily reflect U.S. trade strategy, foreign policy, and foreign economic development goals. The interagency process for selecting FTA partners now involves four interagency groups that use decision papers to assess potential FTA partners and make recommendations that eventually go to the President. This new process is more systematic and inclusive than the process previously used. The Office of the U.S. Trade Representative (USTR) reports that it routinely considers the Congress's views in making selections. Decisions about FTA partners are made with little systematic data or planning regarding trade-offs with other trade priorities, even though FTAs are resource intensive. USTR staff and travel funds are heavily committed to FTAs, and USTR relies on specialists at other agencies as well. As more FTAs are contemplated, existing mechanisms may prove inadequate to the task of aggressively pursuing a bilateral FTA agenda while remaining engaged in regional and multilateral forums.
The 1996 Military Housing Privatization Initiative allows private-sector financing, ownership, operation, and maintenance of military family and unmarried junior servicemember (barracks) housing. Under the program, the department can provide direct loans, loan guarantees, and other arrangements to encourage private developers to renovate existing housing or construct and operate housing either on or off military installations. Servicemembers, in turn, may use their housing allowance to pay rent and utilities to live in the privatized housing. Because the program represents a new way of doing business for both the military and the private sector, DOD has relied on consultants for a variety of advisory and assistance services. To complete a privatization agreement, many financial, budgetary, and other issues must be resolved to the satisfaction of the government, developers, and private lenders before the deal can be closed. Further, each privatization agreement is different and involves unique issues. According to DOD officials, consultants provide the necessary expertise and assistance to help resolve these issues. Initially, DOD established the Housing Revitalization Support Office in OSD to facilitate implementation of the military housing privatization program. This office established the financial and legal framework for the new initiative and provided assistance to the services as they began to consider housing privatization. Initial progress in implementing the program was slow, and in 1998 DOD shifted primary responsibility for implementing the program to the individual services. With this change, the Housing Revitalization Support Office was eliminated, and housing privatization oversight responsibility was assigned to a newly created office in OSD—now known as the Housing and Competitive Sourcing Office. This office establishes DOD policy for the program and monitors the services' implementation of the program. Concerned about the lack of progress with the military's housing privatization program, Congress in 1998 required OSD to begin reporting quarterly on the status of all privatization projects for which funds had been appropriated. In addition, in 2000, Congress required that DOD report information quarterly on expenditures for consultants used by the services to implement the program. DOD now includes this information in its Military Housing Privatization Initiative Housing Privatization Report to the Congress. The report lists each privatization project, identifies the number of units to be privatized, shows the project milestones, and includes the cumulative amount spent on consultants by project and service. Military construction appropriations fund the military housing privatization program, including privatization support and consultant expenditures. Privatization support includes costs for consultants, federal civilian salaries, and training and travel activities. Some of the services also include costs for environmental assessments; land boundary surveys; and supervision, inspection, and overhead construction activities. Consultant costs generally include costs for advisory and assistance activities, such as individual project development, solicitation development and preparation, pre-award evaluations of project proposals, and financial and real estate analysis.
Although DOD reported to Congress that the services plan to privatize most of their family housing units by the end of fiscal year 2005, these reports do not include the number of privatized units that have been renovated or newly constructed. Such data would show the program's progress in creating adequate family housing and the status of improvements to the living conditions of servicemembers and their families. These renovation and construction numbers should accelerate over time. As of March 2003, the military services had signed contracts privatizing about 28,000 family housing units and plan to privatize a total of about 140,000 units by the end of fiscal year 2005. By fiscal year 2007, instead of by 2010 as originally scheduled, the services plan to privatize 72 percent of their total family housing inventory, or about 183,000 units, as shown in figure 1. As a result of these privatization contracts, as of March 2003 the services had constructed 4,396 new housing units and renovated 3,184 existing units, a total of 7,580 units (see table 1). We recognize it can take developers several years to renovate existing housing units or construct new ones after the military housing is privatized. However, data regarding this process, although maintained at the installation level, are not collectively tracked and reported to Congress by OSD. Thus, decision makers do not have complete data to fully assess the housing privatization program's progress. Furthermore, as the privatization program progresses, it will become increasingly important to have complete data on the status of actual renovation and new construction of privatized housing units in order to determine how quickly the program is creating adequate family housing and improving the living conditions of servicemembers and their families. According to the services' budget data, costs for consultants are less than half of the services' total privatization support costs, both actual and projected. For example, for fiscal year 2002, consultant costs were about $24 million, or about 42 percent of the services' total support costs of about $57 million for housing privatization efforts. Furthermore, the services incur other privatization support costs besides the costs for consultants, such as federal salaries, training, and travel. In addition, some services include the cost of environmental assessments and land boundary surveys in their privatization support costs. Service officials said that privatization support costs will decline as the services sign the contracts to privatize most of their family housing units and the need for consultants diminishes. While these costs are expected to decline, other assistance costs for portfolio management services for the privatization program are expected to become a key component of the remaining support costs as more projects are completed. As figure 2 shows, the services project sharp declines in privatization support and consultant costs after fiscal year 2004. The military services are not consistent in their definitions for privatization support and consultant costs. The differences in the services' definitions for privatization support costs result in inconsistent budgeting for these costs. Also, the differences in the services' definitions for consultant costs result in inconsistent reporting of consultant costs in the department's quarterly housing privatization report to Congress. Furthermore, OSD does not report its own program consultant costs in the quarterly report.
Since OSD had not defined privatization support costs when it gave the services operational responsibility for the program in 1998, the services individually defined them, resulting in inconsistencies in the types of costs included in the services' budgeting for privatization support. The Navy, for example, does not include the costs of environmental assessments and land boundary surveys as privatization support costs, while the Army and the Air Force do. Similarly, the Army and the Navy do not include the costs for supervision, inspection, and overhead construction activities as privatization support costs, while the Air Force does. Without a common definition, these accounting differences increase the variance in the services' reported costs and make it difficult for DOD and Congress to accurately determine total privatization support costs across the services. According to officials in the Office of the Under Secretary of Defense (Comptroller), DOD does not have written budget guidance defining what types of privatization support costs should be included in the services' budget estimates. Thus, the services account for housing privatization support costs differently. For example, according to Navy officials, the Navy's privatization support budget account does not include costs for activities that the other services' accounts do, such as environmental assessments and land boundary surveys. As such, the Navy's privatization support expenses may not be as low as they appear in its budget. The Navy has combined the management of the family housing program with its real estate, acquisition, and construction contracting expertise in the Naval Facilities Engineering Command—the command responsible for military construction. Thus, the costs for environmental assessments, land boundary surveys, and supervision, inspection, and overhead construction activities are part of how the command conducts its mission and are not captured in the Navy's privatization support budget. According to Navy officials, these activities are conducted and funded within the command, and no budget request distinction is made as to whether the costs are for a housing privatization project or a traditional military construction project. For example, the Navy's estimated $5 million in environmental assessment costs for its housing privatization efforts through 2008 will not be reflected in its privatization support account, although such costs are included in the other services' privatization support accounts. Similarly, the Army's expenses for construction supervision, inspection, and overhead activities are part of the developers' costs, and the Army does not reflect these costs in its privatization support budget, whereas the Air Force does. DOD officials said that these budget inconsistencies have created a problem for the services: Congress has reduced the Army's and the Air Force's privatization support budgets because of the perception that their budgets are unreasonably high when compared with the Navy's. Because OSD had not defined the types of costs to be included in determining consultant costs, the services define them differently, resulting in inconsistent reporting of consulting expenditures in the department's quarterly housing privatization report to Congress. Specifically, the services are beginning to contract for assistance in managing the portfolio of housing privatization projects to better ensure long-term program success.
The Air Force views portfolio management as a contractor cost and, as such, does not include this expense in its consultant cost data submitted to OSD for the quarterly housing privatization report. In contrast, the Army, the Navy, and the Marine Corps view portfolio management as a consultant cost, and this expense is included in the report to Congress. As a result, OSD is providing inconsistent service data regarding consultant costs in the department's quarterly housing privatization report to Congress. Furthermore, as costs for portfolio management are expected to become a key component of remaining support costs as the services privatize more housing, the inconsistent cost reporting will become more pronounced in the future. The organizational placement of the privatization program and the number of projects per service are also important in explaining inconsistencies and variances in consultant costs among the services. For instance, the Navy's consulting costs are lower than the other services' because the Navy has combined the management of its program with its real estate, acquisition, and construction contracting expertise in the Naval Facilities Engineering Command. According to Navy and OSD officials, this arrangement decreases the Navy's need for consultants. In contrast, according to Air Force officials, the Air Force's consulting costs are higher than the other services' when its contractor's portfolio management costs are included because the Air Force has more privatization projects needing consultant assistance and advice. Currently, the Air Force plans 53 family housing privatization projects, whereas the Army and the Navy plan 27 and 37 projects, respectively. The services reported in the quarterly housing privatization report to Congress that they had spent about $73 million, in total, on consultants associated with their housing privatization efforts as of March 31, 2003 (see table 2). The extent of their expenditures varied, with the Army expending $34 million, more than twice the amount expended by the Navy and the Marine Corps. OSD does not, and is not required to, include its own costs for consultants associated with its implementation of the military housing privatization program in the quarterly report to Congress. Officials within OSD's Housing and Competitive Sourcing Office stated that OSD has not reported about $10 million in consultant costs since the beginning of the program in 1996. These consultant costs were not in direct support of a particular installation, and most occurred when OSD had centralized control over the program. With the transfer of operational responsibility for the program to the individual services in 1998, OSD's consultant costs have decreased significantly, currently averaging about $1 million a year. These consultant costs are mostly to assist OSD in designing program evaluation criteria and to help with budget scoring requirements. Although housing privatization fees paid to individual consultants vary among the services, several factors limit an evaluation and comparison of these fees. Such factors include differences in the labor categories, hours, and skills mix that each consulting firm can use to accomplish the work specified by the services, as described below.
Labor categories. Despite some commonalities (e.g., program manager and financial analyst), the services for the most part list different labor categories and staff positions in their consultant contracts.
The Air Force, for example, identified 22 labor categories for each of its five consultants, while the Navy and the Army listed 5 and 7 labor categories, respectively.
Labor and hour mixes. Each consulting firm generally emphasizes a different mix of staff and anticipated number of labor hours, depending on the needed work. Thus, contracting with a consultant that charges lower hourly fees will not necessarily result in the lowest total cost, because different firms use different mixes of staff with varying hourly pay rates and charge different numbers of hours to complete the work (see the illustrative sketch following this discussion). Air Force data, for example, showed that one firm, which charges higher average hourly fees, planned to dedicate fewer labor hours to a proposed task than another firm, which charges a lower average hourly fee. The particular mix of staff and labor hours proposed by the two firms led to only a 3 percent cost variance for a proposed project of about $780,000. In addition, Air Force data showed that two firms proposed that their senior managers dedicate considerably fewer hours to the project although charging higher hourly fees, while another firm proposed that its senior managers dedicate considerably more hours to the project but charge significantly lower hourly fees. Thus, a comparison of consultant fees in isolation could create a misleading assessment.
Scope of work. Different scopes of work within the various housing privatization projects may generate different labor mixes or entirely new labor categories for a particular consultant, making comparison difficult. For example, the Air Force uses two different sets of labor categories for the same firm—one for the portfolio management work and another, which is slightly different, for the privatization support work.
Capacities. Consulting firms have different capacities—some are small businesses while others are global enterprises—and each firm has different capabilities and expertise. According to Air Force data, for example, the firms charging the lowest average hourly fee at the managerial level have only six Air Force family housing privatization projects between them. However, Air Force officials told us they believe these firms are small businesses operating at capacity and cannot take on another project, despite having lower fees than some of the other consulting firms.
Even though these factors limit a comparative evaluation of consultant fees, service officials told us they believe that their particular consultant fees are fair and reasonable because they (1) awarded their consultant contracts competitively; (2) examined consulting rates published by the General Services Administration, particularly those in its Management, Organizational, and Business Improvement Services Schedule, to assist in determining if the rates were reasonable; and (3) selected consultants through "best value" determinations. In striving to obtain best value, service officials said that the services select firms offering the most advantageous deal to the government and that cost is only one of several evaluation considerations. Past performance and the capability to perform the proposed work, among other considerations, are evaluated alongside fees in assessing contract awards. As a result, service officials said that they have contracted with firms that provide the best value to the government based on their needs.
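To make the labor-and-hour-mix point concrete, the short sketch below compares two hypothetical task-order proposals. All firm names, labor categories, hourly rates, and hours are invented for illustration; they are not the actual Air Force data discussed above, though they are chosen to land near the roughly $780,000 project and 3 percent variance that GAO describes.

```python
# Minimal sketch with invented numbers (not actual Air Force data): a firm
# with a higher average hourly fee but fewer proposed hours can come within
# a few percent of a firm with lower hourly fees, so comparing hourly fees
# in isolation can mislead.

def task_cost(labor_mix):
    """Total task-order cost: sum over labor categories of hours x hourly rate."""
    return sum(hours * rate for hours, rate in labor_mix.values())

def average_rate(labor_mix):
    """Average hourly fee across all proposed hours."""
    total_hours = sum(hours for hours, _ in labor_mix.values())
    return task_cost(labor_mix) / total_hours

# Each entry: labor category -> (proposed hours, hourly rate in dollars).
firm_a = {  # higher hourly fees, fewer total hours, senior managers used sparingly
    "senior manager":    (300, 260.0),
    "financial analyst": (1800, 170.0),
    "support staff":     (4400, 90.0),
}
firm_b = {  # lower hourly fees, more total hours, senior managers heavily involved
    "senior manager":    (1000, 165.0),
    "financial analyst": (2200, 140.0),
    "support staff":     (4700, 60.0),
}

cost_a, cost_b = task_cost(firm_a), task_cost(firm_b)
print(f"Firm A: ${cost_a:,.0f} at ${average_rate(firm_a):.0f}/hour average")
print(f"Firm B: ${cost_b:,.0f} at ${average_rate(firm_b):.0f}/hour average")
print(f"Cost variance: {abs(cost_a - cost_b) / max(cost_a, cost_b):.1%}")
# Firm A: $780,000 at $120/hour average
# Firm B: $755,000 at $96/hour average
# Cost variance: 3.2%
```

On these invented numbers, a roughly 25 percent gap in average hourly fees shrinks to about a 3 percent gap in total cost once the differing staff and hour mixes are applied.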
The military housing privatization program was established to speed the creation of quality housing for military servicemembers and their families. As such, the Secretary of Defense has directed the military services to increase their use of privatization and eliminate their inadequate housing inventory, moving the privatization completion date up from 2010 to 2007. However, until the number of renovated or newly constructed housing units under privatization is routinely tracked and reported to Congress, it will be difficult to adequately assess the impact of the privatization program. Further, as the program progresses and additional privatized units come under contract, more complete and informative data on the number of privatized housing units that have been renovated or newly constructed will become increasingly important to decision makers. Such data are needed to determine how quickly the privatization program is creating adequate family housing and improving the living conditions of servicemembers and their families. Until OSD provides a common definition of the types of costs to be included in determining privatization support costs, including consultant costs, the military services will continue to budget inconsistently for privatization support costs, and OSD will continue to use inconsistent data from the services to report consultant costs in its quarterly housing privatization report to Congress. Similarly, without an OSD determination of whether portfolio management costs should be included as consultant costs, the services will continue to provide OSD with inconsistent data on consultant costs for its quarterly report to Congress. Furthermore, until OSD includes its own program consultant costs in the department's quarterly housing privatization report, Congress will not have complete knowledge of the total housing privatization consultant costs. Without consistent and complete information, Congress and DOD cannot make the most informed decisions regarding the appropriateness of support and consultant costs requested and expended in support of the military housing privatization program. To illustrate the number of inadequate housing units eliminated and the number of new or renovated units brought on line through the military housing privatization program, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to track the supporting data and report the number of privatized units renovated and newly constructed to the Congress on a periodic basis. To provide for more consistent and complete data on military housing privatization support costs, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller), in consultation with the Housing and Competitive Sourcing Office, to define privatization support costs for the military services. Specifically, this definition should address the differences in how the services consider the costs of environmental assessments; land boundary surveys; and supervision, inspection, and overhead construction activities associated with the housing privatization program.
To provide for more consistent and complete data on privatization consultant costs, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in consultation with the Under Secretary of Defense (Comptroller), to (1) define consultant costs, including a determination of whether portfolio management costs are included, for the military services; and (2) include OSD's own program consultant costs associated with its efforts to privatize military housing in the department's quarterly housing privatization report to Congress. In written comments on a draft of this report, the Director for Housing and Competitive Sourcing within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics agreed with our recommendations, stating that the department is taking or will take steps to implement them. In reference to our recommendation to track supporting data and report on the number of privatized units renovated and newly constructed, DOD concurred, stating that it is essential that project progress be monitored. However, DOD stated that its semiannual Program Evaluation Plan report is a more appropriate vehicle for tracking these data than the quarterly reporting specified in our draft report, and it has initiated steps to use that vehicle. We believe the collection and periodic reporting of these data to the Congress, regardless of the reporting format, will help decision makers better assess the housing privatization program's progress in creating adequate family housing and improving the living conditions of servicemembers and their families. Accordingly, we modified our recommendation to recognize the potential for greater flexibility in reporting. DOD's comments are included in appendix I of this report. We performed our work at the headquarters offices responsible for implementing the privatization program for the Army, the Navy, the Marine Corps, and the Air Force. At each location, we interviewed officials cognizant of the program and reviewed applicable policies, procedures, and documents. We also interviewed officials at the Air Force Center for Environmental Excellence in San Antonio, Texas, which has responsibility for executing Air Force contracts for consultant assistance with the military housing privatization program. We also discussed our analyses with officials of OSD's Housing and Competitive Sourcing Office and the Office of the Under Secretary of Defense (Comptroller). Our analyses mostly covered 1996, the beginning of the military housing privatization program, through 2008, when the services expect to have privatized all of their planned housing. To determine the number of projects and family housing units the services have privatized and project to privatize from program inception to fiscal year 2008, we interviewed service officials and obtained relevant data. We obtained data on the number of projects and units already privatized from OSD's Military Housing Privatization Initiative Housing Privatization Report to Congress. However, because project execution schedules for future projects change regularly and the services told us several future project dates are tentative, we requested the latest estimates of projects and units to be privatized from the services. Army and Air Force officials provided us with their privatization schedules, while Navy officials directed us to their fiscal year 2004 budget request data.
In addition, the services provided data on the number of units newly constructed or renovated as of March 31, 2003, but stated that estimated data were not readily available for fiscal years 2004 through 2008. To identify the portion of privatization support costs used for consultants, we obtained and analyzed budget data from the services for actual and projected amounts covering fiscal years 1996 through 2008. The services identified the activities that they considered to be privatization support costs and consultant costs. We did not validate these recorded budget amounts. To analyze the services' consistency in defining privatization support and consultant costs, we compared budget data provided by the services and noted differences in what they considered privatization support and consultant costs. We met with service officials to discuss those differences and their possible causes. To report data on the services' cumulative expenditures for military housing privatization consultants as of March 31, 2003, we used the department's latest quarterly housing privatization report, dated April 2003. We interviewed OSD and service officials about the reporting requirements for the quarterly housing privatization report and corresponding budget guidance on privatization support and consultant costs. In addition, we met with officials from the Office of the Under Secretary of Defense (Comptroller) to obtain their views on our privatization support and consultant cost analyses. To assess how consultant fees for the military housing privatization program compare among the services, we reviewed and analyzed the services' consultant contracts and individual task orders, noting the hourly fees charged by each consultant. We obtained data from the General Services Administration's Management, Organizational, and Business Improvement Services federal supply schedule and made fee comparisons. We also interviewed service officials to discuss their process for evaluating consultant fees and selecting consultants. Finally, we interviewed officials from the Air Force's Brooks City Base, San Antonio, Texas, and from the Army's Fort Sam Houston, San Antonio, Texas, to discuss DOD's use of consultants in similar privatization activities. In performing this review, we did not validate DOD's reported housing requirements or privatization information. We conducted our work from April 2003 through July 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report is available at no charge on GAO's Web site at www.gao.gov. Please contact me at (202) 512-8412, or my Assistant Director, Mark Little, at (202) 512-4673, if you or your staff have any questions regarding this report. Major contributors to this report were Laura Talbott, Shawn Arbogast, Jason Aquino, Jane Hunt, and R.K. Wild.
In 2000, Congress required the Department of Defense (DOD) to report quarterly on the services' expenditures for consultants in support of the military family housing privatization program. GAO was asked to review the costs of the consultants DOD used to support privatizing housing for servicemembers and their families. This report discusses (1) the number of family housing units the services have privatized, particularly newly constructed or renovated units, and project to privatize by fiscal year 2005; (2) the portion of privatization support costs used for consultants; (3) the services' consistency in defining privatization support and consultant costs; and (4) factors that limit an evaluation of how consultant fees for the military housing initiative compare among the services. Although DOD reported to Congress that the services plan to privatize most of their family housing by fiscal year 2005, DOD's reports do not provide decision makers with the number of privatized units that have been renovated or newly constructed. As of March 2003, the services had contracts privatizing about 28,000 family housing units and planned to privatize 140,000 units by fiscal year 2005. As a result of this privatization, about 7,600 units had been constructed or renovated. It can take developers several years to renovate existing housing or construct new units after the housing is privatized. As the program progresses, it will become increasingly important to have complete data with which to determine how quickly the privatization program is creating adequate family housing. Costs for consultants are less than half of the services' privatization support costs. The services anticipate that privatization support and consultant costs will peak in fiscal year 2004 and then decline as most privatization contracts are signed and the need for consultants diminishes. Remaining support costs will then increasingly reflect the management of the portfolio of privatized housing. The services are not consistent in their definitions for privatization support and consultant costs. The differences in the services' definitions for privatization support costs result in inconsistent budgeting for these costs. Also, the differences in the services' definitions for consultant costs result in inconsistent reporting of consultant costs in the department's quarterly housing privatization report to Congress. Further, the Office of the Secretary of Defense does not report its own program consultant costs in the quarterly report. Several factors, such as differences in the labor categories, hours, and skills mix that each consulting firm can use to accomplish work, limited our evaluation of how consultant fees for the military housing initiative compare among the services. Even though these factors hinder a comparative evaluation of consultant fees, service officials told us they believe that they have contracted with firms that provide the best value to the government based on their needs and that the consultants' fees are fair and reasonable.
ONDCP was established by the Anti-Drug Abuse Act of 1988 to, among other things, enhance national drug control planning and coordination and represent the drug policies of the executive branch before Congress. In this role, the office is responsible for (1) developing a national drug control policy, (2) developing and applying specific goals and performance measurements to evaluate the effectiveness of national drug control policy and National Drug Control Program agencies’ programs, (3) overseeing and coordinating the implementation of the national drug control policy, and (4) assessing and certifying the adequacy of the budget for National Drug Control Programs. ONDCP is required annually to develop the National Drug Control Strategy, which sets forth a plan to reduce illicit drug use through prevention, treatment, and law enforcement programs, and to develop a National Drug Control Program Budget for implementing the strategy. National Drug Control Program agencies follow a detailed process in developing their annual budget submissions for inclusion in the National Drug Control Program Budget, which provides information on the funding that the executive branch requested for drug control to implement the strategy. Agencies submit to ONDCP the portion of their annual budget requests dedicated to drug control, which they prepare as part of their overall budget submission to the Office of Management and Budget for inclusion in the President’s annual budget request. ONDCP reviews the budget requests of the drug control agencies to determine whether the agencies have acceptable methodologies for estimating their drug control budgets and includes those that do in the Drug Control Budget. In FY 2016, the budget covered 38 federal agencies or programs. Resources are requested across agencies under five priorities: substance abuse prevention and substance abuse treatment (which are considered demand-reduction areas) and domestic law enforcement, drug interdiction, and international partnerships (which are considered supply-reduction areas), as shown in figure 1. ONDCP manages and oversees two primary program accounts: the High Intensity Drug Trafficking Areas (HIDTA) Program and the Other Federal Drug Control Programs, such as the Drug-Free Communities (DFC) Support Program. ONDCP previously managed the National Youth Anti-Drug Media Campaign, which last received appropriations in fiscal year 2011. Also, from fiscal year 1991 to fiscal year 2011, ONDCP managed the Counterdrug Technology Assessment Center (CTAC). According to ONDCP, federal drug control spending increased from $21.7 billion in FY 2007 to the approximately $27.5 billion allocated for drug control programs in FY 2017, as shown in figure 2. Spending on supply-reduction programs, such as domestic law enforcement, interdiction, and international programs, increased 16 percent, from $13.3 billion in FY 2007 to $15.4 billion in FY 2017. However, federal spending on demand programs, that is, treatment and prevention, increased at a higher rate over the same period, rising 44 percent, from $8.4 billion in FY 2007 to $12.1 billion in FY 2017. As a result, the proportion of funds spent on demand programs increased from 39 percent of total spending in FY 2007 to 44 percent in FY 2017.
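The percentage changes cited above follow directly from the reported dollar figures. As a quick illustration of the arithmetic (not part of ONDCP's methodology), the Python sketch below reproduces them from the amounts in the text:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change between two funding levels, in percent."""
    return (new - old) / old * 100

# Dollar figures (in billions) cited in the text.
supply_2007, supply_2017 = 13.3, 15.4   # law enforcement, interdiction, international
demand_2007, demand_2017 = 8.4, 12.1    # treatment and prevention

print(f"Supply reduction: +{pct_change(supply_2007, supply_2017):.0f}%")                  # about 16%
print(f"Demand reduction: +{pct_change(demand_2007, demand_2017):.0f}%")                  # about 44%
print(f"Demand share of FY 2017 total: {demand_2017 / (supply_2017 + demand_2017):.0%}")  # about 44%
```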
According to ONDCP’s National Drug Control Budget Fiscal Year 2018 Highlights, the proposed budget supports $1.3 billion in investments authorized by the Comprehensive Addiction and Recovery Act (CARA), the 21st Century Cures Act, and other opioid-specific programs to help address the opioid epidemic, including funding prevention and treatment efforts. Allocated funding for treatment increased in FY 2017 to approximately $10.6 billion, a 7.5 percent increase over FY 2016. Funding for prevention increased slightly in FY 2017 to about $1.5 billion, a 1.4 percent increase from FY 2016. According to its FY 2018 Budget Highlights document, ONDCP considers three main functions to address the drug supply: Domestic Law Enforcement, Interdiction, and International. For Domestic Law Enforcement, ONDCP noted that federal, state, local, and tribal law enforcement agencies play a key role in the Administration’s approach to reducing drug use and its associated consequences. ONDCP also stated that interagency drug task forces, such as the HIDTA program, are critical to leveraging limited resources among agencies. Allocated funding for domestic law enforcement in FY 2017 is approximately $9.3 billion, which is similar to its FY 2016 spending level. According to ONDCP, the United States continues to face a serious challenge from the large-scale smuggling of drugs from abroad, which are distributed to every region of the nation. Interdiction funds support collaborative activities among federal law enforcement agencies, the military, the intelligence community, and international allies to interdict or disrupt shipments of illegal drugs, their precursors, and their illicit proceeds. Allocated funding in support of Interdiction for FY 2017 is approximately $4.6 billion, a decrease of 3.5 percent from FY 2016. International functions focus on collaborative efforts between the U.S. government and its international partners around the globe. According to ONDCP, illicit drug production and trafficking generate huge profits and are responsible for the establishment of criminal networks that are powerful, corrosive forces that destroy the lives of individuals, tear at the social fabric, and weaken the rule of law in affected countries. In FY 2017, approximately $1.5 billion was allocated to international functions, which is similar to the FY 2016 spending level. As we have previously stated, the 2010 National Drug Control Strategy was the inaugural strategy guiding drug policy under the previous Administration. According to ONDCP officials, it sought a comprehensive approach to drug policy, including an emphasis on drug abuse prevention and treatment efforts and the use of evidence-based practices—approaches to prevention or treatment that are based in theory and have undergone scientific evaluation. ONDCP established two overarching policy goals in the 2010 Strategy, (1) curtailing illicit drug consumption and (2) improving public health by reducing the consequences of drug abuse, and seven sub goals under them that delineate specific quantitative outcomes to be achieved by 2015, such as reducing drug-induced deaths by 15 percent. To support the achievement of these two policy goals and seven sub goals (collectively referred to as overall goals), the Strategy included seven strategic objectives and multiple action items under each objective, with lead and participating agencies designated for each action item.
Strategy objectives include, for example, “Strengthen Efforts to Prevent Drug Use in Communities” and “Disrupt Domestic Drug Trafficking and Production.” Subsequent annual Strategies provided updates on the implementation of action items, included new action items intended to help address emerging drug-related problems, and highlighted initiatives and efforts that support the Strategy’s objectives. In March 2013, we reported that ONDCP and the federal agencies had not demonstrated progress toward achieving the Strategy goals and were in the process of implementing a new mechanism to monitor progress. As we reported in May 2016, ONDCP and the federal agencies had made moderate progress toward achieving one goal, limited progress on three goals, and no demonstrated progress on the remaining three goals. For example, we reported that the rate of drug use for young adults aged 18 to 25 had increased since 2009, moving in the opposite direction of the goal. However, we also reported that HIV infections attributable to drug use, one of the Strategy’s sub-measures, had decreased from 2009 to 2014 and had exceeded the Strategy’s established target. In many instances, the data used to assess progress, while the most up to date at the time, were several years old. Based on the most recent data available, although some of the sub-measures, such as decreasing tobacco use by eighth graders, were achieved, none of the seven overall goals in the Strategy had been fully achieved as of July 2017. Table 1 shows the 2010 Strategy goals and progress toward meeting them as of July 2017. Federal drug control agencies made mixed progress but did not fully achieve any of the four overall Strategy goals associated with curtailing illicit drug consumption. For example: Progress was made on the goal to decrease the 30-day prevalence of drug use among 12- to 17-year-olds by 15 percent. The data source for this measure—the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Survey on Drug Use and Health (NSDUH)—indicated that in 2015, 8.8 percent of 12- to 17-year-olds reported having used illicit drugs in the past month. Progress was not made on the goal to decrease the 30-day prevalence of drug use among young adults aged 18 to 25 by 10 percent. Specifically, the reported rate of drug use for young adults was 21.4 percent in 2009 and 22.3 percent in 2015, moving in the opposite direction of the goal. Marijuana remained the drug used by the highest percentage of young adults. According to the 2015 NSDUH, 19.8 percent of young adults reported having used marijuana in the past month. The rate of reported marijuana use for this measure increased by 9 percent from 2009 to 2015. Progress was also mixed on the remaining three overall Strategy goals associated with reducing the consequences of drug use. For example: Progress was not made on the goal to reduce drug-induced deaths by 15 percent. According to the CDC’s National Vital Statistics System, which collects information on all deaths in the United States, 55,403 deaths were from drug-induced causes in 2015, an increase of 41.5 percent compared to 2009 and 66.5 percent more than the 2015 goal. The CDC’s December 30, 2016, Morbidity and Mortality Weekly Report stated that 52,404 of these deaths were from drug overdoses, the majority of which (63 percent) involved opioids. The goal to reduce drug-related morbidity by 15 percent has two sub-measures, and progress had been made on one but not the other.
Specifically, HIV infections attributable to drug use decreased by 29 percent from 2010 to 2015, exceeding the established target. However, the number of emergency room visits for substance use disorders increased by 19 percent from 2009 to 2011. The data source for this measure—SAMHSA’s Drug Abuse Warning Network—indicated that pharmaceuticals alone were involved in 34 percent of these visits and illicit drugs alone were involved in 27 percent of them. According to the 2013 Drug Abuse Warning Network report, the increase in emergency room visits for drug misuse and abuse from 2009 to 2011 was largely driven by a 38 percent increase in visits involving illicit drugs only. To advance the national dialogue on preventing illicit drug use, including preventing individuals from using illicit drugs for the first time, we convened and moderated a diverse panel of health care, education, and law enforcement experts, including from ONDCP, on June 22, 2016. The panel focused on (1) common factors related to illicit drug use; (2) strategies in the education, health care, and law enforcement sectors to prevent illicit drug use; and (3) high priority areas for future action to prevent illicit drug use. Our November 2016 report summarized the themes from the forum. Forum participants identified a number of common factors related to illicit drug use. For example, the participants agreed that first-time illicit drug use typically starts in adolescence and typically involves marijuana; however, prescription pain relievers are increasingly a pathway to illicit drug use. Other common factors include a family history of substance abuse, conflict within the family, and the early onset of anxiety disorders or substance use, among others. Forum participants also noted several strategies available in the education, health care, and law enforcement sectors for preventing illicit drug use: Education. Forum participants championed the use of school- or community-based prevention programs that research has shown to be successful in preventing illicit drug use and other risky behaviors. These programs include Life Skills; Strengthening Families Program: For Parents and Youth 10-14; and Communities That Care. These programs focus generally on combatting a range of risky behaviors, giving participants skills to recognize and manage their emotions, and strengthening family and community ties. Health care. Forum participants identified and discussed three principal health care strategies for preventing illicit drug use: (1) having providers adhere to the CDC’s guideline for prescribing opioids for chronic pain, (2) having providers use prescription drug monitoring programs (PDMP)—state-run electronic databases used to track the prescribing and dispensing of prescriptions for controlled substances—and (3) having primary care providers screen and intervene with patients at risk for illicit drug use. Law enforcement. Forum participants identified four law enforcement strategies for preventing illicit drug use: (1) enforcing laws prohibiting underage consumption of alcohol and tobacco, (2) building trust between law enforcement and local communities, (3) using peers to promote drug-free lifestyles, and (4) closing prescription drug “pill mills”—medical practices that prescribe controlled substances without a legitimate medical purpose—and pursuing other efforts to reduce the supply of illicit drugs.
Forum participants also identified several high priority areas for future action to help prevent illicit drug use, including the misuse of prescription drugs. Some examples include: supporting community coalitions comprising the health care, education, and law enforcement sectors that work in concert to prevent illicit drug use at the local level; consolidating federal funding streams for multiple prevention programs into a single fund used to address the risk factors for a range of unhealthy behaviors, including illicit drug use; increasing the use of prevention programs that research has shown to be effective, such as those that are well designed and deliver persuasive drug prevention messages on a regular basis; identifying and pursuing ways to change perceptions of substance abuse disorders and illicit drug use, such as emphasizing that a substance abuse disorder is a disease of the brain and can be treated like other diseases; supporting drug prevention efforts in primary care settings, such as exploring ways to reimburse providers for conducting preventive drug screenings; and reducing the number of prescriptions issued for opioids. In February 2017, we issued a report on the Drug-Free Communities Support Program (DFC)—a program that ONDCP and SAMHSA jointly manage. This program aims to support drug abuse prevention efforts that engage schools, law enforcement, and other sectors of a community to target reductions in the use of alcohol, tobacco, and marijuana and the illicit use of prescription drugs. We examined the extent to which the two agencies (1) use leading processes to coordinate program administration and the types of activities funded, and (2) have operating procedures that ensure DFC grantee compliance and provide a basis for performance monitoring. In 2008, we reported that ONDCP and SAMHSA needed to establish stronger internal controls and had not fully defined each agency’s roles and responsibilities for the management of the DFC program. In our February 2017 report, we found that ONDCP and SAMHSA had improved their joint management of the program. Specifically, we found that ONDCP and SAMHSA employed leading collaboration practices to administer the DFC program and fund a range of drug prevention activities. For example, ONDCP and SAMHSA had defined and agreed upon common outcomes, such as prioritizing efforts to increase participation from under-represented communities. The two agencies also had funded a range of DFC grantees’ activities and reported on these activities in their annual evaluation reports. For example, ONDCP reported that from February through July 2014, grantees educated more than 156,000 youth on topics related to the consequences of substance abuse. Other examples of grantees’ efforts included those that enhanced the skill sets of community members, including parents, to identify drug abuse or limit access to prescription drugs and those that reduced language barriers precluding non-English speakers from understanding drug prevention campaigns. We also found that ONDCP and SAMHSA had operating procedures in place, but SAMHSA did not consistently follow documentation and reporting procedures to ensure grantees’ compliance and had not accurately reported to ONDCP on grantee compliance. Based on a file review we conducted, we found that SAMHSA followed all processes for ensuring that the grant applicants whose files we reviewed had submitted required documentation before SAMHSA awarded them initial grant funding.
However, SAMHSA was less consistent in adhering to procedures for confirming documentation in later years of the program. We found that the majority of grantees whose files we reviewed were missing required paperwork documenting how they planned to sustain their programs after grant funds expired. Prior to our review, ONDCP and SAMHSA officials were not aware of the missing data in the grant files. We concluded that without close adherence to existing procedures, and without a mechanism to ensure that the documentation it reports to ONDCP is accurate and complete, SAMHSA’s performance monitoring capacity was limited. Moreover, SAMHSA could not be certain that grantees were engaging in intended activities and meeting their long-term program goals. We recommended that SAMHSA develop an action plan to strengthen the agency’s grant monitoring process and ensure that ONDCP receives complete and accurate information, among other things. SAMHSA concurred with our recommendations and reported to us in April 2017 that it was implementing actions to address them, which it expected to complete by fall 2017. Chairman Gowdy, Ranking Member Cummings, and Committee members, this concludes my prepared statement. I would be happy to respond to any questions you may have. For questions about this statement, please contact Diana Maurer at (202) 512-8777 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Aditi Archer (Assistant Director), Joy Booth (Assistant Director), Julia Vieweg, Sylvia Bascope, Jane Eyre, Stephen Komadina, Mara McMillen, David Alexander, Billy Commons, and Eric Hauswirth. Staff who made key contributions to the reports cited in this statement are identified in the source product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
According to the National Institute on Drug Abuse, in 2015, the most recent year for which national data are available, over 52,000 Americans died from drug overdoses, or approximately 144 people every day. Policymakers, criminal justice officials, health care providers, and the public at large are turning with renewed attention to the drug epidemic and its impact on our nation. To help reduce illicit drug use and its consequences, ONDCP oversees and coordinates the implementation of national drug control policy across the federal government. This statement addresses (1) the federal government's progress in achieving Strategy goals, (2) results from a Comptroller General's Forum on preventing illicit drug use, and (3) the findings of GAO's recent review of ONDCP's DFC Support Program. This statement is based on GAO's prior work issued from May 2016 through February 2017, with selected status updates as of July 2017, and updates from ONDCP's National Drug Control Budget Funding Highlights reports issued from fiscal year 2016 to fiscal year 2018. For the updates, GAO used publicly available data sources that ONDCP uses to assess its progress on Strategy goals and interviewed ONDCP officials. The federal government has made mixed progress toward achieving the goals articulated in the 2010 National Drug Control Strategy (Strategy). In the Strategy, the Office of National Drug Control Policy (ONDCP) established seven goals related to reducing illicit drug use and its consequences by 2015. In many instances, the data used to assess progress in 2015 have only recently become available. GAO's review of these updated data indicates that, as of July 2017, the federal government had made moderate progress toward achieving two goals, limited progress on two goals, and no progress on the other three goals. However, none of the overall goals in the Strategy were fully achieved. For example, progress had not been made on the goal to reduce drug-induced deaths by 15 percent. Drug-induced deaths instead increased from 2009 to 2015 by 41.5 percent. Although progress was made reducing the 30-day prevalence of drug use among 12- to 17-year-olds from the 10.1 percent reported in 2009, the goal of reducing prevalence to 8.6 percent by 2015 was not achieved. According to ONDCP, as of July 2017, work is currently underway to develop a new strategy. In June 2016, GAO convened a diverse panel of experts, including from ONDCP, to advance the national dialogue on preventing illicit drug use. The panel focused on (1) common factors related to illicit drug use; (2) strategies in the education, health care, and law enforcement sectors to prevent illicit drug use; and (3) high priority areas for future action to prevent illicit drug use. According to forum participants, illicit drug use typically occurs for the first time in adolescence and involves marijuana and, increasingly, legal prescriptions for opioid-based pain relievers. Forum participants also discussed strategies available in the education, health care, and law enforcement sectors for preventing illicit drug use. For example, forum participants championed the use of school- or community-based prevention programs that research has shown to be successful in preventing illicit drug use and other behaviors. They also identified several high priority areas for future actions to prevent illicit drug use, including supporting community coalitions, consolidating federal funding streams for prevention programs, and reducing the number of opioid prescriptions.
In February 2017, GAO issued a report on the Drug-Free Communities Support Program (DFC)—a program that ONDCP and the Substance Abuse and Mental Health Services Administration (SAMHSA) jointly manage. This program aims to support drug abuse prevention efforts that engage schools, law enforcement, and other sectors of a community to target reductions in the use of alcohol, tobacco, marijuana, and the illicit use of prescription drugs. GAO reported that ONDCP and SAMHSA had strengthened their joint management of the program by employing leading collaboration practices; however, the agencies could enhance DFC grantee compliance and performance monitoring. For example, SAMHSA did not consistently confirm grantees had completed plans to achieve long-term goals after exiting the program. GAO recommended that SAMHSA develop an action plan to strengthen DFC grant monitoring and ensure it sends complete and accurate information to ONDCP. SAMHSA concurred with GAO's recommendations and reported in April 2017 that its actions to address them should be completed by this fall.
Transit fringe benefits—employer-provided benefits designed to encourage public and private employees to use mass transit for their home-to-work commute—date back to the early 1990s. The Energy Policy Act of 1992 created a new category of qualified fringe benefits—the “qualified transportation fringe”—that is excludable from gross income. Executive Order 13150, dated April 26, 2000, required the implementation of a transportation-fringe-benefit program for qualified federal employees, in which federal agencies offer employees transit benefits excludable from gross income. This benefit includes transit vouchers and passes for public transportation, to be used exclusively to cover actual out-of-pocket commuting expenses, not to exceed a maximum monthly allowable dollar limit set by law, which has been adjusted for inflation over the years and currently is $130. Federal agencies can distribute transit benefits directly to employees, enter into an interagency agreement with another agency, such as DOT, or contract with a private company for distribution. DOT’s transit benefit program is administered by TRANServe, located within the Office of the Assistant Secretary for Administration. In 1998, TRANServe began offering transit benefit distribution services to other federal entities participating in the federal government’s transit benefit program. Over time, TRANServe has distributed transit benefits in a variety of forms. Prior to 2011, TRANServe distributed the benefits to participating federal employees via paper fare media—i.e., paper vouchers and paper transit passes (e.g., Metro transit vouchers in the District of Columbia)—and smart cards (e.g., electronic transit cards). In March 2011 and April 2012, DOT published notices for public comment on its intention to adopt a new distribution methodology for transit benefits. Specifically, DOT proposed implementing electronic fare media—a debit card—in place of paper vouchers where electronic fare media is accepted by transit authorities. In its notice, DOT indicated that the move toward debit cards was the result of a growing number of state and local transit authorities transitioning to electronic fare media and rising paper voucher program costs. TRANServe indicated that electronic fare media provides a way to tighten internal controls and support the green government movement, which entails implementing more environmentally friendly practices. Since 2011, the portion of transit benefits distributed via debit cards has increased while the portion distributed as paper vouchers has declined (see fig. 1). In fiscal year 2014, TRANServe distributed over $210 million in cash-equivalent fare media to over 202,000 transit-benefit participants employed by 106 federal entities (referred to as client agencies) nationwide, mostly through the TRANServe debit card. TRANServe administers transit benefits for more federal entities than any other program administrator in the federal sector, and most of the 202,000 TRANServe participants were using the debit card. As discussed in the Standards for Internal Control in the Federal Government, internal control comprises the plans, methods, and procedures used by entities to meet their missions, goals, and objectives.
The phrase “internal control” does not refer to a single event, but rather a series of actions and activities that occur throughout an entity’s operations on an ongoing basis and that serve as the first line of defense in safeguarding assets and preventing and detecting errors and fraud. Moreover, internal controls should be designed to provide reasonable assurance that unauthorized acquisition, use, or disposition of an entity’s assets will be promptly detected. Lastly, internal control systems help government program managers achieve desired results through effective stewardship of public resources. In 2007, the Office of Management and Budget (OMB) provided guidance for federal agencies to use in establishing and implementing internal controls over their respective transit benefit programs to help managers reduce the opportunity for fraud, waste, and abuse. Its guidance was in response to our 2007 testimony that confirmed allegations that federal employees in the National Capital Region committed fraud by deliberately requesting benefits they were not entitled to and then selling or using these benefits for personal gain. As a result of our investigation and testimony, OMB required all federal agencies to implement several internal controls in order to maintain the integrity of the transit benefit program. Specifically, OMB issued a memorandum, Federal Transit Benefit Program, M-07-15 (May 14, 2007), to the executive departments and agencies requiring all agencies with transit benefit programs to implement several internal controls designed to deter fraud, waste, and abuse. Separately, IRS guidance addresses when electronic media can qualify as a transit pass, including a debit card used at merchants that have been assigned a code indicating they sell fare media. IRS also described two other scenarios, one involving a smart card and another involving a debit card where employees do not substantiate their transit fare expenses, scenarios that we do not describe here. Subsequently, IRS sought public comment on this revenue ruling and modified it in 2014. DOT’s TRANServe debit-card program includes activities that correspond to the five internal control standards—(1) control environment, (2) risk assessment, (3) control activities, (4) monitoring, and (5) information and communication. In combination, these activities would be expected to provide reasonable assurance that non-transit-related purchases can be identified and denied. Based on our review of the design of TRANServe’s internal controls for the TRANServe debit-card program, we found that those internal controls align with GAO’s Standards for Internal Control in the Federal Government. However, certain weaknesses could exist because we did not independently test DOT’s internal controls to determine whether they mitigate all possible risks and operate as intended. TRANServe’s activities for establishing a control environment—a disciplined work environment and ethical culture—among management and staff were generally consistent with internal control standards. The Standards for Internal Control in the Federal Government states that management and employees should establish and maintain an environment that sets a positive and supportive attitude toward internal control and conscientious management. A positive control environment is the foundation for all other standards, providing discipline and structure as well as the climate that influences the quality of internal control.
Several key factors affect the control environment, e.g., the integrity and ethical values maintained and demonstrated by management and staff. TRANServe has established a control environment framework for the debit-card program through the following: A primary goal for the debit-card program: TRANServe has set a primary goal for the debit-card program of offering enhanced internal controls to preserve transit benefits by deterring waste, fraud, and abuse. Internal controls officer: TRANServe created an internal controls officer position in April 2007, which, according to officials, has been staffed since 2007 without vacancies. According to the program’s policy and guidance, this position heightens review of the program’s internal controls. The internal controls officer is responsible for maintenance and testing of internal controls through a combination of inquiry, inspection, and observation. Also, the internal controls officer is responsible for designing training classes for TRANServe employees and DOT’s transit benefit participants. Training: DOT requires all staff who participate in the transit benefit program to complete mandatory Transit Benefit Integrity Awareness Training on an annual basis. This is a mandatory electronic course that clarifies transit benefit requirements, emphasizes the internal controls in place to minimize fraud, and addresses the ramifications of noncompliance. Additionally, according to TRANServe officials, the training is available on its website for client agencies. Moreover, if requested, TRANServe staff are available to assist with or conduct the training. Online resources: Finally, TRANServe has a number of resources available for internal and external participants and client agency points of contact. The TRANServe website houses information such as best practices for internal controls, policies and procedures, and training materials. This helps to improve agency-level internal controls, thereby strengthening the combined level of internal controls. TRANServe’s activities for assessing and identifying relevant risks, and determining how those risks should be managed, were generally consistent with internal control standards. The Standards for Internal Control in the Federal Government states that internal control should provide for an assessment of the risks the agency faces from both internal and external sources. Risk assessment is the identification and analysis of relevant risks associated with achieving agency objectives. According to TRANServe officials, a formal risk assessment for the TRANServe program has not been conducted; however, this standard states that risk identification methods may include, among others, consideration of findings from other assessments, such as the Federal Managers Financial Integrity Act of 1982 (FMFIA) annual assessment. The following TRANServe activities are related to assessing risk. Internal controls officer: According to the program’s policy and guidance, the internal controls officer is responsible for examining current internal control activities and identifying potential program vulnerabilities through testing of controls related to the debit card. Monitoring: According to TRANServe’s standard operating procedure (SOP), monitoring debit card transactions identifies those transit benefit participants who may be misusing the debit card.
As stated in the SOP, monitoring of debit card transactions is performed on a weekly basis; this frequency is necessary to maintain program integrity and prevent the misuse of debit cards for non-acceptable transaction activity. Monitoring activities will be discussed in greater detail later under the monitoring standard. FMFIA annual assessment: DOT’s annual assessment of its internal control and financial-management systems, as required by the FMFIA, is intended to provide reasonable assurance that objectives are being met. Those objectives include whether (1) financial and other resources are safeguarded from unauthorized use or disposition; (2) transactions are executed in accordance with authorizations; (3) records and reports are reliable; (4) applicable laws, regulations, and policies are observed; and (5) financial systems conform to government-wide standards. For fiscal year 2014, DOT’s Agency Financial Report stated that DOT utilized its standardized FMFIA internal control program approach for managing internal control and compliance activities. This approach included using the five standards of internal control to identify, assess, document, and communicate key programmatic internal controls and related risks or weaknesses. For its part, the Office of Financial Management and Transit Benefits, which includes the TRANServe program, completes an annual assessment of the program’s management controls and financial-management systems. In fiscal year 2014, TRANServe reported on a number of activities, including testing of controls related to the debit card; reviewing all SOPs to incorporate best practices and tighten internal controls for external and internal customers; and providing monthly invoices with detailed reports to client agencies on employee participation in the transit benefit program. DOT reported no internal-control material weaknesses in the TRANServe program in its FMFIA assessments and audited consolidated financial statements for fiscal years 2011 through 2014. TRANServe’s activities for managing its internal control system are generally designed to be consistent with internal control standards. The Standards for Internal Control in the Federal Government notes that control activities should be efficient and effective in accomplishing the agency’s control objectives, and should occur at all levels of the agency. The standards also note that the responsibility for good internal control rests with managers. Management sets the objectives, puts the control mechanisms in place, and monitors and evaluates the controls. Control activities are the policies, procedures, techniques, and mechanisms that help ensure an agency’s objectives are met. The following describes the TRANServe program’s control activities. Standard operating procedures: TRANServe has established SOPs for the following program activities: Conducting debit card transaction data mining: This SOP provides the guidelines for weekly data mining, which includes reviews of debit card transactions to identify potential misuse or irregular activity, such as the purchase of non-transit items. Sending “anomaly letters” (letters detailing misuse of the debit card) to client agencies: This SOP outlines procedures to use in transmitting anomaly letters to an agency once notification has been received that a potential misuse has occurred. Potential misuse, among other things, may involve retail merchants or irregular transaction amounts. Providing debit card transaction anomaly reporting to the financial agent bank (J.P.
Morgan): Specifically, the financial agent provides debit card services to electronically deliver transit benefits to federal employees of the client agencies serviced by TRANServe. The SOP outlines procedures that are to be taken once an anomaly notice is received indicating non-acceptable transaction activity. Internal controls officer: The internal controls officer is responsible for (1) examining current internal-control activities, including identifying potential program vulnerabilities; (2) developing solutions for identified vulnerabilities; (3) having knowledge of existing rules and regulations concerning internal controls; and (4) keeping abreast of new developments and best practices in internal controls. Inherent features of the debit card: According to TRANServe officials, ensuring that transit beneficiaries do not make non-transit-related purchases is an inherent feature in the design of the debit card TRANServe has implemented through Treasury and J.P. Morgan. The debit card is designed so that it can be used only to purchase transit fare media from transit providers, which are identified through a limited list of merchant category codes (MCC)—codes that classify a business by the type of goods or services it provides—approved by DOT. However, in situations when a merchant with an approved MCC is found to be allowing purchases of non-transit items on the debit cards, or where a merchant repeatedly forces a card transaction when a purchase is declined, TRANServe has the option of working with J.P. Morgan to further restrict the debit card. This additional restriction—called a merchant identification (MID) block—involves blocking attempted transactions made by TRANServe debit cardholders at a specific merchant location. As a result, the MID block prevents all future transaction activity at that particular merchant even though it has an approved MCC. The MID block mechanism allows TRANServe to maintain the integrity of the MCC list while selectively blocking noncompliant points of sale. TRANServe’s activities for continuously monitoring and evaluating the effectiveness of the internal control design were generally consistent with internal control standards. The Standards for Internal Control in the Federal Government states that internal control monitoring should assess the quality of performance over time and ensure that ongoing monitoring occurs in the course of normal operations. The internal controls officer manages monitoring activities, which include maintenance and testing of internal controls. Consistent with federal internal control standards that call for ongoing and continual monitoring, TRANServe’s monitoring activities include debit card transaction data mining. According to SOPs, debit card transaction data mining includes monitoring debit card transactions on a weekly basis. Moreover, staff identify potential misuse by reviewing debit card transaction details for retail merchants, non-compliant MCCs, irregular transaction amounts, rejected transactions, and purchases of consumer items (i.e., non-transit-related items) under a compliant MCC. When misuse of a debit card is discovered, according to TRANServe, it sends a report and a letter notifying the violator’s agency of the potential violation. TRANServe officials said that once the agency receives notification of the violator, the client agency is responsible for taking appropriate action against those found to be violating program requirements—TRANServe has no contact with the violator. TRANServe will also, if necessary, contact J.P. Morgan to implement an MID block or to recoup payment.
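To make the layered card restriction concrete, the following minimal Python sketch shows how an authorization check could combine an approved-MCC list with MID blocking, as described above. The specific codes and merchant IDs are hypothetical, not DOT's actual lists:

```python
# Illustrative values only; DOT's actual approved MCC list and blocked MIDs are not published here.
APPROVED_MCCS = {"4111", "4112", "4131"}   # e.g., local transit, passenger rail, bus lines
MID_BLOCKLIST = {"MID-20417"}              # hypothetical merchant ID blocked after confirmed misuse

def authorize(mcc: str, merchant_id: str) -> bool:
    """Approve a transaction only at an approved transit MCC and an unblocked merchant."""
    if mcc not in APPROVED_MCCS:
        return False   # card restriction: merchant category is not transit-related
    if merchant_id in MID_BLOCKLIST:
        return False   # MID block: noncompliant point of sale despite an approved MCC
    return True

assert authorize("4111", "MID-00001") is True    # transit merchant, not blocked
assert authorize("4111", "MID-20417") is False   # the MID block overrides the approved MCC
assert authorize("5912", "MID-00002") is False   # drugstore MCC falls outside the approved list
```

The two checks mirror the two layers of the program's design: the MCC restriction screens out whole merchant categories, while the MID block selectively removes individual noncompliant merchants without loosening the MCC list.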
The data-mining process has three levels of review, as set out in the SOP. Data-Mining First-Level Review: This level of review of debit card transactions involves querying the MCCs. MCCs are routinely reviewed for compliance, and any violations identified are followed up on with agency anomaly letters and chargebacks. The process for transmitting anomaly letters is a five-step process (see table 2). From fiscal years 2011 through 2014, TRANServe sent a total of 237 anomaly letters to agencies notifying them of potential misuse of the debit card (see table 3). The amounts of the questionable charges made by cardholders for this time period ranged from $1.10 to $1,557.00. According to TRANServe officials, the majority of the questionable charges were at or below the transit benefit’s statutory limit of $130 per month. In fiscal year 2014, J.P. Morgan processed over 1.5 million total purchase transactions for all TRANServe debit cards. In the same year, three charges exceeded the statutory limit. Table 3 shows the number of anomaly letters sent to agencies and the number of purchase transactions for fiscal years 2011 through 2014. Data-Mining Second-Level Review: This level of review involves querying the merchant name by performing key word searches. According to the SOP, the key words in merchant names that will trigger an alert include parking, news, deli, cash, liquor, and coffee, among others. Data-Mining Third-Level Review: This level of review involves querying transaction amounts to identify irregular transactions, including those exceeding the statutory limit, and contacting the merchant to determine the type of good or service purchased. When applicable, violations identified result in agency anomaly letters, MID blocks, and chargebacks. TRANServe provided several examples of its data mining of purchase transaction documentation that identified potential misuse of debit cards. For example, one debit card transaction was processed using a non-approved MCC. The participant used the debit card to make a purchase of $53 at a drugstore in April 2014. The internal controls officer notified the client agency of this potential misuse of the debit card and subsequently received confirmation of misuse from the client agency. TRANServe officials said that the internal controls officer typically notifies the client agencies of possible misuse within 5 to 10 days of receiving and completing the review of the data-mining information. Additionally, for this transaction, TRANServe requested that J.P. Morgan reimburse the program for this amount, given that the merchant had forced the transaction. TRANServe’s activities for collecting reliable information and providing timely communications to client agencies about relevant events were generally consistent with internal control standards. The Standards for Internal Control in the Federal Government states that for an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal as well as external events. Information is needed throughout the agency to achieve all of its objectives. Information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enables them to carry out their internal control and other responsibilities.
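The three-level review described above lends itself to a simple screening routine. The sketch below is illustrative only: the keyword list comes from the SOP as quoted, while the approved MCC set and the sample transaction fields are assumptions:

```python
from decimal import Decimal

APPROVED_MCCS = {"4111", "4112", "4131"}                             # hypothetical approved transit codes
KEYWORDS = ("PARKING", "NEWS", "DELI", "CASH", "LIQUOR", "COFFEE")   # merchant-name triggers from the SOP
STATUTORY_LIMIT = Decimal("130.00")                                  # monthly benefit limit cited above

def screen(txn: dict) -> list[str]:
    """Return the anomaly flags a single transaction raises at each review level."""
    flags = []
    if txn["mcc"] not in APPROVED_MCCS:                              # first level: MCC query
        flags.append("non-compliant MCC")
    if any(word in txn["merchant"].upper() for word in KEYWORDS):    # second level: merchant-name keywords
        flags.append("merchant-name keyword")
    if txn["amount"] > STATUTORY_LIMIT:                              # third level: irregular amount
        flags.append("exceeds statutory limit")
    return flags

# Mirrors the April 2014 example above: a $53 drugstore purchase under a non-approved MCC.
print(screen({"mcc": "5912", "merchant": "MAIN ST DRUGSTORE", "amount": Decimal("53.00")}))
# -> ['non-compliant MCC']
```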
Additionally, according to these federal internal-control standards, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders, since such information may have a significant impact on the agency’s achieving its goals. The following TRANServe activities align with the standard for information and communication: Transit benefit program partnership agreement: This agreement, which TRANServe has with its client agencies, states that TRANServe will maintain a database that will identify, among other things, the following: (1) all participants in the program who are currently deemed eligible by the client agency; (2) the original effective date of program participation; and (3) the value of fare media provided and the effective date of termination, as appropriate. The agreement also states that TRANServe will make other reports from its program database available upon agency request. However, as part of the partnership agreement, TRANServe states that it does not assume responsibility for ensuring client agencies’ internal controls over the program or recipient integrity with regard to the program. It is the responsibility of the client agency to ensure that its employees are fully aware of their responsibilities for participation in the program. Sending anomaly letters: As previously described, TRANServe has established a process for sending debit-card anomaly letters to client agencies when consumer purchases are detected through the data-mining process. Depending on the type of anomaly identified, an email with the anomaly letter attached is sent to the agency for further action. The TRANServe program website: The website includes information about what client agencies need to do to prevent non-transit-related purchases, such as internal control best practices, as well as warnings to users about using debit cards for non-transit-related purchases. Additionally, TRANServe debit cards carry a warning indicating that participants are legally bound to abide by the terms of the Transit Benefit Program and that use of the debit cards is personal certification that they will be used by cardholders as the transit benefit for their regular home-to-work transportation (see fig. 3). FMFIA annual assessments: DOT communicates its compliance with the FMFIA through the annual letters it sends to client agencies reporting that DOT’s system fully complies with federal and agency guidance. FMFIA requires agency managers to establish internal control systems that provide reasonable assurance regarding the agency’s proper use of funds and resources, compliance with statutes and regulations, and preparation of reliable financial reports. TRANServe worked with IRS to demonstrate that its debit-card program was in compliance with relevant statutes, Treasury regulations, and IRS administrative rules—specifically, that the debit card qualified as a “transit pass” as defined in section 132(f)—for the purposes of qualifying as a transportation fringe benefit and being excludable from gross income. According to IRS, TRANServe demonstrated that the debit card was a “transit pass” because the card restrictions effectively permit recipients of the cards to use them only to purchase fare media on mass transit systems. In May 2011, TRANServe first tested the use of a debit card
in the New York metropolitan area and, based on the information from its preliminary testing, obtained a letter from IRS concluding that, for the New York metropolitan area, the TRANServe debit card, subject to any changes, was a “transit pass” for purposes of section 132(f) of the Code and as such was a qualified transportation fringe benefit. IRS’s conclusion was based on the fact that TRANServe had demonstrated that the debit card restrictions as tested (specifically, the MCC restriction with MID blocking capability) effectively permit cardholders to use the debit cards only to purchase fare media on mass transit systems. In addition, IRS took into consideration TRANServe’s assurance that it would complete monthly reviews of employees’ TRANServe debit card accounts (i.e., anomaly monitoring) in order to identify transactions that might involve non-transit-related purchases and other anomalies. IRS further concluded that the debit card also would constitute a bona fide cash reimbursement program (with respect to systems or areas where no transit pass is readily available) for purposes of section 132(f) because the program contained the features described in Revenue Ruling 2006-57 (e.g., initial payment of transit fare with after-tax amounts for at least the first month and annual employee recertification that the debit card was used only to purchase transit fare media, among other things). Based on its experience in the New York metropolitan area, TRANServe then developed a plan to field test the debit cards in the eight service areas—geographic divisions that contain proximate states—where TRANServe had previously distributed paper transit vouchers. From 2011 to 2013, TRANServe implemented its field test, which included: researching transit usage in each region, identifying target areas where the transit authorities are located, selecting point-of-sale locations where transit media are sold as well as non-transit-related sales locations, distributing debit cards that already contained the MCC restrictions to testers, sending testers to the predetermined sales locations to purchase either transit fare media or non-transit-related items, assigning some testers to make debit-card purchases online or via telephone depending on the number of ways transit media were sold, and contacting J.P. Morgan to obtain transaction records during the field testing phase. Figure 4 shows how TRANServe implemented its field tests of the debit cards in each of the eight service areas. According to TRANServe officials, TRANServe staff reviewed the test results for each service area to determine whether the debit card restrictions were effective. The testers compiled information about their purchases and obtained transaction reports from J.P. Morgan. TRANServe reviewed this information in order to verify that the debit card restrictions held and that the card was used only for authorized purchases. In some instances, TRANServe subsequently worked with J.P. Morgan to implement MID blocks. In other situations, TRANServe worked with the respective transit authorities to ensure proper usage of the debit card. Following each of the field tests, TRANServe shared the results with IRS and obtained IRS’s comments or questions about the tests and results.
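As a rough illustration of the verification step just described, the sketch below pairs a tester's purchase log with the financial agent's transaction report to surface restriction failures. The record layout and field names are assumptions for illustration, not TRANServe's actual data format:

```python
def find_restriction_failures(tester_log: list[dict], bank_report: list[dict]) -> list[dict]:
    """Flag non-transit test purchases that nonetheless settled.

    A settled non-transit attempt means the card restriction failed (for example,
    a merchant forced the transaction through after a decline), making that
    merchant a candidate for an MID block or a chargeback.
    """
    settled_refs = {rec["reference"] for rec in bank_report if rec["status"] == "settled"}
    return [t for t in tester_log if not t["is_transit"] and t["reference"] in settled_refs]

log = [
    {"reference": "T-001", "is_transit": True,  "merchant": "CITY METRO"},
    {"reference": "T-002", "is_transit": False, "merchant": "PARKING GARAGE"},
]
report = [
    {"reference": "T-001", "status": "settled"},
    {"reference": "T-002", "status": "settled"},   # forced through despite the card restriction
]
print(find_restriction_failures(log, report))      # -> the parking garage attempt
```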
Once IRS was satisfied with the final results in a service area, IRS sent TRANServe an email confirming its understanding of the test results and that, based on those results, the debit card constitutes a transit pass and qualifies as a transportation fringe benefit. TRANServe substantially completed the rollout of the debit-card program by the end of fiscal year 2014. In each service area, TRANServe completed a number of debit card transactions to test whether the debit card was sufficiently restricted. Service area 1: TRANServe began field tests in the area between July and September 2011. Testers completed 103 point-of-sale tests, of which 87 of the transactions passed (i.e., the card restrictions held so that the card could be used only to purchase transit media), 10 failed, and 6 were not completed for reasons such as the merchant not having the item in stock. Seven of the 10 failed transactions resulted from one merchant’s overriding declined payments, and the remaining 3 purchases were at parking garages that used an accepted MCC. According to TRANServe officials, TRANServe worked with its financial agent to stop this merchant from overriding transactions and planned to use anomaly testing to further detect parking garage transactions. In November 2011, IRS officials confirmed to TRANServe that, based on the test results, the debit card constitutes a transit pass in the Norfolk and Baltimore metropolitan regions. In the National Capital Region, the debit card satisfied the requirements for a bona fide cash reimbursement program for purposes of transit systems that do not accept the local smart card (i.e., the Washington Metropolitan Area Transit Authority SmarTrip® card), which is a transit pass. Service area 2: TRANServe began field tests in the area between July and September 2011. Testers completed 130 point-of-sale tests, of which 122 of the transactions passed and 8 failed. The failed transactions resulted from merchants’ overriding declined payments and approving transactions at certain parking garages. TRANServe indicated it would work with its financial agent to stop the merchants from overriding transactions and would continue to monitor the parking garage activities through anomaly testing. In January 2012, IRS officials confirmed that, based on the test results, the debit card constitutes a transit pass in the service area. Service area 3: TRANServe began field tests in the area between January and March 2012 but excluded one potential target area, Milwaukee, because research on transit in the city indicated too few locations to purchase fare media with credit or debit cards. Testers completed 175 point-of-sale tests, of which 174 of the transactions passed and one failed. This transaction involved the tester’s purchase of a non-transit-related item at a transit store location that offered consumer merchandise. According to TRANServe, it worked with the transit authority to change its procedures so that only transit fare media can be purchased with the debit card. In March 2012, IRS officials confirmed that, based on the test results, the debit card constitutes a transit pass in the service area. Service area 4: TRANServe began field tests in the area between July and September 2012. Testers completed 80 point-of-sale tests, all of which passed.
In May 2013, IRS officials confirmed that, based on the test results, the debit card constitutes a transit pass in certain parts of the service area (specifically, in Portland and in Seattle—although the pass is limited to use at national van pool companies in Seattle). Service area 5: TRANServe began field tests in the area between April and June 2012. Testers completed 52 point-of-sale tests, all of which passed. In December 2012, IRS officials confirmed that, based on the test results, the debit card constitutes a transit pass in certain parts of the service area (specifically, Boston and Newark). At that time, IRS was still evaluating TRANServe information provided for Buffalo, Philadelphia, and Pittsburgh. TRANServe subsequently completed additional tests in these locations. The tests demonstrated the effectiveness of the card restrictions, and IRS officials agreed later in December 2012 and in March 2013 that, based on the test results, the debit card qualified as a transit pass in these locations. Service area 6: TRANServe began field tests in the area between January and March 2012. Testers completed 151 point-of-sale tests, of which 149 transactions passed and 2 failed. These transactions involved the purchase of parking passes through a transit authority. TRANServe did not roll out the debit card in this segment of the service area because it could not resolve the commingling of transit and parking purchases. In August 2012, IRS officials confirmed that, based on the test results, the debit card constitutes a transit pass in certain parts of the service area (specifically, Los Angeles, El Segundo, San Jose, San Diego, San Francisco, and Oakland). Service area 7: TRANServe began field tests in the area between April and June 2012. Testers completed 84 point-of-sale tests, of which 82 transactions passed and 2 failed because the tester was able to purchase non-transit-related items at a transit authority store and a bike rental shop. TRANServe worked with J.P. Morgan to block purchases at those locations. In December 2012, IRS officials confirmed that, based on the test results, the debit card constituted a transit pass in certain parts of the service area (specifically, Salt Lake City, Ogden, Albuquerque, Denver, and Phoenix). At that time, IRS was still evaluating DOT’s information provided for Honolulu. Following additional point-of-sale tests by TRANServe, IRS confirmed that, based on the test results, the debit card constituted a transit pass in Honolulu for van pool and bus service. Service area 8: TRANServe began field tests in the area between April and June 2012. Testers completed 79 point-of-sale tests, all of which passed. In May 2013, IRS officials confirmed that, based on the test results, the debit card constitutes a transit pass in certain parts of the service area, specifically Dallas, Houston, San Antonio, St. Louis, and Kansas City. We provided a draft of this report to the Department of Transportation and the Internal Revenue Service for review and comment. DOT and IRS provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Transportation, and the Commissioner of Internal Revenue.
In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix I. In addition to the contact named above, Vashun Cole (Assistant Director); Darryl Chang; Dwayne Curry; Paul Kinney; Gail Marnik; SaraAnn Moessbauer; Susan E. Murphy; Cheryl Peterson; Neil A. Pinney; and Amy Rosewarne made key contributions to this report.
In 1992, Congress created a transportation fringe benefit that allowed public and private employers to offer employees transit benefits, excludable from gross income, to cover out-of-pocket public transportation commuting costs. Federal agencies may distribute these transit benefits directly or enter into an agreement with another agency, such as DOT, to distribute the benefits on a fee-for-service basis. In 2011, DOT's TRANServe began using debit cards to distribute transit benefits. IRS has established rules to help employers ensure their debit card programs qualify as allowable fringe benefits. Members of Congress have questioned whether the debit card restrictions prevent non-transit-related purchases and whether DOT's program complied with IRS rules. This report describes the extent to which DOT has (1) designed internal controls to provide reasonable assurance that employees do not use the debit card to make non-transit-related purchases and (2) worked with IRS to ensure its debit card program complies with IRS's rules. GAO reviewed the design of TRANServe's internal control system for preventing non-transit purchases (testing the system was not within the scope of the work); compared TRANServe's practices with federal standards; reviewed IRS rules on fringe benefits; and obtained TRANServe documentation of the steps taken to demonstrate that its debit card complied with the rules. GAO is not making recommendations in this report. DOT and IRS provided technical comments that were incorporated as appropriate. The Department of Transportation's (DOT) Office of Transportation Services (TRANServe) has included multiple internal control activities in the design of the TRANServe debit card program. These controls are intended to prevent federal employees from using their debit card for non-transit-related purchases and, as designed, would be expected to provide reasonable assurance that non-transit-related purchases can be identified and denied. The phrase "internal control" does not refer to a single event, but rather to a series of actions that occur throughout an entity's operations on an ongoing basis to safeguard assets and to prevent and detect errors and fraud. DOT provided evidence that the design of its TRANServe debit card program aligns with each of the five internal control standards identified in GAO's Standards for Internal Control in the Federal Government, as described below. Control environment: DOT has established a control environment framework for the TRANServe debit card program by, among other things, setting the program's primary goal as enhancing internal controls to deter waste, fraud, and abuse of transit benefits. Risk assessment: DOT established the position of internal controls officer in 2007 to examine control activities and identify potential program vulnerabilities through the testing of debit card controls. Control activities: TRANServe has established mechanisms for controlling the use of the debit card. For example, the debit card is restricted so it can only be used to purchase transit fare from transit providers that are identified by merchant category codes that DOT has approved. The codes are used to classify a business by the type of goods or services it provides. Monitoring: TRANServe conducts weekly data mining, which includes reviewing debit card transactions to identify potential misuse and irregularities. Information and communication:
TRANServe sends “anomaly letters” (letters detailing potential misuse of the debit card) to agencies when non-transit purchases are detected. TRANServe worked with the Internal Revenue Service (IRS) to demonstrate that the debit card program complies with IRS's rules for qualified transportation fringe benefits and, in particular, that the card is a transit pass and effectively prevents non-transit-related purchases. From 2011 to 2013, TRANServe staff tested the debit card with transit agencies in eight areas across the country, making hundreds of test purchases of both transit-related and consumer-related products. In most cases the purchase restriction succeeded in preventing the debit card from being used to purchase non-transit-related products. In the few cases where the restriction failed, TRANServe took steps to have additional restrictions placed on the debit cards. After completing the tests in each service area, TRANServe sent the test results to IRS; once IRS was satisfied with the final results, IRS officials sent DOT an e-mail confirming that the debit card qualified as a transportation fringe benefit in that area. TRANServe then completed the rollout of the debit card program by the end of fiscal year 2014.
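To illustrate the mechanics behind the controls summarized above, the following is a minimal Python sketch of an MCC allow-list authorization check and the kind of weekly transaction review that produces anomaly letters. The specific merchant category codes, transaction fields, and flagging rule are assumptions for illustration only, not TRANServe's or its financial agent's actual configuration.

```python
# Minimal sketch of two of the controls described above: an MCC allow-list
# authorization check and a weekly review that groups suspect transactions
# by agency for anomaly letters. The MCCs, fields, and rules here are
# illustrative assumptions, not TRANServe's actual configuration.
from dataclasses import dataclass

# Hypothetical set of merchant category codes approved for transit fare.
APPROVED_TRANSIT_MCCS = {4111, 4112, 4131}

@dataclass
class Transaction:
    agency: str
    merchant: str
    mcc: int
    amount: float
    approved: bool = False

def authorize(txn: Transaction, merchant_override: bool = False) -> bool:
    """Approve a purchase only at an approved transit MCC. A merchant
    override (a failure mode the field tests observed) forces a declined
    payment through anyway."""
    txn.approved = txn.mcc in APPROVED_TRANSIT_MCCS or merchant_override
    return txn.approved

def weekly_anomalies(txns: list[Transaction]) -> dict[str, list[Transaction]]:
    """Group approved purchases at non-transit MCCs by agency, as input
    for anomaly letters."""
    flagged: dict[str, list[Transaction]] = {}
    for t in txns:
        if t.approved and t.mcc not in APPROVED_TRANSIT_MCCS:
            flagged.setdefault(t.agency, []).append(t)
    return flagged

week = [
    Transaction("Agency A", "Metro Transit", 4111, 120.00),
    Transaction("Agency B", "Corner Grocery", 5411, 45.00),
]
authorize(week[0])                          # transit fare: approved
authorize(week[1], merchant_override=True)  # override slips a purchase through
for agency, hits in weekly_anomalies(week).items():
    print(f"anomaly letter to {agency}: {len(hits)} flagged transaction(s)")
```

Note that a purchase at a parking garage coded under an approved transit MCC would pass an allow-list check like this one, which is one reason the field tests supplemented the MCC restriction with anomaly testing.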
NASA and its international partners—Canada, Europe, Japan, and Russia—are building the space station to serve as an orbiting research facility. The space shuttle is the primary vehicle supporting the assembly and resupply of the station. Figure 1 shows the Space Shuttle Endeavour docked to the International Space Station. Following the Columbia accident in February 2003, the NASA Administrator grounded the space shuttle fleet pending an investigation into the cause of the accident. The Administrator appointed the Columbia Accident Investigation Board to determine the cause of the accident and to make recommendations for improving the safety of the space shuttle before it could return to flight. The board issued its report in August 2003 with 29 recommendations for improvement, 15 of which must be implemented before the space shuttle can return to flight. NASA plans to return the shuttle to flight in July 2005. While the shuttle has been grounded, space station crew transfers and logistics resupply have depended on Russian Soyuz and Progress vehicles. Europe and Japan are also developing logistics cargo vehicles to support space station operations later this decade. These Russian, European, and Japanese vehicles are launched on expendable rockets. The European Automated Transfer Vehicle (ATV), scheduled to be available for missions to the space station in 2006, is being designed to rendezvous and dock with the space station's Russian Service Module. The Japanese H-II Transfer Vehicle (HTV) is scheduled to be available in 2008 and will fly within the proximity of the space station to be caught by the station's robotic arm before being berthed to the space station. The ATV and HTV also carry less cargo than the shuttle. Because the Russian Soyuz and Progress are the only vehicles currently available and carry significantly less payload than the space shuttle, operations are generally limited to transporting crew, food, and potable water, as well as propellant for reboosting the space station to higher orbits. Launches of space station assembly elements and of large orbital replacement items for maintenance have effectively ceased. From 2000 to early 2004, NASA performed two studies that focused on the potential use of commercial launch vehicles to provide logistics services to the space station. In a 90-day study conducted in 2000, NASA determined that no commercial logistics service for the space station was possible at that time because no launch vehicles possessed the critical capabilities necessary to provide logistics services, including automated rendezvous capabilities. As a result of this study, NASA decided to solicit and fund a more detailed review of concepts designed to provide logistics services to the space station. The Alternate Access to Station (AAS) study contracts were awarded in July 2002, with 1-year contracts given to four contractors. In summer 2003, these contractors presented architectures that relied on existing domestic or international expendable launch vehicles. In the fall of 2003, the contracts were extended, the contractors were asked to address larger cargo delivery capabilities, and "downmass" requirements (e.g., returning research materials to Earth) were added for the return of cargo. This study ended in January 2004 with the contractors briefing NASA on their study results, at which time NASA concluded that developing a domestic capability to meet most of the space station cargo service needs was possible within 3 to 5 years.
In January 2004, the President announced a new Vision for Space Exploration that called for retiring the shuttle in 2010, requiring NASA to find an alternative to support space station operations through 2016. The President called for a shift in NASA's long-term focus, envisioning that NASA will retire the space shuttle after nearly 30 years of service as soon as assembly of the International Space Station is completed, planned for the end of the decade, and will develop a new crew exploration vehicle as well as launch human missions to the moon between 2015 and 2020. In essence, NASA's implementation plan holds aeronautics, science, and other activities at near constant levels and transitions funding currently dedicated to the space station and space shuttle programs to the new exploration strategy as those programs phase out. The vision also changed the space station's on-board research focus. Originally, the space station was to be used for conducting experiments in near-zero gravity, including life sciences research on how humans adapt to long durations in space, biomedical research, and materials-processing research. Under the new vision, the research will focus on determining the effects of long-duration space travel on humans and developing countermeasures for those effects, with the goal that the space station research necessary to support human explorers on other worlds would be complete by 2016. Figure 2 shows NASA's proposed plan for operational support of the space station until 2016. According to program officials, NASA's 2004 informal assessment concluded that alternative launch vehicles would present operational risks, technical challenges, and long program delays and would cost more than returning the space shuttle to flight, making the space shuttle the best option for both assembly and logistics missions through the end of the decade. According to previous studies and our discussions with commercial industry representatives, the time involved in developing an alternate capability would probably preclude assembly missions from consideration. However, NASA did not have sufficient knowledge to support its conclusion regarding logistics support missions. Specifically, NASA did not perform a comparative cost analysis that considered the schedule impacts or associated costs of planned space shuttle operations. Furthermore, NASA officials did not document these informal proceedings and the decisions reached; therefore, the thoroughness of any assessment of alternatives cannot be verified, nor can its conclusions be validated. NASA is currently evaluating responses from commercial industry on different ways to provide logistics services to and from the space station. NASA's re-examination of its requirements for the space station and space shuttle, coupled with the cost information obtained from commercial industry responses, provides NASA with a basis for performing a detailed analysis of alternatives to determine whether any planned space shuttle logistics missions could be performed by, or complemented with, commercial launch vehicles later this decade. As a result of the informal assessment, NASA outlined a number of technical challenges to using an alternate vehicle for space station support, especially for assembly missions, where the space shuttle's crew and remote manipulator arm perform key functions. Appendix III provides a discussion of these challenges.
NASA officials stated they used the AAS study, which concentrated solely on logistics support missions, as the foundation for the 2004 informal assessment. In a summary of that study, NASA reported that the AAS contractors projected that an alternate launch capability would cost approximately $1 billion and take 3 to 5 years to develop, and would require $2 to $3 billion per year for operations. We held discussions with commercial industry representatives who concurred with this time frame for developing an alternate capability to support space station operations. Since a majority of the space station assembly missions are scheduled within the next 3 years, this development time frame could preclude the use of an alternative vehicle for those missions. However, NASA did not have sufficient knowledge to conclude that the shuttle was the best option for logistics missions prior to its planned retirement of the shuttle in 2010. NASA officials stated that the technical challenges of developing an alternative vehicle could be overcome, but probably not in time for the 28 missions scheduled through 2010, of which 8 are for logistics, including 5 of the last 7 missions. However, we found no evidence of analyses performed by NASA to compare the cost and schedule impact of using alternate launch systems with the scheduled space shuttle program costs, including the cost of returning the space shuttle to flight. We recently reported that the majority of NASA's budget estimates for returning the space shuttle to flight had not been fully developed. In fact, NASA officials stated that they did not compare estimated costs for developing alternative launch vehicles against budget estimates for the 28 space shuttle flights currently planned to support the space station, which total more than $22 billion between fiscal year 2005 and fiscal year 2010. In addition, NASA has requested $1.8 billion for crew and cargo services over the same time frame to purchase commercial services using existing and emerging capabilities, both domestic and foreign. In its fiscal year 2006 budget request, NASA indicated that such commercial services are expected to be available not later than 2009 and that these services are a key element in the future of the space station program. In addition to lacking sufficient knowledge with regard to the use of alternatives for logistics missions, NASA did not document the proceedings and decisions reached in its 2004 assessment. Specifically, the agency did not record the processes it followed and therefore did not capture the basis of the decisions reached. When asked about the details of the assessment, NASA officials indicated that the informal assessment was based primarily on expertise within headquarters and that they did not formally document the decision paths. While we recognize that the extensive experience of its senior managers is an important element in evaluating alternatives, the existence of any formal assessment of alternatives covering the entire range of missions for space station support cannot be verified, and the agency's position that the space shuttle is the best option cannot be validated. NASA received 26 responses to a September 2004 request for information that asked for, among other things, input from the commercial space industry regarding capabilities and market interest in providing cargo launch services to, and the ability to return items from, the space station.
This request for information had characteristics similar to those of the AAS study, whose objective was also to explore the development of alternative cargo "upmass" and "downmass" support for the space station. The responses are being evaluated, and NASA plans to seek more detailed information from the commercial launch industry for additional study or development work in June 2005. According to NASA officials, the responses from industry with regard to space station logistics support have been very promising. The officials indicated that it might be possible to have a developed and certified capability to provide commercial cargo launch service to the space station prior to space shuttle retirement late this decade, rather than only after its retirement. However, we were told these services would not eliminate any of the scheduled space shuttle flights, but only augment the capabilities of the space shuttle. While these responses are being evaluated and knowledge is being gathered, NASA is also reviewing the space station research requirements and re-examining the planned manifest for the 28 space shuttle flights in an attempt to better align those missions with the Vision for Space Exploration. According to NASA's fiscal year 2006 budget submission, the agency is examining configurations of the space station that meet the needs of the new vision and the international partners with as few space shuttle flights as necessary. By combining the information gathered from commercial industry with a better definition of space station requirements, NASA officials agreed, there is an opportunity to perform a more comprehensive assessment of alternatives, especially for the logistics missions late this decade. According to a recent revision of NASA's internal guidance, the most important aspect of formulating a program technical approach is conducting a thorough analysis of alternatives. NASA guidance defines an analysis of alternatives as a formal method that compares alternatives by estimating their ability to satisfy mission requirements through an effectiveness analysis and by estimating their life cycle costs through cost analysis. The results of these two analyses are used together to produce a cost-effectiveness comparison that allows decision makers to assess cost and effectiveness simultaneously. An analysis of alternatives broadly examines multiple elements of program alternatives (including technical performance, risk, life cycle cost, and programmatic aspects) and is typically an important part of formulation studies. While we recognize that the extensive experience of its senior managers is an important element in evaluating alternatives, NASA did not have the full breadth of knowledge necessary to perform a comprehensive assessment of alternative launch vehicles that would enable it to conclude the space shuttle was the best option to support space station operations. However, NASA's recent request for information from industry offers the agency an opportunity to enhance its knowledge of alternatives to the space shuttle for providing logistics support for the space station and to explore the use of alternatives to the existing space shuttle manifest currently under review.
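To make the cost-effectiveness comparison at the core of such an analysis concrete, the following is a minimal sketch in Python. The alternatives, effectiveness scores, and life cycle costs are invented placeholders, not figures from NASA's studies or the AAS contracts.

```python
# Minimal sketch of the cost-effectiveness comparison that an analysis of
# alternatives produces, per the NASA guidance described above. All names
# and numbers are invented placeholders, not results from NASA's studies.

alternatives = {
    # name: (mission-effectiveness score on a 0-1 scale, life cycle cost in $B)
    "space shuttle logistics flights": (0.95, 10.0),
    "commercial expendable launcher": (0.70, 4.0),
    "international partner vehicles": (0.50, 3.0),
}

# Rank alternatives by effectiveness delivered per billion dollars so that
# decision makers can weigh cost and effectiveness simultaneously.
ranked = sorted(alternatives.items(),
                key=lambda item: item[1][0] / item[1][1], reverse=True)
for name, (effectiveness, cost) in ranked:
    print(f"{name}: {effectiveness / cost:.3f} effectiveness per $B "
          f"(effectiveness {effectiveness}, life cycle cost ${cost}B)")
```

In practice the effectiveness analysis and the cost analysis would each be substantial studies in their own right; the point of the comparison is simply that both sides are estimated and weighed together rather than asserted informally.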
Although alternate vehicles would not be available for missions to the space station until later this decade and would be difficult to use for assembly missions, several of the space shuttle's final flights are planned logistics support missions that might be conducted using alternative launch vehicles. By completing a comprehensive analysis, NASA could also identify the feasibility and risks associated with an alternative means of providing logistics support to the space station in case delays require extending the planned 2010 retirement date. Furthermore, a comprehensive and thoroughly documented analysis of launch requirements and launch alternatives can provide NASA with comparative cost information and afford the agency the opportunity to use its resources more effectively and efficiently. This is particularly important now, since the space station and space shuttle programs will be competing for limited resources. To better position the agency to determine the best available option for providing logistics support to the space station, we recommend the NASA Administrator take the following three steps: Direct current efforts to explore other space launch options to use a comprehensive and fully documented assessment of alternatives that matches mission requirements, and the associated manifest, with the launch vehicles expected to be available; As part of this assessment, (a) determine the development and operation costs associated with these potential alternatives and (b) perform a detailed analysis of these alternatives to determine the best option for delivering the logistics cargo required for space station operations prior to and after space shuttle retirement; and Ensure this assessment is completed before any NASA investments are made for commercial space transportation services to the space station. In written comments on a draft of this report, NASA concurred with our recommendations and stated that the agency seeks to fully explore space launch options for assuring access to the space station in conjunction with its retirement planning for the space shuttle. NASA plans to document its acquisition strategy through a NASA Headquarters Acquisition Strategy Meeting prior to release of a request for proposals for commercial space station cargo services later this summer. In addition, NASA said it will evaluate the cost and capabilities of the proposed transportation system to meet space station and agency needs, as well as the needs of its partners. NASA also said that its acquisition strategy will be consistent with space station requirements, international partner agreements, and available funding. We are encouraged that NASA has taken steps to pursue a deliberate assessment of an alternative cargo transportation system. However, NASA should not limit documentation of this effort to the acquisition strategy meeting; it should also document the decision paths leading up to that event and throughout the evaluation of the transportation systems proposed by contractors responding to NASA's request for proposals. This approach should identify the decision makers involved and provide a fully documented rationale for the acquisition process as NASA analyzes all alternatives to determine the best options for delivering the logistics cargo for space station operations. NASA's comments are reprinted in appendix II. As agreed, unless you publicly announce the contents earlier, we plan no further distribution of this report until 15 days from its issue date.
At that time, we will send copies to the NASA Administrator; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix IV. To determine whether NASA conducted a detailed assessment of alternatives to the space shuttle for completing assembly and providing logistics support to the International Space Station, we: Obtained and analyzed pertinent NASA documents and briefing slides related to the International Space Station, space shuttles, and other launch alternatives, such as expendable launch vehicles, including European Space Agency Segment Specifications for the Automated Transfer Vehicle; the Specification for the Japanese H-II Transfer Vehicle; International Space Station Payload Integration and Assembly Sequence specifications; Evolved Expendable Launch Vehicle configurations; space station and space shuttle status, history, and cost briefings; Return to Flight Status Briefings; and Alternate Access to Station briefings. Reviewed previous GAO reports on NASA, the Space Shuttle Program, the International Space Station Program, and best practices in many areas and multiple agencies; we also reviewed reports from the Congressional Budget Office, the Congressional Research Service, the Office of Management and Budget, and the Planetary Society, as well as reports on the Russian space program. Interviewed officials responsible for managing the programs within the Space Operations Mission Directorate at NASA headquarters, as well as program managers at Johnson Space Center, Texas. We also interviewed NASA officials at Kennedy Space Center, Florida, who are responsible for processing space station payloads and integrating those payloads with the launch vehicles. We interviewed contractors at Boeing Launch Services and Lockheed Martin Space Systems and reviewed pertinent documentation related to expendable launch vehicles for space station assembly and logistics support. We also reviewed NASA's request for information related to commercial industry interest in providing that capability and NASA's plans for assessing responses to the request for information and follow-on activities. For this, we interviewed NASA officials within the Space Operations Mission Directorate and at Johnson Space Center, Texas. We also interviewed Air Force officials from the Evolved Expendable Launch Vehicle Program, and we received, reviewed, and analyzed follow-up written and oral comments from several individuals at these locations and from NASA's Science Directorate. To accomplish our work, we visited and interviewed officials at NASA Headquarters, Washington, D.C.; Johnson Space Center, Texas; and Kennedy Space Center, Florida. These centers were chosen because they maintain primary responsibility for conducting space shuttle and space station operations on a day-to-day basis. The offices we met with at headquarters and each of these centers included space station program officials, space shuttle program officials, the NASA Launch Services Office, the International Space Station Payload Processing Directorate at Kennedy Space Center, and the Space Shuttle Program Integration Office at Kennedy Space Center.
We also visited Boeing Launch Services, Inc., in Huntington Beach, California, and at Cape Canaveral Air Force Station, Florida; Boeing Commercial Space Systems in Research Park, Huntsville, Alabama; and Lockheed Martin Space Systems Company in Littleton, Colorado, and at Cape Canaveral Air Force Station, Florida. We conducted our work from August 2004 through April 2005 in accordance with generally accepted government auditing standards. According to NASA officials involved in the 2004 assessment, accommodating a transition to other launch vehicles would create significant challenges that drive risk, schedule, and costs. NASA officials stated the space station elements were designed and built to take advantage of the more benign launch environment in the space shuttle's cargo bay, to be removed and repositioned by the space shuttle's robotic arm, and then to be connected together by the space shuttle crew during space walk activities. The following outlines the major challenges NASA identified: There would be a need to develop a new process to assemble the space station using only the space station crew and without the benefit of the space shuttle remote manipulator arm. Using another launch vehicle would require the redesign and retesting of space station elements already built, due to the change in launch environment; NASA officials stated the space shuttle launch environment, with respect to vibration and g-force exerted on the payload, cannot be duplicated on an expendable launch vehicle. A new, unique transfer vehicle would need to be developed in order to rendezvous and dock assembly elements with the space station. For logistics cargo support, two transfer vehicles are currently being developed for logistics missions to support space station operations: the European Automated Transfer Vehicle (ATV) and the Japanese H-II Transfer Vehicle (HTV). These vehicles, much like the Russian Progress vehicle, have a limited cargo capability when compared with the space shuttle. The ATV, scheduled to be available for missions to the space station in 2006, is being designed to rendezvous and dock with the space station via the Russian Service Module. The HTV is scheduled to be available in 2008 and will fly within the proximity of the space station to be caught by the space station's robotic arm before being berthed to the space station. A carrier that replicates the space shuttle's attach points would need to be developed to go inside the new transfer vehicle. According to these officials, in order to meet volume requirements, the payload fairings would have to be modified from the current 5-meter version to a 6-meter version to accommodate the larger-diameter payloads delivered to the space station during assembly missions. Staff making key contributions to this report were Jim Morrison, James Beard, Rick Cederholm, Karen Sloan, and T.J. Thomson.
The National Aeronautics and Space Administration's (NASA) space shuttle fleet has been key to International Space Station operations. Since the grounding of the fleet in February 2003, Russia has provided logistics support. However, due to the limited payload capacity of the Russian space vehicles, on-orbit assembly of the space station has stopped. In May 2004 and in February 2005, NASA testified before the Congress that it had assessed using launch vehicles other than the space shuttle for space station operations. NASA concluded that using alternatives would be challenging, would result in long program delays, and would ultimately cost more than returning the space shuttle safely to flight. Yet uncertainties remain about when the space shuttle will return to flight, and questions have been raised about NASA's assessment of alternatives. GAO was asked to determine whether NASA's assessment was sufficient to conclude that the space shuttle is the best option for assembling and providing logistics support to the space station. NASA's 2004 assessment identified significant challenges associated with using alternative launch vehicles for space station assembly and operation. According to previous studies and our discussions with industry representatives, these challenges would likely preclude using alternative vehicles for assembly missions. However, NASA's assessment was insufficient to conclude that the shuttle was the best option for logistics support missions prior to the proposed retirement of the space shuttle in 2010. NASA relied primarily on headquarters expertise to conduct the informal assessment, and while we recognize that the extensive experience of its senior managers is an important element in evaluating alternatives, NASA officials did not document the proceedings and decisions reached in the assessment. As a result, the existence of this assessment of alternatives cannot be verified, nor can the conclusions be validated. NASA is currently evaluating responses to a September 2004 request for information from commercial space transportation companies that could provide launch services to support space station operations, following retirement of the shuttle in 2010, until the station's planned retirement in 2016. NASA officials indicated that a commercial launch capability to support space station operations is possible prior to the proposed shuttle retirement in 2010, but stated that this capability would not eliminate any of the scheduled space shuttle flights. NASA is also re-examining its requirements for the type of scientific research to be conducted on the space station as well as the manifest requirements of the space shuttle. By combining the information gathered from commercial industry with a better definition of space station and shuttle requirements, NASA officials agree, there is an opportunity to perform a more comprehensive assessment of alternatives, especially for logistics missions late this decade.
The commercial motor carrier industry represents a range of businesses, including private and for-hire freight transportation, passenger carriers, and specialized transporters of hazardous materials. As of 2012, FMCSA estimated that there were more than 531,000 active motor carriers, a number that fluctuates over time due to the approximately 75,000 new applicants that enter the industry each year combined with the thousands of carriers annually leaving the market. Most carriers we assessed for this report that operate in the United States are small firms; 93 percent of carriers own or operate 20 or fewer motor vehicles. Nonetheless, a large percentage of vehicles on the road are operated by large carriers: approximately 270 carriers have more than 1,000 vehicles each and account for about 29 percent of all vehicles that FMCSA oversees. FMCSA is responsible for overseeing this large and diverse industry. FMCSA establishes safety standards for interstate motor carriers as well as intrastate hazardous material carriers operating in the United States. To enforce compliance with these standards, FMCSA partners with state agencies to perform roadside inspections of vehicles and investigations of carriers. In fiscal year 2012, FMCSA had a budget of approximately $550 million and more than 1,000 staff members located at headquarters, four regional service centers, and 52 division offices. In 2008, FMCSA launched an operational model test of CSA in four states and began implementing the CSA program nationwide in 2010. CSA is intended to improve safety beyond the prior SafeStat program by identifying safety deficiencies through better use of roadside inspection data, assessing the safety fitness of more motor carriers and drivers, and using less resource-intensive interventions to improve investigative and enforcement actions. From fiscal year 2007 through fiscal year 2013, FMCSA obligated $59 million to its CSA program, including CSA development and technical support, information technology upgrades, and training. For fiscal year 2014, FMCSA requested $7.5 million for CSA. CSA has three main components: Safety Measurement System. SMS uses data obtained from federal or state roadside inspections and from crash investigations to identify the highest risk carriers. SMS was designed to improve on SafeStat by incorporating all of the safety-related violations recorded during roadside inspections. Carriers potentially receive an SMS score in seven categories based on this information. Intervention. A set of enforcement tools, such as warning letters, additional investigations, or fines, is used to encourage the highest risk carriers to correct safety deficiencies or to place carriers out of service. Safety Fitness Determination Rule. This future rulemaking will amend regulations to allow a determination—based in part on some of the same information used to calculate SMS—as to whether a motor carrier is fit to operate on the nation's roads. SMS, the measurement system component of CSA, uses the data collected from roadside inspections and crash reports to quantify a carrier's safety performance relative to other carriers. Specific carrier violations recorded during roadside inspections are assigned to one of six Behavioral Analysis and Safety Improvement Categories (BASIC). According to FMCSA, the BASICs were developed under the premise that motor carrier crashes can be traced to the behavior of motor carriers and their drivers.
A seventh category, called the Crash Indicator, measures a carrier's crash involvement history (see table 1). Each SMS score is designed to be a quantitative determination of a carrier's safety performance. For each of the approximately 800 violations that fall under the various BASICs, FMCSA assigns a severity weight that is meant to reflect the violation's association with crash occurrence and crash consequence when compared with other violations within the same BASIC. For example, reckless driving violations, categorized in the Unsafe Driving BASIC, are assigned a severity weight of 10 out of a possible 10 because FMCSA determined that these violations have a stronger relationship to safety risk than some other types of violations. Unlawfully parking, by comparison, is also categorized in the Unsafe Driving BASIC but is assigned a severity weight of 1 out of 10. FMCSA calculates SMS scores for carriers every month through a process that has three main steps, each of which is made up of several calculations. Step 1: Calculating violation rates. For each BASIC, FMCSA sums a carrier's severity-weighted violations and divides the total by a measure of the carrier's exposure: depending on the BASIC, either the carrier's number of relevant inspections or another calculation, the number of vehicles a carrier operates adjusted by the number of vehicle miles. Relevant inspections are either a driver inspection, in which the inspection focuses on driver-related requirements, such as the driver's record of duty or medical certificate, or a vehicle inspection, which focuses on the condition of the motor vehicle. Driver inspections are the relevant inspections for the Unsafe Driving, Hours-of-Service Compliance, Driver Fitness, and Controlled Substances and Alcohol BASICs. Vehicle inspections are the relevant inspections for the Vehicle Maintenance BASIC. For the Hazardous Materials BASIC, carriers that transport placardable quantities of hazardous materials are also subject to vehicle inspections as the relevant inspections. Throughout the report, we will refer to relevant inspections simply as inspections. FMCSA accounts for exposure in order to make the scores comparable across carriers. This approach has tradeoffs; while carriers can be compared without penalizing some for having had more inspections or road activity, exposure itself can be considered an element of risk. All else being equal, carriers with more road activity are involved in more crashes and potentially pose more risk to safety. Step 2: Data sufficiency. Depending on the BASIC, carriers generally receive SMS scores if they meet minimum thresholds of exposure (i.e., number of vehicles or inspections) or a minimum number of inspections with violations (i.e., a "critical mass"). For purposes of display on FMCSA's public website and of identifying the highest risk carriers for directing enforcement resources, FMCSA does not include scores for carriers that do not meet a so-called critical mass of violations. For each BASIC, this typically requires a minimum number of inspections that include violations in that BASIC, a violation in that BASIC in the last 12 months, and, for some BASICs, a violation during the most recent inspection. Step 3: Dividing carriers into peer groups. After calculating violation rates, FMCSA assigns carriers it determines have sufficient exposure to peer groups with similar levels of on-road activity, or what the agency refers to as safety event groups. According to FMCSA, safety event groups are designed to account for the inherent greater variability in violation rates based on limited levels of exposure and the stronger level of confidence in violation rates based on carriers with higher exposure.
FMCSA assigns carriers to safety event groups based on the number of inspections, the number of inspections with violations, or the number of crashes the carriers have accrued in the previous 2 years. Within each safety event group, FMCSA calculates SMS scores by ranking carriers' violation rates (obtained in step 1 above) and assigning each carrier a percentile score ranging from 0 to 100, where 100 indicates the highest violation rate and the highest estimated risk for future crashes. FMCSA displays scores for five of the BASICs on its public website. Once SMS scores are calculated, FMCSA begins a Safety Evaluation that uses SMS scores to identify carriers with safety performance problems requiring intervention. FMCSA has defined a fixed percentile threshold for each BASIC that identifies those carriers that pose the greatest safety risk. (For example, the threshold for the Unsafe Driving BASIC is 65 for most carriers.) These carriers are then subject to one or more FMCSA actions from a suite of intervention tools that were expanded as part of CSA. Tools such as warning letters and on- and off-site investigations allow FMCSA and state investigators to focus on specific safety behaviors. FMCSA can also use enforcement strategies such as fines or placing a carrier out of service. The range of available enforcement options gives FMCSA investigators flexibility to apply interventions commensurate with a carrier's safety performance (see table 2). Seven of the nine interventions are currently implemented nationwide. Prior to CSA, FMCSA investigators' only tool was a labor-intensive, comprehensive on-site investigation. With the additional set of interventions, FMCSA aims to reach more carriers with its existing resources. According to FMCSA and state safety officials, an investigation or other intervention can also be initiated based on the results of a crash investigation, a complaint against a carrier, or a consistent pattern of unsafe behavior by a carrier. FMCSA further designates some carriers that exceed multiple BASIC thresholds as "high risk." According to FMCSA, many of these carriers are assigned a Safety Investigator, who must complete a comprehensive review within a year regardless of any changes in the carrier's score. A carrier is considered high risk if it either (1) has an SMS score of 85 or higher in the Unsafe Driving BASIC, the Hours-of-Service Compliance BASIC, or the Crash Indicator, along with one other BASIC at or above the intervention threshold, or (2) exceeds the intervention threshold for any four or more BASICs. Currently, FMCSA can declare a carrier unfit to operate only upon a final unsatisfactory rating following an on-site investigation. In addition, FMCSA can order a carrier to cease interstate operations for several reasons, including: receiving an "unsatisfactory" safety rating during an on-site comprehensive investigation and failing to improve the rating within 45 or 60 days; failing to pay a fine after 90 days; failing to meet the standards required for a New Entrant Audit; or being determined by FMCSA to be an imminent hazard. According to FMCSA, during fiscal year 2012, the agency issued 855 out-of-service orders due to an unsatisfactory rating, 1,557 for failing to pay a fine, and 47 because a carrier was determined to be an imminent hazard.
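To make the scoring mechanics above concrete, here is a heavily simplified Python sketch of steps 1 and 3 and the high-risk designation rule. The severity weights, carrier records, and the single 65 threshold are illustrative assumptions; the real SMS also applies time weighting, per-BASIC thresholds, and the data sufficiency screens omitted here.

```python
# Simplified sketch of the SMS calculation described above (steps 1 and 3)
# and the high-risk rule. All numbers and carrier records are illustrative;
# the actual SMS also applies time weighting, per-BASIC thresholds, and
# data sufficiency screens omitted here.
from bisect import bisect_right

def violation_rate(weighted_violations: float, exposure: float) -> float:
    """Step 1: severity-weighted violations divided by exposure
    (relevant inspections, or vehicles adjusted by miles)."""
    return weighted_violations / exposure

def percentile_scores(rates: dict[str, float]) -> dict[str, float]:
    """Step 3: within a safety event group, rank carriers by violation
    rate; 100 means the highest (worst) rate in the group."""
    ordered = sorted(rates.values())
    return {carrier: 100.0 * bisect_right(ordered, rate) / len(ordered)
            for carrier, rate in rates.items()}

def is_high_risk(scores: dict[str, float], threshold: float = 65.0) -> bool:
    """High-risk rule: a score of 85+ in Unsafe Driving, Hours-of-Service
    Compliance, or the Crash Indicator plus at least one other BASIC at or
    above the threshold, or four or more BASICs at or above the threshold."""
    acute = ("Unsafe Driving", "Hours-of-Service Compliance", "Crash Indicator")
    has_acute = any(scores.get(b, 0.0) >= 85.0 for b in acute)
    above = [b for b, s in scores.items() if s >= threshold]
    return (has_acute and len(above) >= 2) or len(above) >= 4

# One safety event group for a single BASIC: weighted violations / inspections.
group = {"carrier A": violation_rate(12.0, 10),   # rate 1.2
         "carrier B": violation_rate(3.0, 10),    # rate 0.3
         "carrier C": violation_rate(30.0, 100)}  # rate 0.3
print(percentile_scores(group))  # carrier A ranks worst, at 100.0
print(is_high_risk({"Unsafe Driving": 88.0, "Vehicle Maintenance": 70.0}))  # True
```

Note how the percentile step makes every score relative: a carrier's score depends not only on its own violation rate but on the rates of the other carriers it is grouped with, which is why imprecise rates for some carriers can distort others' scores, as discussed later in this report.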
FMCSA has indicated its plans to propose using the same performance data that inform SMS scores to determine whether a carrier is fit to continue to operate. According to FMCSA, the Safety Fitness Determination rulemaking would seek to allow FMCSA to determine that a motor carrier is not fit to operate based on the carrier's performance in five of the BASICs, an investigation, or a combination of roadside and investigative information. FMCSA proposes doing this through a public rulemaking process; it currently estimates that it will issue a proposed rule in May 2014. CSA has been successful in raising the profile of safety in the motor carrier industry and providing FMCSA with more tools to increase interventions with carriers. However, FMCSA faces two major challenges in reliably assessing safety risk for the majority of carriers in the industry and prioritizing the riskiest carriers for intervention. First, we found that the majority of regulations used to calculate SMS scores are not violated often enough to strongly associate them with crash risk for individual carriers. Second, for most carriers, FMCSA lacks sufficient safety performance information to ensure that it can reliably compare them with other carriers. FMCSA mitigates this issue by, among other things, establishing data sufficiency standards. However, we found that these standards are set too low and that by strengthening them, SMS would better identify risky carriers and better prioritize intervention resources to more effectively reduce crashes. Setting a data sufficiency standard involves tradeoffs between scoring more carriers and ensuring that the scores calculated are reliable for the purposes for which they are used. CSA has helped FMCSA reach more carriers and provided benefits to a range of stakeholders. Since CSA was implemented nationwide in 2010, FMCSA has intervened with more carriers annually than under SafeStat. From fiscal year 2007 to fiscal year 2012, FMCSA increased its number of annual interventions from about 16,000 to about 44,000, largely by sending warning letters to carriers deemed to be above the intervention threshold in one or more BASICs (see table 3). FMCSA and state partners also took advantage of new ways to investigate carriers, such as off-site investigations and on-site focused investigations, completing 23 percent more investigations in fiscal year 2012 than in fiscal year 2007, when only compliance reviews were used. In addition, CSA provides data for law enforcement and industry stakeholders about the safety record of individual carriers. For example, as part of the CSA program, FMCSA publicly provides historical individual carrier data on inspections, violations, crashes, and investigations on its website. According to law enforcement and industry stakeholders we spoke with, CSA organizes violation information for law enforcement, and carrier data related to the BASICs help guide the work of state inspectors during inspections. Law enforcement officials and industry stakeholders generally supported the structure of the CSA program. These stakeholders told us that CSA's greater reach and provision of data have helped raise the profile of safety issues across the industry. According to industry stakeholders, carriers are now more engaged and more frequently consult with law enforcement for safety briefings. In Colorado, law enforcement officials told us that CSA has improved awareness and engagement within the motor carrier industry there.
A state industry representative told us that CSA has improved safety because carriers are in a competitive business and can feel pressure to improve safety scores to gain an advantage over the competition. The relationship between violations of most regulations FMCSA included in the SMS methodology and crash risk is unclear, potentially limiting the effectiveness of SMS in identifying carriers that are likely to crash. According to FMCSA, SMS was designed to improve on its previous approach to identifying unsafe motor carriers by incorporating into the BASICs all of the safety-related violations recorded during roadside inspections. For SMS to be effective in identifying carriers that crash, the violation information used to calculate SMS scores should have a relationship with crash risk: carriers that violate a given regulation more often should have a higher chance of a crash, or a higher crash rate, than carriers that violate the regulation less often. However, we found that FMCSA's safety data do not allow for validation of whether many regulatory violations are associated with higher crash risk for individual carriers. Our analysis found that most of the regulations used in SMS were violated too infrequently over a 2-year period to reliably assess whether they were accurate predictors of an individual carrier's likelihood to crash in the future. We found that 593 of the approximately 750 regulations we examined were violated by less than 1 percent of carriers. Of the remaining regulations with sufficient violation data, we found 13 regulations for which violations consistently had some association with crash risk in at least half the tests we performed, and only two violations had sufficient data to consistently establish a substantial and statistically reliable relationship with crash risk across all of our tests. (For more information, see app. V.) FMCSA attempted to compensate for the infrequency of violations by, among other things, evaluating aggregate data to establish a broader relationship between a group of violations and crash risk. However, evaluations completed by outside groups have found weaker relationships between SMS scores and the crash risk of individual carriers than FMCSA's evaluations of aggregate data (for more information, see app. IV). SMS is intended to provide a safety measure for individual carriers, and FMCSA has not demonstrated relationships between groups of violations and the risk that an individual motor carrier will crash. Therefore, this approach of aggregating data does not eliminate the limitations we identified. Most carriers lack sufficient safety performance information to ensure that FMCSA can reliably compare them with other carriers. As mentioned, SMS is designed to compare violation rates across carriers for the purposes of prioritizing intervention resources. These violation rates are calculated by summing a carrier's weighted violations relative to the carrier's exposure to committing violations, which for the majority of the industry is very low. About two-thirds of the carriers we evaluated operate fewer than four vehicles, and more than 93 percent operate fewer than 20 vehicles. Moreover, many of these carriers' vehicles are inspected infrequently. (See table 14 in app. VI.) Generally, statisticians have shown that estimations of any sort of rate—such as the violation rates that are the basis for SMS scores—become more reliable when they are calculated from more observations.
In other words, as observations increase, there is less variation and thus more confidence in the precision of the estimated rate. Given that SMS calculates violation rates for carriers having a very low exposure to violations, such as carriers operating one or two vehicles or subject to only a few inspections, many of the SMS scores based on these violation rates are likely to be imprecise. Carriers with few inspections or vehicles will potentially have estimated violation rates that are artificially high or low and thus not sufficiently precise for comparison across carriers. Further, because SMS scores are calculated by ranking carriers in relation to one another, imprecise rate estimates for some carriers can cause other carriers' SMS scores to be higher or lower than they would be if they were ranked against only carriers with more reliable violation rates. This creates the likelihood that many SMS scores do not represent an accurate or precise safety assessment for a carrier. As a result, there is less confidence that SMS scores are effectively determining which carriers are riskier than others. (App. II provides a more technical discussion of these issues.) For the five SMS BASICs for which FMCSA uses relevant inspections as a measure of exposure—Hours-of-Service Compliance, Driver Fitness, Controlled Substances and Alcohol, Vehicle Maintenance, and Hazardous Materials—estimated violation rates can change by a large amount for carriers with few inspections even when the number of their violations changes by a small amount. For example, for a carrier with 5 inspections, a single additional violation could increase that carrier's violation rate 20 times more than it would for a carrier with 100 inspections (setting aside severity weighting, one added violation raises the rate by 1/5 = 0.20 in the first case but by only 1/100 = 0.01 in the second). This sensitivity can result in artificially high or low estimated violation rates that are potentially imprecise for carriers with few inspections. As an example, our analysis of FMCSA's method shows that among carriers for which we calculated a violation rate for the Hours-of-Service Compliance BASIC, violation rate estimates are more variable for carriers with fewer inspections. As shown in figure 1, violation rates tend to vary by a larger amount across carriers with few inspections than across carriers with more inspections. As a consequence, a high estimated violation rate for a carrier with few inspections may reflect greater safety risk, an imprecise estimate, or both. Further, comparisons among carriers are meaningful only to the extent they involve carriers with sufficient inspections and thus more precise estimated violation rates. Similar to carriers with few inspections, carriers with few vehicles are also subject to potentially large changes in their estimated violation rates, which can affect their SMS scores. For the Unsafe Driving BASIC and the Crash Indicator, FMCSA measures exposure using a hybrid approach that considers a carrier's number of vehicles and its vehicle miles traveled, when the latter information is available. Figure 2 shows that among carriers for which we calculated a violation rate using FMCSA's method for the Unsafe Driving BASIC, carriers that operate fewer vehicles (for example, fewer than 5) experience a greater range in violation rates per vehicle than carriers operating more vehicles (for example, more than 100). (For similar results on other BASICs, see figures 10 to 16 in app. VI.)
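The pattern in figure 1 can be reproduced with a small simulation. The sketch below assumes, purely for demonstration, that every carrier has the same true probability of a violation per inspection; the numbers are invented and do not come from FMCSA data.

```python
# Illustrative simulation of why rates estimated from few inspections vary so
# widely (the pattern in fig. 1). Assumes every carrier has the same true
# violation probability per inspection; all numbers are invented.
import random

random.seed(0)
TRUE_VIOLATION_PROB = 0.3  # assumed chance an inspection records a violation
N_CARRIERS = 10_000        # simulated carriers per inspection count

for n_inspections in (5, 20, 100):
    estimates = []
    for _ in range(N_CARRIERS):
        violations = sum(random.random() < TRUE_VIOLATION_PROB
                         for _ in range(n_inspections))
        estimates.append(violations / n_inspections)  # estimated violation rate
    print(f"{n_inspections:>3} inspections: estimated rates range from "
          f"{min(estimates):.2f} to {max(estimates):.2f}")
```

Even though every simulated carrier is equally risky, the estimated rates for 5-inspection carriers span nearly the full range from 0 to 1, while the spread narrows sharply as inspections increase, consistent with our finding that rate estimates generally become more precise around 10 to 20 observations.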
Researchers have raised additional concerns about the quality and accuracy of the data FMCSA uses to calculate SMS scores that could potentially compound the problems with the precision of violation rate estimates. These issues further limit the precision of carriers' estimated violation rates and, consequently, their SMS scores. For example: The frequency of an individual carrier's inspections varies depending on where the carrier operates, and states vary in their inspection and enforcement practices; some studies have shown that inspectors or law enforcement officers in some states cite vehicles for certain violations more frequently than in other states. Delays in reporting crash data to FMCSA, as well as missing or inaccurate data, can affect a carrier's Crash Indicator SMS score, and these delays can vary by state. Data elements used to calculate violation rates for the Unsafe Driving BASIC and the Crash Indicator are based on information that is self-reported by the carrier, and inaccurate, missing, or misleading reports by a carrier could directly influence its SMS scores. Additionally, among the carriers we evaluated, more than 50 percent did not report their vehicle miles traveled to FMCSA. FMCSA acknowledges that violation rates for carriers with low exposure can be less precise, and it attempts to address this limitation in two main ways, but the methods incorporated do not solve the underlying problems. As a result, SMS scores for these carriers are less reliable as relative safety performance indicators, which may limit FMCSA's ability to more effectively prioritize carriers for intervention. FMCSA established minimum data sufficiency standards to eliminate carriers that lack what it has determined to be a minimum number of inspections, inspections with violations, or crashes to produce a reliable SMS score. For example, in the Hours-of-Service Compliance BASIC, FMCSA does not calculate SMS scores for a carrier unless it has at least three inspections and at least one violation within the preceding 2 years. In addition, as previously mentioned, FMCSA applies another data sufficiency standard requiring a carrier to have a "critical mass" of inspections with violations in order for an SMS score to be a basis for potential intervention or to be publicly displayed. While this approach helps address the problems for carriers with low exposure, it is not sufficient to ensure that SMS scores effectively prioritize the riskiest carriers for intervention. For most BASICs, we found FMCSA's data sufficiency standards too low to ensure reliable comparisons across carriers. In other words, many carriers' violation rates are based on an insufficient number of observations to be comparable to other carriers in calculating an accurate safety score. Our analysis shows that rate estimates generally become more precise around 10 to 20 observations, higher than the numbers that FMCSA uses for data sufficiency standards. However, the determination of the exact data sufficiency standard needs to be based on a quantitative measure of confidence to fully consider how precise the scores need to be for the purposes for which the scores are used. (For more information, see app. II.)
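As a minimal sketch of how such a screen works, the following compares the Hours-of-Service Compliance standard described above (at least three inspections and at least one violation in the preceding 2 years) with a stricter hypothetical 20-inspection screen reflecting where our analysis found rate estimates become more precise. The carrier records are invented examples.

```python
# Minimal sketch of two data sufficiency screens for the Hours-of-Service
# Compliance BASIC. The first mirrors the standard described above; the
# second is a hypothetical, stricter 20-inspection screen. Carrier records
# here are invented examples.

def meets_fmcsa_standard(inspections: int, violations: int) -> bool:
    """Described standard: at least 3 inspections and at least 1 violation."""
    return inspections >= 3 and violations >= 1

def meets_strict_standard(inspections: int, violations: int) -> bool:
    """Hypothetical stricter screen: at least 20 inspections."""
    return inspections >= 20 and violations >= 1

carriers = [("carrier A", 4, 2), ("carrier B", 25, 5), ("carrier C", 2, 1)]
for name, inspections, violations in carriers:
    print(name,
          "| FMCSA screen:", meets_fmcsa_standard(inspections, violations),
          "| strict screen:", meets_strict_standard(inspections, violations))
```

The tradeoff is visible even in this toy example: the stricter screen drops carrier A from scoring entirely, but the rate it computes for carrier B rests on many more observations, which is the tradeoff the analysis later in this report quantifies.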
FMCSA groups the carriers meeting its data sufficiency standards for each BASIC into safety event groups in order to, according to FMCSA, "account for the inherent greater variability in violation rates based on limited levels of exposure and the stronger level of confidence in violation rates based on higher exposure" (CSA, CSMS Methodology, Version 3.0.1, revised August 2013). The groups are based on inspections or inspections with violations, depending on the BASIC, or on crashes for the Crash Indicator. For example, the first safety event group in the Hours-of-Service Compliance BASIC includes carriers that received from 3 to 10 inspections; the second group includes carriers that received from 11 to 20 inspections, and so forth. Within each safety event group, FMCSA rank orders carriers by violation rate and assigns a percentile as an SMS score. However, we found that carriers with less exposure exceed FMCSA's intervention thresholds at disproportionately higher rates than carriers with more exposure. For example, FMCSA's Hours-of-Service Compliance BASIC has five safety event groups. The group of carriers with the fewest inspections in each safety event group tends to have a higher percentage of carriers identified as above the intervention threshold than the group of carriers with a greater number of inspections (see fig. 3). This suggests that FMCSA's methodology is not adequately accounting for differences in exposure, as it is intended to do, but rather is systematically assigning higher scores to carriers with fewer inspections. (See figs. 17 to 25 in app. VI for other BASICs.) FMCSA's method of categorizing carriers into safety event groups for the remaining BASICs also demonstrates how imprecision disproportionately affects small carriers. For the Unsafe Driving and Controlled Substances and Alcohol BASICs, FMCSA forms safety event groups based on the number of inspections with violations. Similarly, for the Crash Indicator, safety event groups are based on a carrier's number of crashes. By using infractions or crashes to categorize carriers, FMCSA is not addressing its stated intent of having safety event groups account for differences in variability due to exposure. As a result, FMCSA derives SMS scores for the Unsafe Driving BASIC and the Crash Indicator by directly comparing small carriers with greater variability in their violation rates—including many carriers with a violation rate based on one vehicle—to larger carriers for which violation rates can be calculated with greater confidence. We found that among carriers that received an SMS score in Unsafe Driving, carriers with fewer than 20 vehicles are more than 3 times as likely to be identified as above the intervention threshold as carriers with 20 or more vehicles (see fig. 4). Of the carriers operating one vehicle, nearly all were identified as above the intervention threshold. (See figs. 26 to 32 in app. VI for other BASICs.) FMCSA contends that these results are expected because only small carriers that exceed critical mass standards receive an SMS score, and small carriers that exceed this threshold have demonstrated several occurrences of risky behavior despite their limited exposure. However, this illustrates the volatility of rates and the disproportionate effect a single violation can have given how FMCSA has structured SMS.
For example, using FMCSA's data sufficiency standards, a carrier with one vehicle (40 percent of the carriers in our analysis population have one vehicle) and two inspections with unsafe driving violations does not have sufficient information to be displayed or considered for intervention. However, a single additional violation, regardless of its severity, would likely mean that the carrier would be scored above the threshold and prioritized for intervention. A relatively small difference in the number of violations could change a carrier's status from "insufficient information" to "prioritized for intervention" with potentially no interim steps. Conversely, such a carrier will have a very difficult time improving its SMS score enough to fall below the threshold.

Our analysis shows that FMCSA could improve its ability to identify carriers at higher risk of crashing by applying a more stringent data sufficiency standard. As previously discussed, FMCSA uses SMS scores to identify carriers with safety performance problems—those above the threshold in any BASIC—for prioritization for intervention, and considers carriers with SMS scores above the intervention threshold in multiple BASICs as high risk. Overall, SMS is successful at identifying a group of high risk carriers that have a higher group crash rate than the average crash rate of all carriers that we evaluated. However, further analysis shows that a majority of these high risk carriers did not crash at all, meaning that a minority of carriers in this group were responsible for all of the group's crashes. As a result, FMCSA may devote significant intervention resources to carriers that do not pose as great a safety risk as other carriers to which FMCSA could direct these resources.

Given the issues with precision discussed above, we developed and tested an alternative to FMCSA's method that sets a single data sufficiency standard based on the relevant measure of exposure—either at least 20 inspections or at least 20 vehicles, depending on the BASIC—and eliminates the use of safety event groups. This approach is designed to illustrate how a stronger data sufficiency standard can affect the identification of higher risk carriers and is not meant to be a prescriptive design to replace current SMS methods. The result of this analysis demonstrates the effect that including carriers with low levels of exposure and highly variable violation rates can have on FMCSA's prioritization of carriers for intervention. Using this illustrative alternative, we found that FMCSA would have more reliably identified a higher percentage of carriers that actually had crashed than it would have using its existing methods. (Apps. I and VI provide more detail on this approach.) Specifically:

This illustrative alternative identified about 6,000 carriers as high risk. During the evaluation period of our analysis, these carriers' group crash rate was approximately the same as the rate for FMCSA's high risk group (about 8.3 crashes per 100 vehicles). However, a much greater percentage of the carriers (67 percent) identified as high risk using the alternative, higher data sufficiency standards crashed, and these carriers were associated with nearly twice as many crashes (see table 4).

For five out of six BASICs, the Crash Indicator, and the high-risk designation, the illustrative alternative identified a higher percentage of individual carriers above the intervention threshold that actually crashed compared with FMCSA's existing method. (See fig. 5.)
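A minimal sketch of this illustrative alternative, as we have described it, appears below; the carrier data are hypothetical, and the sketch is not a prescriptive design:

```python
import pandas as pd

# Hypothetical carriers: exposure is inspections or vehicles, per the BASIC.
carriers = pd.DataFrame({
    "exposure":       [2, 5, 19, 20, 35, 60, 120, 400],
    "violation_rate": [1.0, 0.0, 0.6, 0.30, 0.10, 0.45, 0.20, 0.15],
})

MIN_EXPOSURE = 20  # single data sufficiency standard, per the alternative

scored = carriers[carriers["exposure"] >= MIN_EXPOSURE].copy()
# No safety event groups: every scored carrier is ranked against all others,
# and carriers meeting the standard are scored even with zero violations.
scored["sms_percentile"] = scored["violation_rate"].rank(pct=True) * 100

print(scored)
print(f"carriers without scores (insufficient data): {len(carriers) - len(scored)}")
```

The single exposure-based standard trades coverage (fewer carriers scored) for comparability: all scored carriers have enough observations for their rates to be ranked against one another with reasonable confidence.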
Using both FMCSA’s method and the illustrative alternative, for most of the BASICs and the Crash Indicator the carriers identified above the intervention threshold had a higher crash rate (crashes per 100 vehicles) than those below the intervention threshold (see table 5). However, using FMCSA’s method, crash rates for the Controlled Substances and Alcohol BASIC have the opposite, negative association (3.2 crashes per 100 vehicles for carriers above threshold versus 5.2 crashes per 100 vehicles for carriers below threshold), whereas the illustrative alternative produces a positive association (4.7 crashes per 100 vehicles for carriers above threshold versus 3.8 crashes per 100 vehicles for carriers below threshold). Overall, these results raise concerns about the effectiveness of the existing SMS as a tool to help FMCSA prioritize intervention resources to most effectively reduce crashes. FMCSA’s existing SMS method successfully identified as high risk more than 2,800 carriers whose vehicles were involved in 12,624 crashes. However, FMCSA would have potentially prioritized limited resources to investigate more than 4,000 carriers that did not crash at all. Prioritizing resources to these carriers would limit FMCSA’s ability to reduce the number of overall crashes, resulting in lost opportunities to intervene with the carriers associated with many crashes. Implementing a stronger data sufficiency standard as presented involves tradeoffs between the number of carriers FMCSA can score, and the reliability of those scores. Our analysis found that by increasing the data sufficiency standards, fewer carriers would receive at least one SMS score (approximately 44,000 carriers in the illustrative alternative versus approximately 89,000 using FMCSA’s method). The carriers assigned an SMS score under the illustrative alternative accounted for 78.2 percent of all crashes during our evaluation period. FMCSA’s existing method scores carriers responsible for about 85.9 percent of all crashes (see table 6). On the other hand, by setting a higher standard for data sufficiency, the illustrative alternative focuses on carriers that have a higher level of road activity, or exposure, to more reliably calculate a rate that tracks violations and crashes over the 2-year observation period. In addition, exposure itself is a large determinant of overall risk, when defined as a combination of threat and consequence, and could be used as a factor to identify carriers that analysis suggest present a higher future crash risk. This is consistent with the results in table 4 above, which show that a larger proportion of the higher risk carriers in the illustrative alternative crashed and were associated with a larger number and proportion of crashes. Regardless of where the data sufficiency standard is set, using only SMS scores limits risk assessment for carriers that do not have sufficient performance information. Our analysis shows that using FMCSA’s existing method, about 28% of carriers have at least one SMS score, leaving approximately 72% of carriers without any SMS scores—largely due to insufficient information. The illustrative alternative scores fewer carriers—14%, leaving 86% of carriers without any SMS scores. 
However, according to an FMCSA official, there are other enforcement mechanisms to assess and place unsafe carriers out of service, including when a carrier fails to improve from an unsatisfactory safety rating during a comprehensive review, fails to pay a fine, or when FMCSA determines a carrier is an imminent hazard. Further, the FMCSA official said carriers that do not receive an SMS score can still be monitored because officials can initiate investigations and remove carriers based on complaints and other initiatives. For example, FMCSA conducts inspection strike forces targeting drivers and carriers that are unsafe in a particular respect, such as drug and alcohol safety records. These tools, used in conjunction with the performance data, including roadside inspection and crash data, could provide FMCSA with complementary means to assess and target carriers that do not otherwise have sufficient data to reliably calculate SMS scores.

The safety scores generated by SMS are used for many purposes; thus, the appropriate level of precision depends on the nature of these applications. According to FMCSA's methodology, SMS is intended to prioritize intervention resources, identify and monitor carrier safety problems, and support the safety fitness determination process. In setting a data sufficiency standard, FMCSA needs to consider how precise the scores need to be, and a score's required precision depends on the purposes for which the scores are used. FMCSA officials told us the primary purpose of SMS is to serve as a general radar screen for prioritizing interventions. However, as discussed above, due to insufficient data, SMS is not as effective as it could be for this purpose. Further, if the same safety performance data used to inform SMS scores are intended to help determine a carrier's fitness to operate, most of these same limitations will apply. According to FMCSA, the Safety Fitness Determination rulemaking would seek to allow FMCSA to determine that a motor carrier is not fit to operate based on a carrier's performance in five of the BASICs, an investigation, or a combination of roadside and investigative information. FMCSA has postponed the planned rulemaking until May 2014. However, basing a carrier's safety fitness determination on limited performance data may misrepresent the safety status of carriers, particularly those without sufficient data from which to reliably draw such a conclusion.

In addition to using SMS for internal purposes, FMCSA has also stated that SMS provides stakeholders with valuable safety information, which can "empower motor carriers and other stakeholders…to make safety-based business decisions." FMCSA includes a disclaimer with the publicly released SMS scores stating that the data are intended for agency and law enforcement purposes and that readers should not draw conclusions about a carrier's safety condition based on the SMS score, but rather on the carrier's official safety rating. Nonetheless, entities outside of FMCSA are also using SMS scores to assess and compare the safety of carriers. For example:

The Department of Defense has written SMS scores into its minimum safety criteria for selecting carriers of hazardous munitions.

FMCSA has released a mobile phone application—SaferBus—that is designed to provide safety information, including SMS scores, for consumers to use in selecting a bus company.

Multiple stakeholders have reported that entities such as insurers, freight shippers and brokers, and others use SMS scores.
Given such uses, it is important that any information about SMS scores make clear to users, including FMCSA, the purpose of the scores, their precision, and the context around how they are calculated. Stakeholders have said that there is considerable confusion in the industry about what the SMS scores mean and that the public, unlike law enforcement, may not understand the relative nature of the system and its limitations.

With the establishment of its CSA program, FMCSA has implemented a data-driven approach to identify and intervene with the highest risk motor carriers. CSA helps FMCSA reach more carriers through interventions and provides the agency, state safety authorities, and the industry with valuable information regarding carriers' performance on the road and problems detected during roadside inspections. GAO continues to believe that a data-driven, risk-based approach holds promise and can help FMCSA effectively identify carriers exhibiting compliance or safety issues—such as violations or involvement in crashes. However, assessing risk for a diverse population of motor carriers—many of which are small and inspected infrequently—presents several significant challenges for FMCSA. As a result, the precision and confidence of many SMS scores are limited, a limitation that raises questions about whether SMS is effectively identifying the carriers at highest risk for crashing in the future. As presented in this report, strengthening data sufficiency standards is one of several potential reforms that might improve the precision and confidence of SMS scores. However, strengthening data sufficiency standards involves a trade-off between assigning scores to more carriers and ensuring that those scores are reliable. Our analysis shows how improving the reliability of SMS scores by strengthening data sufficiency standards could better account for limitations in available safety performance information and help FMCSA better focus intervention resources where they can have the greatest impact on reducing crashes. In addition, if these same safety performance data are going to be used to determine whether a carrier is fit to operate, FMCSA needs to consider and address all identified data limitations, or these determinations will also be at risk.

To improve the CSA program, the Secretary of Transportation should direct the FMCSA Administrator to take the following two actions:

Revise the SMS methodology to better account for limitations in drawing comparisons of safety performance information across carriers; in doing so, conduct a formal analysis that specifically identifies (1) limitations in the data used to calculate SMS scores, including variability in the carrier population and the quality and quantity of data available for carrier safety performance assessments, and (2) limitations in the resulting SMS scores, including their precision, confidence, and reliability for the purposes for which they are used.

Ensure that any determination of a carrier's fitness to operate properly accounts for the limitations we have identified regarding safety performance information.

We provided a draft of this report to the USDOT for review and comment. USDOT agreed to consider our recommendations, but expressed what it described as significant and substantive disagreements with some aspects of our analysis and conclusions.
USDOT’s concerns were discussed during a meeting on January 8, 2014, with senior USDOT officials, including the FMCSA Administrator. Following this meeting, we made several clarifications in our report. In particular, FMCSA understood our draft recommendation to be calling for specific changes to its SMS methodology. It was not our intent to be prescriptive, so we revised our first recommendation to state that FMCSA should conduct a formal analysis to inform potential changes to the SMS methodology. In addition, we clarified in the analysis and conclusions our meaning of reliability in context of the purpose for which SMS is used. We are sending copies of this report to relevant congressional committees and the Secretary of Transportation. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. This report addresses the effectiveness of the Compliance, Safety, Accountability (CSA) program in assessing safety risk for motor carriers. To assess how effectively CSA assesses the safety risk of motor carriers, we reconstructed the models the Federal Motor Carrier Safety Administration (FMCSA) uses to compute the SMS scores for all six Behavior Analysis and Safety Improvement Categories (BASICs) and the crash indicator. We then assessed the effect of changes to key assumptions made by the models. Using data collected by the U.S. Department of Transportation’s Motor Carrier Management Information System (MCMIS) and historical SMS scores, and referencing the SMS algorithm and methodological documentation, we replicated the algorithm for calculating the SMS BASIC scores for the SMS 3.0 methodology. Reconstructing FMCSA’s models and replicating the SMS scores FMCSA produced for carriers was a necessary step to ensure that we understood the complexities of the models, the data used in the calculation of the SMS scores, and that the results we present in this report are comparable to FMCSA’s outcomes. To corroborate our models with FMCSA’s, we compared the SMS violation rates (measure scores) to FMCSA’s results for December 2012. We assessed the reliability of data used, for our purposes, by reviewing documentation on FMCSA’s data collection efforts and quality assurance processes, talking with FMCSA and Volpe National Transportation Systems Center officials about these data, and checking the data for completeness and reasonableness. We determined that the data were sufficiently reliable for the purpose of our data analysis. We established a population of about 315,000 carriers for analysis that were under FMCSA’s jurisdiction and showed indicators of activity over a 3 and a half year analysis period from December 2007 through June 2011. The criteria used to identify these carriers were: U.S.-based carriers; interstate or intrastate hazardous materials carriers; carriers with at least one inspection or crash during the 2-year analysis observation period (December 18, 2007 to December 17, 2009); and carriers with a positive average number of vehicle count at any point during the analysis observation period (December 18, 2007, to December 17, 2009) and at any point during the evaluation period (December 17, 2009, to June 17, 2011). 
During the first 2 years of this period, December 2007 through December 2009, we used each carrier's inspection, crash, and violation history to calculate SMS scores. This period is referred to as the observation period. The remaining 18 months, December 2009 through June 2011, were classified as the evaluation period. We used data from this period to identify carriers involved in a crash and to estimate crash rates for these carriers. For the approximately 315,000 carriers in our analysis, there were approximately 120,000 crashes during the evaluation period. We chose the lengths of the observation and evaluation periods, in part, to match FMCSA's effectiveness testing methods.

We tested the effectiveness of SMS by identifying and making changes to key assumptions of the model. Given FMCSA's use of these scores as quantitative determinations of a carrier's safety performance, we assessed the reliability of SMS scores as defined by the precision, accuracy, and confidence of these scores when calculated for carriers with varying levels of exposure—measured by FMCSA as either inspections or an adjusted number of vehicles. We tested changes to the following characteristics of the model: the SMS measures of exposure, the method used to calculate time weights, the organization of the violations into the six BASICs, and the data sufficiency standards. To evaluate the results produced by each model, including FMCSA's, we examined the SMS scores and the classifications of carriers into the high risk group. We compared the results from our revised models to the results from a baseline model, SMS 3.0. For each model, we measured whether carriers were involved in a crash, calculated group crash rates, and calculated total crashes in the evaluation period for carriers that were and were not classified as high risk in the observation period. Due to ongoing litigation related to CSA and the publication of SMS scores, we did not assess the potential effects or tradeoffs resulting from any public use of these scores.

To determine the extent to which CSA identifies and intervenes with the highest risk carriers, we examined how our changes to FMCSA's key assumptions affected the safety scores and the identification of high risk carriers. Specifically, we identified the carriers with SMS scores above FMCSA's intervention threshold in each BASIC and the carriers considered high risk according to FMCSA's high risk criteria. Using this analysis, we designed an illustrative alternative method that incorporates the following changes:

including only carriers with at least 20 observations in the following measures of exposure: driver inspections when calculating scores for the Hours-of-Service Compliance, Driver Fitness, and Controlled Substances BASICs; vehicle-related inspections for the Vehicle Maintenance BASIC; vehicle-related inspections where placardable quantities of hazardous materials are being transported for the Hazardous Materials BASIC; and average power units for the Unsafe Driving BASIC and the Crash Indicator;

assigning an SMS score to any carrier meeting these data sufficiency standards (e.g., 20 inspections), even if that carrier does not have any violations, was free of violations for 12 months, or had a clean last inspection;

eliminating safety event groups because of the stricter data sufficiency standards; and

using only the average number of vehicles as the measure of exposure for carriers assessed in the Unsafe Driving BASIC and the Crash Indicator.
Appendix VI provides the complete results of our replication of FMCSA's existing SMS and of our illustrative revision to it.

We also examined the extent to which the regulatory violations that largely determine SMS scores can predict future crashes. We developed eight model groups to test the relationship between violations and violation rates, on the one hand, and crashes, on the other. We tested only the violations that had non-zero variance and observations for at least 1 percent of the test population. To control for small exposure measures when estimating rates, we estimated models comparing carriers' observed crash status to Bayesian crash rates; used observed violation rates versus Bayesian violation rates; and compared a full model sample to a restricted model sample of carriers with at least 20 vehicles. We also conducted a sensitivity analysis to validate the predictive power of the models we developed, running multiple variations of these models to determine the number and types of violations that were predictive versus unstable. For more information on this specific analysis and the model results, please see appendix V.

In addition, we spoke with FMCSA officials in Washington, D.C., and at the Western Service Center and the Colorado Division Office in Lakewood, Colorado, and reviewed existing studies and stakeholder concerns about the SMS model and its outcomes. To understand the impact of CSA on law enforcement, we spoke with law enforcement officials at the Colorado State Patrol. We selected Colorado because it was one of the initial pilot states for CSA and has been implementing the program since early 2008. We also interviewed representatives from industry and safety interest groups, including the Colorado Motor Carriers Association, the Commercial Vehicle Safety Alliance, and the American Trucking Associations. Additionally, we attended meetings of the Motor Carrier Safety Advisory Committee's CSA subcommittee and reviewed the minutes and related documentation from other meetings we did not attend. We also reviewed congressional testimony from industry and safety interest representatives from a September 2012 hearing before the House Transportation and Infrastructure Committee, as well as stakeholder comments submitted between March 2012 and July 2012 in response to FMCSA's planned improvements to SMS.

We conducted this performance audit from August 2012 to February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The FMCSA Safety Measurement System (SMS) methodology involves the calculation of weighted violation rates for regulations within each of six Behavioral Analysis and Safety Improvement Categories (BASICs) for a given time period. (A seventh indicator measures weighted crash rates in previous time periods, or "crash history.") Carriers are assigned to safety event groups based on measures of their exposure to committing violations, such as the number of driver or vehicle inspections, depending on the BASIC, and the weighted violation rates are transformed into percentiles for carriers within the same group. These percentiles ultimately determine carriers' alert or high-risk statuses.
Because regulatory violation rates strongly influence SMS scores, the precision with which these rates can be calculated becomes important for developing reliable measures of safety, as we discuss in the body of this report. In this appendix, we summarize statistical methods for estimating rates and assessing their precision, or sampling error. We use these methods to estimate crash rates and their sampling error for a population of motor carriers that were active from December 2007 through December 2009. Carriers may vary widely in their level of activity, known as "exposure." Both statistical theory and our analysis show that the precision of estimated rates for carriers with low exposure, measured by vehicles or inspections, is lower than for carriers with more exposure, and that rate estimates can become distorted to artificially low or high values for these low-exposure carriers. These results support our findings in the body of this report on the precision of FMCSA's current approach to calculating safety risk scores and setting data sufficiency standards.

Estimating rates of regulatory violations requires data on the number of violations that carriers incur within a given time period. If one assumes that the number of violations is proportional to some measure of exposure (activity) and also assumes that the probability of observing violations within a large number of small independent exposure periods is small, the sampling error of a rate estimate decreases as exposure increases. Specifically, assume that each carrier in a population of interest has a unique violation rate, λ. For a fixed time period and known exposure, t, the number of violations, V, is distributed as V ~ Poisson(λt), with E(V) = Var(V) = λt. Since λ is unknown, it must be estimated from data on regulatory violations and exposure. The maximum likelihood (ML) estimator of a single carrier's λ, given the model above, is λ̂ = v / t, with Var(λ̂) = λ / t, estimated by v / t². Thus, the variance of the rate estimate increases as exposure decreases, and an estimated rate for a specific carrier and time period can vary substantially from λ, particularly when exposure is low.

SMS is primarily concerned with measuring how regulatory violation rates vary over a population of active motor carriers. Even though ordinary methods of estimating these rates are unbiased and consistent, the collection of estimated rates for the population, λ̂ = {λ̂₁, …, λ̂ₙ}, can poorly represent the true distribution of rates, and statistics derived from these estimates, such as the percentiles that SMS uses to place carriers into alert and high-risk status, may be similarly prone to error. Empirical Bayesian methods correct for this problem by adjusting the estimate of λ for each carrier to better estimate the distribution of rates across the population. Bayesian methods prevent estimates from converging to artificially extreme values for carriers whose raw rate estimates are based on small samples (low exposure). The estimator does this by effectively "borrowing information" from other, larger carriers whose rates can be estimated more precisely. In the evaluation of the CSA Pilot Test for FMCSA, the University of Michigan Transportation Research Institute used empirical Bayesian rate estimation methods to evaluate the association between SMS scores and crash risk, and cited similar benefits to those we discuss here. For example, see Roger J. Marshall, "Mapping Disease and Mortality Rates Using Empirical Bayes Estimators," Journal of the Royal Statistical Society, Series C (Applied Statistics) 40, no. 2 (1991): 284, or J. N. K. Rao, Small Area Estimation (Hoboken, NJ, 2003), 206.
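The variance formula above follows directly from the Poisson assumption; a short derivation:

```latex
% Derivation of the sampling variance of the ML rate estimator under the
% Poisson model stated above, V ~ Poisson(lambda * t):
\[
\hat{\lambda} = \frac{V}{t},
\qquad
\operatorname{Var}(\hat{\lambda})
  = \operatorname{Var}\!\left(\frac{V}{t}\right)
  = \frac{\operatorname{Var}(V)}{t^{2}}
  = \frac{\lambda t}{t^{2}}
  = \frac{\lambda}{t}.
\]
```

The variance of the estimated rate is therefore inversely proportional to exposure: halving a carrier's exposure doubles the sampling variance of its estimated rate.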
Specifically, assume that regulatory violation rates over a population of carriers are distributed as λ ~ Gamma(α, β), the prior distribution of the parameter of interest. Parameter values for the prior distribution can be assumed, based on historical data on the population of interest, or estimated using a particular sample. Conditional on these rates, the data on regulatory violations are distributed as V | λ, t ~ Poisson(λt), and the posterior distribution for a specific carrier is given by

λ | v, t ~ Gamma(α + v, β + t). (1)

Since the mean of a Gamma(α, β) variate is α / β and the variance is α / β², the posterior mean and variance of the rate for a given carrier are given by

E(λ | v, t) = (α + v) / (β + t) (2)

Var(λ | v, t) = (α + v) / (β + t)². (3)

The posterior mean in equation (2) is effectively a weighted combination of the carrier-specific rate estimate, v / t, and the mean of the prior distribution, α / β. When enough data are available, as indicated by a large exposure term relative to the violation term, the estimate converges to the ordinary, carrier-specific rate estimate. When exposure is low, however, the method combines data from the specific carrier with the mean rate for all carriers. The variance of Bayesian rate estimates decreases with increased exposure, similar to the variance of ordinary rate estimates.

Figure 6 shows how hypothetical rate estimates and 90 percent posterior intervals for a carrier that experienced 5 crashes vary with the carrier's exposure, as measured by the number of vehicles. (Although we illustrate rate estimation issues using crash rates, we likely would have obtained similar results if we had estimated regulatory violation rates.) As expected, the variance of the estimates decreases sharply as the number of vehicles increases. The variance is high in the range of 1 to 5 vehicles and begins to decrease less quickly at approximately 20 vehicles, consistent with our discussion in the body of this report and prior evaluations of SMS. Thresholds in this approximate range are consistent with criteria used by the Centers for Disease Control and Prevention (CDC) to suppress or caveat rate estimates for the purpose of public display. For example, in its compendium of health statistics in the United States, CDC cautions that "[w]hen the number of events is small and the probability of such an event is small, considerable caution must be observed in interpreting the conditions described by the figures." Even though the Bayesian estimates do not converge to extremely low or high values when exposure is low, the uncertainty around the estimates remains high. As figure 6 shows, statistical methods for modeling and estimating rates can quantify this uncertainty explicitly, in order to reflect the varying precision of estimates for motor carriers with more or less observed data. Although the amount of uncertainty that is acceptable in practice depends on the purpose of the estimates, both statistical theory and government agencies estimating rates similar to those involved in the calculation of SMS scores have recognized the need to express the uncertainty of these estimates, particularly when they are derived from small samples. This contrasts with FMCSA's approach, which reports SMS scores as safety risk estimates with no quantitative measures of precision.
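Equations (1) through (3) can be applied directly. The sketch below uses an assumed Gamma prior (FMCSA publishes no such prior; the values are ours, for illustration) to compute the posterior mean and a 90 percent interval for the hypothetical 5-crash carrier in figure 6 at several exposure levels, showing how the interval narrows as the vehicle count grows:

```python
from scipy import stats

ALPHA, BETA = 2.0, 40.0  # assumed Gamma prior (mean rate 0.05 crashes/vehicle)
CRASHES = 5              # the hypothetical carrier from figure 6

for vehicles in (1, 5, 20, 100):
    a_post = ALPHA + CRASHES       # posterior: Gamma(alpha + v, beta + t)
    b_post = BETA + vehicles
    mean = a_post / b_post         # posterior mean, equation (2)
    lo, hi = stats.gamma.ppf([0.05, 0.95], a_post, scale=1.0 / b_post)
    print(f"{vehicles:>3} vehicles: posterior mean {mean:.3f}, "
          f"90% interval [{lo:.3f}, {hi:.3f}]")
```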
To illustrate the rate estimation issues discussed above in the context of motor carrier safety, we estimated individual crash rates for a population of motor carriers that were actively operating in each of two time periods—December 2007 through December 2009 and December 2009 through June 2011—as measured in FMCSA's Motor Carrier Management Information System (MCMIS). An "active" carrier was one that, in each time period, had at least one inspection or crash and had been recorded as a U.S.-based interstate or intrastate hazardous materials carrier. This definition resembled the one we used in replicating SMS, as described in the body of this report and appendix I. We obtained these data from the December 2010 and December 2012 MCMIS "snapshot" data files, as well as a historical file of carrier-specific information that covered all snapshots.

We estimated the raw and empirical Bayesian crash rates for each carrier in the first time period, using data on the number of crashes and vehicles for these carriers and the formulas above. We used the "empirical Bayes" version of the rate estimator, in which the parameters of the prior distribution were estimated from the data. Specifically, we fit the observed rate data for all carriers in the first time period to the negative binomial distribution, parameterized with exposure measured by the number of vehicles, and estimated α and β using standard methods of maximum likelihood estimation. The final rate estimates for each carrier were a combination of these parameter estimates and carrier-specific data, according to equation (2) above.

As theory would predict, Bayesian methods prevented crash rates from converging to zero or extremely high values for carriers with low exposure. The left half of figure 7 presents the raw crash rates for our analysis carriers, while the right half presents the empirical Bayesian estimates. The raw estimates for carriers with about 1 to 10 vehicles can be 10 to 20 times higher than for carriers with more than 10 vehicles. In addition, the raw rates cluster at zero for a large number of carriers, particularly for those with low exposure. An underlying crash rate of zero is implausible for active carriers. In contrast, the Bayesian rate estimates are more stable, with no inflation or deflation to extreme values. Since the body of this report finds that 93 percent of carriers in our replication of SMS had fewer than 20 vehicles, Bayesian methods may provide more stable estimates for many specific carriers and may better approximate the distribution of rates across carriers.

In addition to stabilizing rates for small carriers, Bayesian rate estimation methods provide an explicit measure of precision for each carrier's rate, regardless of size. In figure 8, we show the Bayesian rate estimates for a random sample of 109 carriers in the first period of our analysis population, along with 90 percent Bayesian posterior intervals. (We present these results for a sample to make the intervals readable.) The posterior interval expresses the range that contains the true rate with 90 percent probability. Consistent with theory, the precision of the rate estimates increases with exposure—in this case, the number of vehicles. These results come from actual carriers in the sample, and they are consistent with what theory would predict. The width of the posterior intervals does not decrease monotonically, however, because the relative number of crashes also affects the variance and is not held constant in the plot.
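The empirical Bayes step can be sketched as follows: estimate the prior parameters α and β by maximizing the marginal (negative binomial) likelihood of observed counts given exposure, then shrink each carrier's raw rate via equation (2). The data below are hypothetical stand-ins for the MCMIS records we used:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Hypothetical crash counts and exposure (vehicles).
crashes  = np.array([0, 0, 1, 5, 2, 0, 3, 8])
vehicles = np.array([1, 2, 3, 40, 15, 25, 60, 90])

def neg_marginal_loglik(log_params):
    # Marginal likelihood of Poisson(lambda*t) counts with lambda ~ Gamma(a, b):
    # p(v) = C(v+a-1, v) * (b/(b+t))^a * (t/(b+t))^v  (negative binomial).
    a, b = np.exp(log_params)  # optimize on the log scale to keep a, b > 0
    ll = (gammaln(crashes + a) - gammaln(a) - gammaln(crashes + 1)
          + a * np.log(b / (b + vehicles))
          + crashes * np.log(vehicles / (b + vehicles)))
    return -ll.sum()

res = minimize(neg_marginal_loglik, x0=np.log([1.0, 10.0]), method="Nelder-Mead")
alpha, beta = np.exp(res.x)

# Shrink each carrier's raw rate toward the population mean, per equation (2).
raw = crashes / vehicles
eb = (alpha + crashes) / (beta + vehicles)
for t, r, e in zip(vehicles, raw, eb):
    print(f"{t:>3} vehicles: raw rate {r:.3f} -> empirical Bayes {e:.3f}")
```

Note how the one- and two-vehicle carriers' raw rates of exactly zero are pulled toward the population mean, while the large carriers' estimates barely move.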
In this appendix, we express the Safety Measurement System (SMS) as a statistical measurement model, in order to make its assumptions explicit, and describe how estimating the model could validate those assumptions. We find that FMCSA's SMS makes a number of strong assumptions about motor carrier safety that empirical data cannot easily validate.

The SMS uses administrative data on inspections of commercial motor carriers, violations of regulations, and crashes to measure carrier safety. Statisticians and other researchers have developed methods to validate measures of such broad concepts as safety, referred to as "latent variables," using empirical data. These methods are known as "measurement models." For example, mental health professionals have created scales to measure the existence of broad disorders, such as depression, by combining responses to multiple items on patient questionnaires. SMS has a similar goal: to create scales that measure motor carrier safety risk on several dimensions, such as "Unsafe Driving" or "Vehicle Maintenance," by combining violation rate data across multiple regulations. Latent variable measurement methods can assess whether these broader measures are valid and reliable, and whether the empirical indicators that go into them actually measure the intended concepts. Estimating the degree to which various indicators measure a broader concept helps confirm, and often improve, the reliability and validity of the scales constructed.

Much of the SMS involves calculating weighted regulatory violation rates for motor carriers in a given time period. FMCSA assigns weights that, in principle, reflect the violations' associations with one of six dimensions of safety, known as Behavioral Analysis and Safety Improvement Categories (BASICs), such as "Unsafe Driving" and "Vehicle Maintenance." The weights represent what FMCSA considers to be the strength of each violation's association with safety, relative to other violations in the same BASIC. All violations that are categorized in a BASIC get a positive weight ranging from 1 to 10, which implies that they have some association with safety. These weighted violation rates strongly influence the final SMS measures of safety on these dimensions. Each BASIC is linked to a set of violations, which are all assumed to measure the same dimension of safety. Each violation maps to exactly one BASIC, though each BASIC maps to the multiple violations in its associated group. The weighted violation rate for carrier i in a BASIC can be written as

Ri = (Σj λj Vij) / T

where Vij measures the number of times that carrier i violated regulation j in a given time period. λj is a weight for each violation: it is the product of a "severity" weight, measuring what FMCSA considers the violation's "crash risk relative to the other violations comprising the BASIC measurement," in addition to outcomes thought to be particularly severe (e.g., out-of-service violations), and a time weight, measuring what FMCSA considers the importance of violations from different time periods to estimating a carrier's current level of safety. By defining Vij for fixed time periods, such as 6 or 12 months prior to the measurement time, we collapse the separate weights used in SMS into λj, in order to simplify the notation. Lastly, T measures exposure to committing violations in the time period, which is either a function of a carrier's vehicles and vehicle miles traveled (VMT) or the time-weighted sum of relevant inspections, depending on the BASIC.
SMS transforms the weighted violation rates for each carrier into percentile ranks, after applying a number of "data sufficiency standards" to exclude carriers with few violations, inspections, and/or vehicles. Carriers with percentiles that exceed established thresholds are "alerted" on the relevant BASICs and, if enough alerts or other conditions exist, are identified as "high risk." As a result, the ultimate measures of safety risk are ordered groups, with cut-points defined by BASIC percentiles for carriers that meet FMCSA's standards for data sufficiency.

The SMS can be viewed as an attempt to measure latent concepts of "safety," such as "Unsafe Driving" or "Vehicle Maintenance," using observed data on regulatory violations and the opportunity to commit them (exposure). Consider the latent variable measurement model below, using notation from a prominent textbook:

r = Λξ + δ

The model assumes that a vector of k observed variables, r, is determined by p latent variables, ξ, and random measurement error, δ. The weights describing the relationship between the latent and observed variables make up the block diagonal matrix Λ, with p blocks of weights applied to the corresponding blocks of observed variables. This structure implies that each group of observed variables is related to exactly one latent variable. In many applications, the model assumes that Cov(ξ, δ) = 0 and E(δ) = 0 but allows other variances and covariances to be estimated from the data as parameters or fixed to known values.

The SMS is a particular form of the model above. Specifically, SMS defines r as violation rates for k = 826 regulations, where r may include variables measured at different times. It sets p = 6 and relates the violation rates to the BASICs, or latent variables ξ measuring safety, through the weighting matrix Λ. FMCSA created fixed time and severity weights for each regulation through a combination of statistical analysis and the opinions of stakeholders. Since SMS is not a stochastic model, it assumes that δ = 0. A graphical version of SMS as a measurement model appears in figure 9 below.

When expressed as a measurement model, the strong assumptions of SMS—and their potential detrimental effect on its usefulness—become clear. FMCSA's assumption of zero measurement error is unusual for statistical approaches to measurement, given that any particular violation is likely to reflect variation in latent variables (in this case, safety) as well as unmeasured variables summarized by the error term. SMS also makes specific assumptions about the number of safety dimensions—the latent variables assumed by the model above—as well as their relationships to violation rates: exactly six dimensions of safety exist, and each violation rate measures only one of them. In other efforts to measure broad concepts using numerous indicators, inferences about the existence of, and relationships among, observed and latent variables are endogenous parameters (determined by the model) to be estimated, rather than exogenous parameters (determined outside the model) that are fixed ex ante, ahead of time, as they are here. Finally, SMS takes the unusual step of fixing the values of the weights relating the latent variables measuring safety to violation rates at values other than 0. This assumes a high degree of prior knowledge about the relationships between latent and observed variables.
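To make the block-diagonal structure concrete, consider a toy example of our own construction (not drawn from SMS documentation) with p = 2 latent dimensions measured by two and three observed violation rates, respectively:

```latex
% Toy example (ours, not FMCSA's): p = 2 latent safety dimensions measured
% by 2 and 3 observed violation rates. The block-diagonal weight matrix
% forces each observed rate to load on exactly one latent dimension, just
% as SMS maps each violation to exactly one BASIC.
\[
\mathbf{r} = \Lambda \boldsymbol{\xi} + \boldsymbol{\delta},
\qquad
\Lambda =
\begin{pmatrix}
\lambda_{1} & 0 \\
\lambda_{2} & 0 \\
0 & \lambda_{3} \\
0 & \lambda_{4} \\
0 & \lambda_{5}
\end{pmatrix},
\qquad
\boldsymbol{\xi} = \begin{pmatrix} \xi_{1} \\ \xi_{2} \end{pmatrix}.
\]
```

In SMS, the analogous Λ has six blocks (one per BASIC), its entries are the fixed time and severity weights, and δ is fixed at zero.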
Although FMCSA has conducted several studies of how regulatory violation rates are associated with crash risk, these studies do not directly estimate the degree to which each type of violation reflects one of several dimensions of safety. One approach to validating the assumptions of SMS is to estimate the parameters of the measurement model above using empirical data on regulatory violation rates. This approach is known as confirmatory factor analysis, which is a special type of measurement model. Because SMS makes specific assumptions about the number of BASICs and the violations that go into them, we can express the system as a measurement model, as discussed above, and estimate the degree to which its assumptions are consistent with reality. For example, SMS assumes that six dimensions of safety exist—labeled BASICs in SMS—and that each violation reflects only one dimension. However, a model that assumes three BASICs and allows violations to reflect multiple dimensions of safety might be a plausible alternative; high violation rates for brake maintenance regulations, for example, may indicate worse performance on both the Vehicle Maintenance and Unsafe Driving dimensions of safety. Measurement modeling can identify which of these approaches better fits empirical patterns of regulatory violations. More generally, analyzing SMS as a measurement model can validate its assumptions, such as the values of the severity and time weights, and suggest improvements to better measure safety.

We can extend the SMS measurement model to predict empirical data on crash risk, in order to further validate its ability to identify high-risk carriers. This structural equation modeling (SEM) approach combines the measurement model above with a model that describes how the latent dimensions of safety predict outcomes such as crash risk, generically known as "endogenous observed variables." To incorporate outcomes, we extend the measurement model above to assume that the six BASICs are directly related to an empirical measure of crash risk:

C = γξ + ε

where C measures crash risk; γ are parameters describing how the latent safety dimensions are related to crash risk; ξ are the safety dimensions; and ε is a random error term. Estimating this larger model would yield the original parameters of the measurement model, in addition to the parameters describing how the SMS scores relate to crash risk, γ. Strong correlations between SMS scores and crash risk would further support their ability to identify higher-risk carriers. This is known as "criterion validity" in statistics and social research.

A key strength of this validation approach is that it accounts for the error in measuring broad dimensions of safety when predicting crash risk. Because empirical data on violation rates and SMS scores are indicators of latent concepts of safety, measurement error can distort the underlying relationships between these broader concepts and crash risk. For example, poor vehicle maintenance may be positively associated with higher crash risk, but empirical data on violations of vehicle maintenance regulations may measure both the concept of interest and the enforcement efforts of state and local governments. As a result, the violation rates may be uncorrelated with crash risk simply due to error in measuring the concept of interest. SEM models estimate the relationships among latent variables more precisely by accounting for this measurement error.
This contrasts with simpler regression models of crash risk as a function of observed violation rates, which assume that violation rates measure the dimensions of safety without error.

Previous evaluations of SMS have focused on estimating the correlations between crash risk, on the one hand, and regulatory violation rates and Safety Measurement System (SMS) scores, on the other. These evaluations have found mixed evidence that SMS scores predict crash risk with a high degree of precision for specific carriers or groups of carriers. This appendix synthesizes the results of these prior evaluations.

Several prior evaluations of SMS have analyzed grouped data, rather than directly analyzing how a carrier's individual regulatory violation rates and SMS scores predict its own future crash risk. For example, in a pilot evaluation conducted for FMCSA, the University of Michigan Transportation Research Institute (UMTRI) estimated group crash rates within percentiles of SMS scores for each Behavioral Analysis and Safety Improvement Category (BASIC), pooling several hundred carriers in each percentile, to trace out the aggregate relationship between SMS scores and crash risk. Similarly, FMCSA's Violation Severity Assessment Study analyzed grouped violation data from roadside inspections conducted from 2003 through 2006, in order to compare violation rates cited in post-crash reports to rates in the general population of carriers. FMCSA's Effectiveness Testing likewise compares aggregate crash rates for groups of carriers that did and did not exceed the SMS thresholds used to place carriers in "alert" or "high risk" statuses.

Aggregate approaches, such as those used in several prior evaluations, do not directly assess the ability of SMS and regulatory violations to predict future crash risk for specific carriers. Well-known findings in statistics on "ecological fallacies" show that associations at higher levels of analysis are not guaranteed to exist at lower levels of analysis. In this application, carriers that crash may have higher violation rates or SMS scores as a group than carriers that do not crash, but this pattern does not necessarily apply to specific carriers within the groups. Because less variation exists at the carrier level, aggregation can overstate the strength and precision of these correlations for individual carriers. Even when similar correlations exist at the carrier level, comparing average crash rates for SMS percentiles or risk groups does not assess the prediction error for any particular carrier. The average crash rate may be higher for groups of carriers with increasingly high SMS percentiles, but crash rates may vary significantly around these means. This residual variation, not differences in means or other aggregate statistics, is more directly relevant for assessing the quality of predicted crash rates for a particular carrier. In statistical terms, the prediction error summarized by the residual variance of a linear regression model or the classification matrix of a categorical model is what matters for assessing predictive power for individual carriers, not the models' coefficients, which estimate mean crash rates conditional on these percentiles.
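A small simulation of our own, with an assumed weak carrier-level relationship, illustrates the point: pooling carriers into percentile groups can turn a weak individual-level association into a strong aggregate one.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
score = rng.uniform(0, 100, n)              # an SMS-like percentile score
crash_rate = 0.02 + 0.0002 * score          # weak assumed carrier-level signal
crashes = rng.poisson(crash_rate)           # noisy individual outcomes

# Carrier-level association: weak.
print("carrier-level correlation:", round(np.corrcoef(score, crashes)[0, 1], 3))

# Group-level association: pool carriers into 100 percentile groups and
# correlate the group mean crash counts with the group index.
groups = np.floor(score).astype(int)
group_means = np.array([crashes[groups == g].mean() for g in range(100)])
print("group-level correlation:  ",
      round(np.corrcoef(np.arange(100), group_means)[0, 1], 3))
```

The carrier-level correlation is close to zero while the group-level correlation is strong, even though both come from the same underlying data; averaging within groups removes the individual-level noise that dominates prediction for any single carrier.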
Thus, it is not surprising that previous evaluations of carrier-level data have found weaker relationships between crash risk and SMS scores and regulatory violations than have evaluations of aggregated data. UMTRI estimated the relationship between exceeding thresholds in the six non-crash BASICs and mean crash rates, using an empirical Bayesian negative binomial model estimated on carrier-level data. The results showed that carriers exceeding the thresholds for the Unsafe Driving and Vehicle Maintenance BASICs had average crash rates that were 1.1 to 1.8 times higher than those of carriers not exceeding the thresholds—usually lower than the rate ratios of 1.0 to 5.4 reported by UMTRI's aggregate analysis and FMCSA's December 2012 Effectiveness Testing. However, this relationship was negative for the Driver Fitness and Loading/Cargo (currently Hazardous Materials) BASICs, with mean crash rates for alerted carriers that were 0.85 and 0.91 times the rates of non-alerted carriers, respectively. The ratios were not significantly greater than 1 for the Fatigued Driving and Substance Abuse/Alcohol BASICs. Similarly, the American Transportation Research Institute (ATRI) found that alerted carriers in the Unsafe Driving, Vehicle Maintenance, Hours-of-Service, and Controlled Substances/Alcohol BASICs had mean crash rates that were 1.3 to 1.7 times larger than those of scored carriers not in alert status, but carriers exceeding the Driver Fitness thresholds had mean crash rates that were 0.87 times those of non-alerted scored carriers.

Although UMTRI and ATRI analyzed carrier-level data, they validated SMS measures using regression coefficients and similar statistics that describe aggregate correlations. As we discuss above, this approach does not directly quantify predictive power for specific carriers. Two studies that directly estimated prediction error for specific carriers, conducted by Wells Fargo Securities and James Gimpel of the University of Maryland, found weaker evidence of the model's predictive effectiveness. Gimpel found that mean crash rates increased by small amounts as SMS scores on the Unsafe Driving, Hours-of-Service, and Vehicle Maintenance BASICs increased. Wells Fargo found a similarly positive association for the Unsafe Driving BASIC, but a negative association for the Hours-of-Service BASIC, in its analysis of 4,600 carriers with at least 25 vehicles and 50 inspections. More critically, the authors showed that scores on these BASICs predict crash rates with a large amount of error, with most R-squared fit statistics ranging from nearly zero to 0.07 for reasonably large analysis samples. Although these studies do not report critical estimates of the residual variance, the R-squared statistics likely imply confidence intervals around predicted crash rates for individual carriers with widths that are several times larger than the predictions themselves. This implies that SMS scores predict future crash risk for specific carriers with substantial error, even though mean crash rates can be higher among carriers with higher SMS scores.

FMCSA used aggregate data to dispute the findings of the Wells Fargo evaluation. Specifically, the agency cited the UMTRI findings that aggregate crash rates were 3.0 to 3.6 times higher for carriers exceeding thresholds for the Unsafe Driving and Hours-of-Service BASICs than for carriers that did not exceed thresholds for any BASIC. In addition, FMCSA highlighted analyses by UMTRI and the Volpe Center of aggregate crash rates across percentiles of SMS scores in the Unsafe and Fatigued Driving BASICs, respectively, which it claimed show a stronger correlation with crash risk. FMCSA's approach to evaluating the predictive power of SMS scores resembles its Effectiveness Testing, which compares aggregate crash rates for carriers above and below thresholds for various BASICs.
However, as we discuss above and Wells Fargo discussed in its response to FMCSA, the fact that SMS scores predict aggregate crash rates more strongly at the alert-group or percentile level does not necessarily imply that the scores will predict the crash risk of individual carriers. Recognizing this, the UMTRI evaluation analyzes the data at both the aggregate and carrier levels, and finds that mean crash rate ratios are far smaller at the carrier level than at the alert-group or percentile levels. It should be intuitive that aggregate evidence of effectiveness, stressed in some FMCSA evaluations, shows stronger predictive power than the carrier-level analyses of ATRI, Gimpel, UMTRI, and Wells Fargo. Aggregating violation and crash rates within larger groups effectively increases the sample size used to calculate rates, which reduces their sampling error when compared to the equivalent carrier-level measures. The reduction of sampling error can strengthen the correlations between violation rates and SMS scores and crash risk.

Evaluations of SMS that focus on carrier-level prediction error provide the most appropriate evidence of effectiveness for assessing the safety of individual carriers. FMCSA has stated that one purpose for SMS scores is to predict the future crash risk of individual motor carriers, in order to prioritize resources for intervention and enforcement. In addition, FMCSA reports SMS scores as measures of safety on a public website and the SaferBus mobile app. To assess the validity of SMS scores for this purpose, evaluations should focus on the system's ability to predict the crash risk at the carrier level, not its ability to identify groups of carriers with larger crash rates on average or collectively. Measures of predictive accuracy—such as the residual error made when predicting crash rates or the classification error made when assigning carriers to risk groups—are the critical metrics of success, not aggregated crash rate ratios and regression coefficients. When evaluated on these criteria, prior studies show that SMS predicts future crash risk for individual carriers with substantial imprecision.

None of the prior studies has explicitly incorporated measurement error into evaluations of SMS. Since SMS is ultimately a method of creating measures of latent variables, as we discuss in appendix III, the regulations used to calculate scores and the scores themselves have some degree of measurement error. Because existing studies have used statistical methods that assume zero measurement error, more comprehensive attempts to model the measurement structure of SMS and validate its assumptions and predictive power, such as those we discuss in appendix III, may produce different results. The correlations among SMS scores, violation rates, and crash risk may reflect measurement error as much as the underlying relationships among the variables of interest. This more complex analysis is critical for future evaluations of SMS and its ability to measure safety risk.

As a more basic approach to validating SMS, which focuses on the ability of data on regulatory violations in one time period to predict crash risk in a subsequent period, we analyzed the relationship between violation rates and crash risk using a series of statistical models. These models predicted the probability of a crash and crash rates as a function of regulatory violation rates for a population of motor carriers that were actively operating over a recent 3.5-year time period (described below).
We find that a substantial portion of the regulatory violations in SMS cannot be empirically linked to crash risk for individual carriers. Consistent with prior research, about 160 of the 754 regulations with data available in this time period had sufficient variation across carriers for analysis. Of these approximately 160 regulations with sufficient violation data, fewer than 14 were consistently associated with crash risk across statistical models. These results suggest that the specific weights that SMS assigns to many regulations when calculating safety risk cannot be directly validated with empirical data, and that many of the remaining regulations do not have meaningful associations with crash risk at the carrier level.

We assembled data for a population of motor carriers using the MCMIS snapshot files dated December 2010 and December 2012. Specifically, we identified carriers that were actively operating in each of two time periods: from December 2007 through December 2009 (the "pre-period") and from December 2009 through June 2011 (the "post-period"). We defined an active carrier as outlined in appendix I, consistent with FMCSA's definition of active carriers for its Effectiveness Testing and other analyses. For each of the approximately 315,000 carriers that met these criteria, we extracted data on the number of regulatory violations and crashes incurred in each time period, along with the number of inspections, vehicles, and the use of straight versus combo trucks, among other variables, from the crash and inspection tables in MCMIS.

The goal of our analysis was to predict crash risk in the post-period, using data on regulatory violations, crash data, and carrier characteristics measured in the pre-period. We developed a series of linear and generalized linear regression models to predict two measures of crash risk for individual carriers: a binary indicator for having crashed in the post-period and the ratio of crashes to vehicles. Estimating and evaluating all potential models and model types was not the goal of these analyses. Rather, we sought to estimate the associations between regulatory violation rates and crash risk at the carrier level, in order to validate the violations' severity weights in SMS. We reduced the list of 754 regulations whose violations are tracked in MCMIS to those that had enough variation across carriers for analysis. After excluding 593 violations that had zero variance or zero counts for more than 99 percent of the analysis carriers, we retained data on violations of approximately 160 regulations for use in predicting crash risk.

As we discuss in appendix II and the body of this report, crash and violation rates based on small exposure measures, generally resulting from carriers with few vehicles, may be estimated with less precision than rates based on larger exposure measures. To better understand and attempt to overcome these rate estimation issues and to assess the sensitivity of our results, we used both ordinary and empirical Bayesian estimators of crash and violation rates. In addition, we estimated separate models limited to carriers that had more than 20 vehicles. These methodological choices produced eight groups of models, as described in table 7. The groups were defined by the combined categories of crash measure (binary crash status versus Bayesian crash rate), method of violation rate estimation (ordinary versus Bayesian), and carrier size (full data or restricted to more than 20 vehicles).
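As a concrete, simplified sketch of one such model group, the code below fits a logistic regression for post-period crash status as a function of pre-period violation rates and carrier characteristics. All variable names, coefficients, and data are hypothetical stand-ins; the actual models were fit to MCMIS-derived measures for roughly 315,000 carriers:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
violation_rate = rng.poisson(1.0, n) / rng.integers(1, 50, n)  # a pre-period rate
carrier_size = rng.integers(1, 100, n)                          # vehicles
pct_straight = rng.uniform(0, 1, n)                             # straight vs. combo

# Simulate post-period crash status with an assumed weak relationship.
logit = -2.0 + 0.4 * violation_rate + 0.01 * carrier_size
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([violation_rate, carrier_size, pct_straight]))
fit = sm.Logit(y, X).fit(disp=False)
print(fit.summary())
```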
These parallel analyses allowed us to assess the sensitivity of our results to different assumptions. For each of the eight model groups, we include three sets of covariates to predict crash risk in the post-period: “Simple model:” indicator (binary) for crashing in the pre-period, carrier size, and carrier type (percent straight versus combo). “Full model:” predictors in the simple model, plus all violation rates with viable data in the pre-period. “Stepwise full model:” a stepwise selection algorithm applied to all predictors in the “full model,” in order to select the most predictive covariates (illustrated in the sketch following this discussion). The algorithm’s constraints required a p-value of 0.30 for a covariate to enter the model and 0.35 to remain in the model. To avoid over-fitting our models to any particular sample of data, we randomly divided our data into a model-building sample and a validation sample. We used the model-building sample to estimate the models described above and the validation sample to assess the accuracy of the model’s predictions of crash probability against new data. When seeking to develop statistical methods for predictive purposes, this type of out-of-sample validation is extremely useful to ensure that any method identified can consistently predict well on all samples of data, not just the sample used to develop the method. This is an important limitation of prior evaluations of SMS, which, to our knowledge, have not used replication samples to avoid over-fitting when identifying predictive violation types or methods of identifying higher-risk carriers. Model selection required addressing statistical estimation issues, such as instability of the parameter estimates caused by co-linearity of predictors or lack of variability in the predictors, and other model fitting concerns. For the linear crash rate models, the dependent variable required a log transformation to remove non-constant error variance, which would invalidate results if left untreated. These statistical issues led us to explore sub-models within the major model groups until a stable model resulted. Therefore, the results within each model group focus on three sub-models, when applicable: simple, stepwise, and full, where stepwise is the model that eliminated independent variables until a stabilized model with estimable coefficients resulted. See table 8 for the final list of 30 models and subsamples. Models that use the SMS violation information do not fit well according to various measures discussed below. In addition, the violation rates, as measured in SMS, do not have a strong predictive relationship with crashes, regardless of whether the observed or the Bayesian violation rates are used as inputs. Models for crash status (yes/no) were examined for stability of parameter estimates, fit statistics, number and types of violations that were predictive and that were stable, and future predictive performance according to these measures. Models for Bayesian crash rates were examined for stability of parameter estimates, fit statistics, number and types of violations that were predictive, predictive power, and future predictive power. Some of the diagnostics cannot be compared in absolute terms, but rather should be compared across models fit to the same data. For example, the AIC is meaningful only when compared across competing models fit on the same data.
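The following sketch illustrates the model-building protocol described above: a random split into model-building and validation samples, and a forward-backward stepwise logistic regression driven by the 0.30 entry and 0.35 stay p-value thresholds. It is a simplified, hypothetical reconstruction using the statsmodels and scikit-learn libraries on synthetic data, not the production code behind our analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.model_selection import train_test_split

def stepwise_logit(y, X, p_enter=0.30, p_stay=0.35):
    """Forward-backward stepwise selection on covariate p-values."""
    selected = []
    while True:
        changed = False
        # Forward step: add the most significant excluded covariate.
        pvals = {}
        for c in (c for c in X.columns if c not in selected):
            fit = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=0)
            pvals[c] = fit.pvalues[c]
        if pvals:
            best = min(pvals, key=pvals.get)
            if pvals[best] < p_enter:
                selected.append(best)
                changed = True
        # Backward step: drop a covariate whose p-value exceeds p_stay.
        if selected:
            fit = sm.Logit(y, sm.add_constant(X[selected])).fit(disp=0)
            worst = fit.pvalues[selected].idxmax()
            if fit.pvalues[worst] > p_stay:
                selected.remove(worst)
                changed = True
        if not changed:
            return selected

# Synthetic data: 8 candidate violation rates, 2 truly predictive.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 8)),
                 columns=[f"viol_{i}" for i in range(8)])
y = pd.Series(rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X["viol_0"]
                                                - 0.5 * X["viol_1"])))))
X_build, X_valid, y_build, y_valid = train_test_split(
    X, y, test_size=0.5, random_state=0)
print(stepwise_logit(y_build, X_build))  # selection on build sample only
```

Any accuracy metric (the ROC area, classification error, or H-L fit) would then be computed on X_valid and y_valid, so that the liberal entry threshold cannot inflate apparent performance.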
The crash status (yes/no) model was evaluated in the out-of-sample validation data, where each model was re-fit on the validation sample, and the diagnostics were examined and compared to those from the model-building sample. As an additional sensitivity analysis, the same set of inputs for each of the model groups one through four were also fit using a Bayesian crash rate outcome, via a linear regression fit to the model-building sample. Results were compared. Since diagnostics will differ according to the outcome measure, crash status (yes/no) versus crash rate, information for these outcome types is displayed separately. For results of models for the crash status (yes/no), see tables 9 and 10. For results for the Bayesian crash rates, see table 11. Given that a high Hosmer-Lemeshow (H-L) p-value (close to 1) indicates good model fit, according to this measure, most of the models fail to fit acceptably, and none of the models fit well. Within the same data, a lower value of the AIC indicates better fit; therefore, the stepwise models perform best, and do nearly as well regarding the ROC and generalized R-squared when compared to the more complicated full model. But even for the stepwise models, the ROC and R-squared do not indicate a strong predictive relationship. This finding is echoed by the number of effects in the model, relative to the number of potential violations (about 160) and the number of stable effects. One aspect of predictive power is the ability of a model to discriminate the observed outcomes based on model predictions. Classification tables describe a model’s classification accuracy with correct and incorrect classifications, as measured by sensitivity (correctly predicting an event) and specificity (correctly predicting a non-event), and by the false positive rate (incorrectly predicting an event) and the false negative rate (incorrectly predicting a non-event). Classification tables for the simple, full, and stepwise model within a model group are presented in table 10. The observed proportion of crashes, approximately 0.2 for the unrestricted data and 0.66 for the data restricted to carriers with more than 20 vehicles, is used as the cut-point to classify predicted probabilities for a carrier into a predicted event (crash) versus non-event (no crash). The predicted crash status for a particular model is compared to the actual post-crash status, resulting in a series of table rows, one for each model, that examine the false positives, false negatives, and other quantities that help evaluate the predictive quality of a model. For unrestricted data, the false negative rate (or the rate that results from incorrectly classifying a carrier to a non-alert status) is relatively low (around 11 percent) compared to the false positive rate (ranges from about 56 to 58 percent). This is a desired result if it is considered more appropriate to be conservative and place a carrier in alert status even when that alert is incorrect (a false positive) than to misclassify a carrier as non-alert when an alert would be called for (a false negative). The restricted data have a higher false negative rate (from 42 to 44 percent) than false positive rate (around 14 to 19 percent), and this false negative rate is also higher than the false negative rate for the full data.
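As a concrete illustration of how these tables are constructed, the sketch below classifies carriers using the observed crash proportion as the cut-point and reports the four rates defined above. The predicted probabilities here are synthetic stand-ins for the output of any fitted model:

```python
import numpy as np

def classification_table(y_true, p_hat):
    """Classify carriers as predicted crash / no-crash using the
    observed crash proportion as the cut-point, then report the
    four classification rates."""
    y_true = np.asarray(y_true)
    cut = y_true.mean()                  # e.g., ~0.2 for unrestricted data
    y_pred = (np.asarray(p_hat) >= cut).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "sensitivity": tp / (tp + fn),       # events predicted correctly
        "specificity": tn / (tn + fp),       # non-events predicted correctly
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Example with weakly discriminating predicted crash probabilities.
rng = np.random.default_rng(2)
y = rng.binomial(1, 0.2, 1000)
p = np.clip(0.2 + 0.1 * (y - 0.2) + rng.normal(0, 0.1, 1000), 0, 1)
print(classification_table(y, p))
```

With a cut-point equal to the base crash rate, a weakly discriminating model yields many false positives alongside relatively few false negatives, the same pattern seen in table 10’s unrestricted results.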
For the restricted data, the higher false negative rates mean that a higher percentage of carriers that crashed are being classified as non-alert than the percentage classified as alert that did not crash; such a scenario is not desirable under a conservative preference for low false negative rates. In addition, the sensitivity and specificity are both moderate at best within data (restricted versus full), further evidence of the inability of the models to discriminate. To address whether crash status (yes/no) has a different relationship with violations than the crash rate, we compare conclusions of the crash status (yes/no) models with those of the crash rate models. Examining sensitivity to the prediction of crash status (yes/no) versus crash rate, the stepwise-selected model is compared to logistic regression results for the model-building and the validation samples (see table 11). The crash rate model results indicate that the numbers of effects that are related to crash rate are small, and that the better fitting models tend to have only a few predictors included. Specifically, Mallows’ Cp statistic indicates a model is preferable when Cp is around or smaller than the number of effects (p), and the model is more parsimonious than competing models. The model fit to the restricted data, where carriers have more than 20 vehicles (stepwise model number 22), includes only 34 stable effects, and 72 effects altogether, but the model fit is more stable (i.e., relatively fewer unstable effects) and has the best (lowest) Cp, while also having similar explained variance and low AIC. However, it is interesting to note that the simple model, model 21, performs similarly according to some measures, such as root MSE and R-squared, though this model does not contain violation rate information. Comparing how well the models perform when applied to the validation sample, which consists of new observations not included in the model-building sample, informs the precision of SMS with respect to predicting crashes. We examine the number of violations and the violation types that are included across the model groups (logistic and linear) and sub-models (stepwise and full). We compare this to the number of models within which each violation was found to be a significant and a stable predictor of crash outcomes. Importantly, of the reduced set of approximately 160 violations considered, only 13 violations were significant in at least half of the 24 models that incorporate violations (i.e., stepwise and full models). There were 10 different possible models for the logistic model-building sample, and these were also evaluated on the validation sample and on the model-building sample, but with a linear regression setting, resulting in 30 possible models. However, we regarded only 24 of these 30 models as informative since we exclude the 6 simple models that ignore the pre-period violation information. Of the violations considered, only speeding (violation 392.2S) and failure to use a seatbelt while operating a CMV (392.16) were significant and stable in all 24 models. A similar picture arises for some other violations, though many of the models did not result in a significant relationship between the violation in question and the crash outcome, as indicated in table 12. Only 41 violations were significant in 5 or more models out of 24. However, even for the top 13 violations with respect to frequency of significance and stability across the 24 models, predictive power is still affected by poor model diagnostics.
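Mallows’ Cp can be computed directly from residual sums of squares. The sketch below is an illustrative example on synthetic data rather than our carrier models; it shows that a sub-model containing the truly predictive covariates scores a Cp near its parameter count p, the benchmark for a preferable model:

```python
import numpy as np

def mallows_cp(sse_sub, mse_full, n, p):
    """Mallows' Cp for a sub-model with p parameters (incl. intercept).
    Values near or below p indicate the sub-model predicts about as
    well as the full model while being more parsimonious."""
    return sse_sub / mse_full - n + 2 * p

def ols_sse(X, y):
    """Residual sum of squares and parameter count for an OLS fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.sum((y - X1 @ beta) ** 2), X1.shape[1]

rng = np.random.default_rng(3)
n = 1000
X_full = rng.normal(size=(n, 10))            # 10 candidate predictors
y = X_full[:, 0] - 0.5 * X_full[:, 1] + rng.normal(size=n)

sse_f, p_f = ols_sse(X_full, y)
mse_full = sse_f / (n - p_f)
sse_s, p_s = ols_sse(X_full[:, :2], y)       # sub-model: 2 real predictors
print(mallows_cp(sse_s, mse_full, n, p_s))   # close to p_s = 3
```

A sub-model that omits a real predictor inflates sse_sub and drives Cp well above p, which is why a low Cp flags a model that retains the important effects while dropping noise variables.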
This finding is echoed in the results of the linear regression models for Bayesian crash rates (see table 11), where the model that excluded all violations performed similarly to models that included some significant violations. Whether modeling crash status (yes/no) or a crash rate, the predictive power of SMS violations is weak. When comparing the predictive power of the models that result from the model-building sample, once applied to the validation sample, there is a consistent picture regarding the model fit (see table 13). In particular, the model fit is generally poor according to the H-L value; the stepwise model tends to perform better according to the AIC, but the ROC, adjusted R-squared, and percent discordant do not indicate the models have a strong ability to discriminate and predict future crashes. Classification tables that result from evaluating the model-building sample models, but estimated from the validation sample, generally resulted in similar results to those presented in table 10. The predictive power observed in these modeling and sensitivity analyses indicates that SMS may be less precise than what is reported and that the available information on violations is limited for the purpose of scoring carriers or predicting their crash risk. Regardless of which type of model we fit, we see that the predictive power of our models is low, and the use of the SMS violations in predicting future crashes is not very precise. The number of stable and significant effects across the various model-fitting scenarios that include violations is small. Of the 754 violations tracked for SMS, only around 160 met the basic criteria of non-zero variance and non-zero counts for at least 1 percent of the sample. Of these, only two violations (speeding and failure to wear a seatbelt while operating a CMV) consistently appeared as stable predictors of crashes, regardless of data and model. While some other violations appeared in models, only 13 were significant and stable in at least half of the models; most were significant in no more than half of the models examined, and most often in fewer than 5 of them. The results did not vary substantially according to whether observed versus Bayesian violation rates, crash versus Bayesian crash rates, or restricted data (carriers with more than 20 vehicles) versus full data were used to estimate crashes. Therefore, the modeling attempts did not overcome the issues that result from small exposures. The results were generally confirmed when evaluated on a validation sample, indicating that out-of-sample prediction is stable, yet not strong. Ultimately, much of the variance in crash predictions remains unexplained, regardless of the model and model-building data, so SMS might be less precise when the objective is to predict crashes. This appendix provides additional information and illustrations of the distribution of the motor carrier population included in our analysis, such as carrier size, number of crashes, inspections, and high risk status (see table 14). It also provides results of our analysis on the number and percentage of carriers above or below intervention thresholds, as well as the frequency and rate of crashes for each of those groups of carriers within each BASIC using FMCSA’s methodology and the illustrative alternative methodology (i.e., using a stronger data sufficiency standard) demonstrated earlier in the report.
In addition, this appendix provides summary statistics of the various motor carrier populations used in FMCSA and GAO analyses. These statistics include, among other things, the numbers of carriers with an SMS score (i.e., “measure”) and the number of carriers above an intervention threshold in at least one BASIC. Finally, this appendix provides the complete graphical results of our analysis of FMCSA’s violation rates, safety event groups, and distribution of SMS scores for carriers above FMCSA’s intervention threshold using FMCSA’s methodology. Table 15 contains the results of our analysis using FMCSA’s SMS 3.0 methodology. This analysis calculated the number and percentage of carriers above and below intervention thresholds for each BASIC using carrier data from December 2007 through December 2009, and determined which carriers subsequently crashed during the 18-month evaluation period, December 2009 through June 2011. The analysis also presents aggregate crash rates for comparison purposes. Table 16 contains the results of our analysis using an illustrative alternative incorporating a stronger data sufficiency standard, among other things, as described elsewhere in this report (e.g., carriers with 20 or more inspections or 20 or more vehicles, depending upon the BASIC). As in the previous table, this analysis calculated the number of carriers above and below intervention thresholds for each BASIC using carrier data from December 2007 through December 2009, and determined which carriers crashed during the subsequent 18-month period, December 2009 through June 2011. The analysis also presents aggregate crash rates for comparison purposes. Table 17 contains selected SMS outcomes based on results from FMCSA’s and GAO’s analyses. The following figures are graphical results of our analysis of the average and range of violation rates for carriers, the percentage of carriers above FMCSA’s intervention thresholds for various safety event group categories, and the distribution of SMS scores for carriers above FMCSA’s intervention thresholds using FMCSA’s methodology, as discussed in the body of this report above. Figures 10 through 16 contain the average and range of violation rates for all carriers (where a violation rate could be calculated) by carrier size, for all of the BASICs. Figures 17 through 25 contain the percentage of carriers above intervention thresholds within safety event groups for each BASIC. Finally, figures 26 through 32 show the distribution of carriers above intervention thresholds for each BASIC by carrier size. In addition to the individual named above, H. Brandon Haller, Assistant Director, Russell Burnett, Melinda Cordero, Jennifer DuBord, Colin Fallon, David Hooper, Matthew LaTour, Grant Mallie, Jeff Tessin, Sonya Vartivarian, and Joshua Ormond made key contributions to this report.
From 2009 to 2012, large commercial trucks and buses averaged about 125,000 crashes per year, with about 78,000 injuries and over 4,100 fatalities. In 2010, FMCSA replaced its tool for identifying the riskiest carriers--SafeStat--with the CSA program. CSA is intended to reduce the number of motor carrier crashes by better targeting the highest risk carriers using information from roadside inspections and crash investigations. CSA includes SMS, a data-driven approach for identifying motor carriers at risk of causing a crash. GAO was directed by the Consolidated Appropriations Act of 2012 to monitor the implementation of CSA. This report examines the effectiveness of the CSA program in assessing safety risk for motor carriers. GAO spoke with FMCSA officials and stakeholders to understand SMS. Using FMCSA's data, GAO replicated FMCSA's method for calculating SMS scores and assessed the effect of changes--such as stronger data-sufficiency standards--on the scores. GAO also evaluated SMS's ability to predict crashes. The Federal Motor Carrier Safety Administration's (FMCSA) Compliance, Safety, Accountability (CSA) program has helped the agency contact or investigate more motor carrier companies that own commercial trucks and buses than the previous approach, SafeStat, and has provided a range of safety benefits to safety officials, law enforcement, and the industry. Specifically, from fiscal year 2007 to fiscal year 2012, FMCSA more than doubled its number of annual interventions, largely by sending warning letters to riskier carriers. A key component of CSA--the Safety Measurement System (SMS)--uses carrier performance data collected from roadside inspections or crash investigations to identify high risk carriers for intervention by analyzing relative safety scores in various categories, including Unsafe Driving and Vehicle Maintenance. FMCSA faces at least two challenges in reliably assessing safety risk for the majority of carriers. First, for SMS to be effective in identifying carriers more likely to crash, the violations that FMCSA uses to calculate SMS scores should have a strong predictive relationship with crashes. However, based on GAO's analysis of available information, most regulations used to calculate SMS scores are not violated often enough to strongly associate them with crash risk for individual carriers. Second, most carriers lack sufficient safety performance data to ensure that FMCSA can reliably compare them with other carriers. To produce an SMS score, FMCSA calculates violation rates for each carrier and then compares these rates to those of other carriers. Most carriers operate few vehicles and are inspected infrequently, providing insufficient information to produce reliable SMS scores. FMCSA acknowledges that violation rates are less precise for carriers with little information, but its methods do not fully address this limitation. For example, FMCSA requires a minimum level of information for a carrier to receive an SMS score; however, this requirement is not strong enough to produce sufficiently reliable scores. As a result, GAO found that FMCSA identified many carriers as high risk that were not later involved in a crash, potentially causing FMCSA to miss opportunities to intervene with carriers that were involved in crashes. FMCSA's methodology is limited because of insufficient information, which reduces the precision of SMS scores.
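To illustrate the relative scoring mechanism described above, the sketch below ranks each carrier's violation rate as a percentile within a peer group of carriers with similar inspection counts. The function name, grouping boundaries, and threshold are hypothetical placeholders for illustration, not FMCSA's actual safety event group definitions:

```python
import numpy as np

def sms_style_percentiles(viol_rate, n_inspections,
                          group_edges=(5, 10, 20, 50)):
    """Rank each carrier's violation rate as a 0-100 percentile within
    its peer group (carriers with similar inspection counts). The
    group boundaries here are illustrative, not FMCSA's cut-points."""
    viol_rate = np.asarray(viol_rate, dtype=float)
    groups = np.digitize(n_inspections, group_edges)
    scores = np.empty_like(viol_rate)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = viol_rate[idx].argsort().argsort()   # 0..len(idx)-1
        scores[idx] = 100.0 * ranks / max(len(idx) - 1, 1)
    return scores

rng = np.random.default_rng(4)
scores = sms_style_percentiles(rng.random(1000),
                               rng.integers(1, 100, 1000))
print((scores >= 80).sum())  # carriers above a hypothetical threshold
```

Because a carrier with only one or two inspections is ranked on an extremely noisy rate, its percentile can swing widely from one inspection to the next, which is the reliability problem described above.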
GAO found that by scoring only carriers with more information, FMCSA could better identify high risk carriers likely to be involved in crashes. This illustrative approach involves trade-offs; it would assign SMS scores to fewer carriers, but these scores would generally be more reliable and thus more useful in targeting FMCSA's scarce resources. In addition to using SMS scores to prioritize carriers for intervention, FMCSA reports these scores publicly and is considering using a carrier's performance information to determine its fitness to operate. Given the limitations with safety performance information, determining the appropriate amount of information needed to assess a carrier requires consideration of how reliable and precise the scores need to be for the purposes for which they are used. Ultimately, the mission of FMCSA is to reduce crashes, injuries, and fatalities. GAO continues to believe a data-driven, risk-based approach holds promise; however, revising the SMS methodology would help FMCSA better focus intervention resources where they can have the greatest impact on achieving this goal. GAO recommends that FMCSA revise the SMS methodology to better account for limitations in drawing comparisons of safety performance information across carriers. In addition, determination of a carrier's fitness to operate should account for limitations in available performance information. In response to comments from the Department of Transportation (USDOT), GAO clarified one of the recommendations. USDOT agreed to consider the recommendations.
In July 1997, the Secretary General proposed a broad reform program to focus the United Nations on achieving results as it carried out its mandates. These reforms included restructuring U.N. leadership and operations, developing a human capital system based on results, and introducing a performance-based programming and budgeting process. Although the Secretary General does not have direct authority over specialized agencies and many funds and programs, changes at the Secretariat were intended to serve as a model for reforms throughout the U.N. system. The Secretary General launched a second round of reforms in 2002 that expanded on the 1997 initiatives and reflected new areas of focus, such as public information activities and the human rights program. The overall goal was to align U.N. activities with the priorities defined by the Millennium Declaration and the new security environment. The 1997 and 2002 initiatives followed several efforts to reform the United Nations that began soon after its creation in 1945. Despite periodic cycles of reform, U.N. member states have continued to have concerns about inefficient operations; problems of fragmentation, duplication, and poor coordination; and the proliferation of mandates. These concerns have also highlighted the need for more accountable leadership and improvement in key management practices. As the largest financial contributor to the United Nations, the United States has a strong interest in the completion of these reforms and has played a significant role in promoting financial, administrative, and programmatic changes. The State Department and the U.S. Permanent Mission to the United Nations continue to promote further reforms and report on the status of major reform initiatives to the U.S. Congress. The call for reforms has also grown as a result of problems identified in the United Nations’ management of the Oil for Food program. Last year we reported that the former Iraqi government obtained $10.1 billion through oil smuggling and illicit commissions and surcharges on commodity and oil contracts. The Iraq Survey Group, responsible for investigating Iraq’s activities in developing weapons of mass destruction, estimated illicit revenues at $10.9 billion and found similar irregularities in contract overpricing and surcharges. In April 2004, the Secretary General established the U.N. Independent Inquiry Committee (IIC) to investigate allegations of mismanagement and misconduct within the Oil for Food program. In February 2005, the IIC issued an interim report on the initial procurement of U.N. contractors, recipients of oil allocations, internal audit structure and activities, and management of administrative expenses. The Committee offered numerous recommendations for improving the United Nations’ internal audit function. Sustained oversight at all levels of the organization is needed for the United Nations to advance its reform agenda and achieve lasting results. The United Nations had completed 51 percent of its 1997 and 2002 reform initiatives. However, it has not periodically conducted comprehensive assessments to determine the status and impact of the reforms. Consequently, the Secretariat could not determine if it was meeting the Secretary General’s overall reform goals.
The Secretary General launched two major reform initiatives, in 1997 and 2002, to address the United Nations’ core management challenges—poor leadership of the Secretariat, duplication among its many offices and programs, and the lack of accountability for staff performance. In assessing the status of these reforms, we found that the United Nations had made some progress in implementing these initiatives, putting in place 51 percent of all reforms. We found that 60 percent of the 88 reform initiatives in the 1997 agenda and 38 percent of the 66 reforms in the 2002 agenda were in place. The 1997 agenda consisted of initiatives that the Secretary General could implement on his own authority and those that required member states’ approval. The implementation of reforms under the Secretary General’s authority advanced more quickly than those under the authority of member states. We found that 70 percent of reform initiatives under the Secretary General’s authority were in place, compared with 44 percent of the initiatives requiring member state approval. Delays in acquiring member state approval are due, in part, to the longer time needed for the General Assembly to reach majority agreement. In addition, many reform efforts comprise only the first step in achieving longer-term goals. More than one-quarter of the Secretary General’s completed reforms in both the 1997 and 2002 agendas consisted of developing a written plan or establishing a new office. Although the establishment of a new office or department—such as the office to manage the U.N.’s interrelated programs to combat crime, drugs, and terrorism—can be counted as a completed reform, it is the office’s performance in meeting its objectives that will determine its impact and the extent to which it contributes to the Secretary General’s overall reform goals. We also reported that the Secretariat had not conducted systematic, comprehensive assessments of the status and impact of the Secretary General’s 1997 and 2002 reform initiatives. Without such assessments, the Secretariat was not able to determine what progress had been made and where further improvements were needed. Individual departments and offices within the Secretariat tracked reforms that related to their specific area of work. OIOS also monitored and evaluated the impact of selected reforms but was not responsible for overseeing the implementation of the overall reform agendas. In addition, the Deputy Secretary General, who is responsible for overseeing the overall reform process, neither systematically assessed departments’ performance in implementing reforms nor held managers directly accountable. The office of the Deputy Secretary General had only one full-time professional staff member dedicated to reform issues. In 1998 and 2003, the Secretary General issued status reports on the 1997 and 2002 reforms, respectively. These reports did not cover all of the initiatives in the respective reform plans or include comprehensive assessments of the reforms. In February 2005, we contacted the Office of the Deputy Secretary General to determine recent actions it has taken to report on the status and impact of the Secretary General’s reform initiatives. An official stated that the office has conducted an internal assessment but has not released this document to member states. The Secretary General announced his intention to submit additional reform proposals to improve the organization’s transparency and accountability before a September 2005 summit of world leaders.
Holding staff accountable for implementing these reforms and measuring their impact is difficult without regular, comprehensive reports on the overall status and impact of reform initiatives. Adopting key practices in management, oversight, and accountability for reforms, such as systematic monitoring and evaluation, could facilitate the achievement of the Secretary General’s overall reform goals. At the program level, management reviews that compare actual performance to expected outcomes are critical elements of effective oversight and accountability. The United Nations has completed the initial phase of implementing reforms in a key area—performance-based budgeting. It adopted a budget that reflects a results-based budgeting format, including specific program costs, objectives, expected results, and performance indicators to measure results. However, it has yet to develop a system to regularly monitor and evaluate program results to shift resources to more effective programs. Program reviews that compare actual performance to expected outcomes are important to account for resources and achieve effective results. We reported in February 2004 that the United Nations had begun to adopt a performance-based budgeting system. A performance-based budgeting framework includes three key elements: (1) a budget that reflects a budgeting structure based on results, linking budgeted activities to performance expectations; (2) a system to regularly monitor and evaluate the impact of programs; and (3) procedures to shift resources to meet program objectives. In December 2000, the Secretariat implemented the first key element of a performance-based budgeting framework by adopting a budget that reflects a results-based budgeting format, including specific program costs, objectives, expected results, and performance indicators to measure the results. For the first time, the 2004-2005 budget included specific performance targets and baseline data for many performance indicators that can help measure performance over time and allow program managers to compare actual achievements to expected results. However, oversight committees have reported that some programs still lacked clear and concise expected outcomes and performance indicators. Further, although the United Nations had developed measures for assessing program progress, many of these measures represent tasks and outputs rather than outcomes. For example, in 2003, a key objective of the peacekeeping operation in East Timor was to increase the capacity of the national police force to provide internal security. The indicator for measuring results was the number of police trained—a goal of 2,830 police by 2004. We reported, however, that the number of police trained did not reflect the quality of their training or whether they improved security in East Timor. The Secretariat had not systematically monitored and evaluated program impact or results—the second element of performance budgeting. In 2002, the Office of Internal Oversight Services (OIOS) found that nearly half of U.N. program managers did not comply with U.N. regulations to regularly monitor and evaluate program performance. Program managers were not held accountable for meeting program objectives because U.N. regulations prevented linking program effectiveness and impact with program managers’ performance. OIOS did not provide statistics on the number or percentage of program managers complying with U.N.
regulations regarding monitoring and evaluation activities in its most recent report on the Secretariat’s evaluation efforts. However, OIOS reported that program managers did not develop comprehensive monitoring and evaluation plans in 12 out of 20 programs surveyed, and management review of evaluations was inconsistent among programs. OIOS also reported that, overall, evaluation findings were not used to improve program performance. In some cases, such as with the Office of the High Commissioner for Human Rights, monitoring and evaluation responsibilities were assigned to low-level staff with minimal oversight from program managers. Further, for the majority of programs, no resources had been assessed or allocated for monitoring and evaluation activities. As a result, it is unlikely that the Secretariat will meet its goal of implementing a full performance-based budgeting system by 2006. The final component of performance budgeting—procedures to review evaluation results, eliminate obsolete programs, and shift resources to other programs—was not in place. The Advisory Committee on Administrative and Budgetary Questions reported in 2003 that it did not receive systematic information from the Secretariat on program impact and effectiveness to determine whether a program was meeting its expected results. In 2004, the Committee for Program and Coordination recommended that the Secretariat improve its monitoring and evaluation system to measure impact and report on results. In December 2003, the General Assembly approved the elimination of 912 of more than 50,000 outputs in the 2004-2005 program budget based on the Secretariat’s review of program activities. However, in 2003, the Advisory Committee on Administrative and Budgetary Questions and the Committee for Program and Coordination reported that many sections in the budget still lacked justifications for continuing certain outputs. The committees recommended that program managers in the Secretariat identify obsolete outputs in U.N. budgets in compliance with U.N. regulations so resources could be moved to new priority areas. Our February 2004 report contained recommendations to promote full implementation of, and accountability for, the Secretary General’s overall reform agenda. Specifically, we recommended that the United States work with other member states to encourage the Secretary General to (1) report regularly on the status and impact of the 1997 and 2002 reforms and other reforms that may follow, (2) differentiate between short- and long-term goals and establish time frames for completion, and (3) conduct assessments of the financial and personnel implications needed to implement the reforms. In addition to a systematic monitoring and evaluation system, a strong internal audit and evaluation function can provide the independent assessments needed to help ensure oversight and accountability. OIOS provides this service through audits, evaluations, inspections, and investigations of U.N. funds and programs. This office provided detailed oversight of many aspects of the Oil for Food program, and its 58 reports point to the need for continued U.N. attention to management reforms. Specifically, reports by the internal auditors and the Independent Inquiry Committee revealed lax oversight of Oil for Food program contracts that resulted in repeated violations of procurement rules and weaknesses in contract management.
In addition, constraints on the internal auditors’ scope and authority prevented the auditors from examining and reporting more widely on some critical areas of the Oil for Food program. U.N. oversight bodies did not obtain timely reporting on serious management problems and were unable to take corrective actions when needed. These constraints limited the internal audit unit’s effectiveness as an oversight tool. Our review of the OIOS audit reports of the Oil for Food program released in January 2005 identified 702 findings and 667 recommendations across numerous programs and sectors. OIOS found recurring problems in procurement, financial and asset management, personnel and staffing, project planning and coordination, security, and information technology. The findings in these audits, which were conducted from 1999 to 2004, suggested a lack of oversight and accountability by the offices and entities audited. In particular, we identified 219 findings and 212 recommendations related to procurement and contract management deficiencies. In February 2005, the IIC also reported that the initial procurement of three major Oil for Food contracts awarded in 1996 did not meet reasonable standards of fairness and transparency. The IIC reported that it will make recommendations concerning greater institutional transparency and accountability in a later report. OIOS also conducted audits of three key contracts for inspecting commodities coming into Iraq and for independent experts to monitor Iraq’s oil exports. OIOS’ findings in the management of two of these contracts supplemented the IIC’s information on the bidding and awarding process. The IIC found that the initial selection process did not conform to competitive bidding rules, while OIOS found lax oversight by the U.N. Office of the Iraq Program (OIP) over contractor performance. The IIC reviewed three major contracts awarded in 1996 to determine if their selections were free from improper influence and were conducted in accordance with U.N. regulations. These contracts were awarded to Lloyd’s Register Inspection Ltd. to inspect humanitarian goods coming into Iraq, Saybolt Eastern Hemisphere BV to inspect oil exported from Iraq, and Banque Nationale de Paris to maintain revenues from Iraqi oil sales. In its February 2005 report, the IIC found that the United Nations initiated expedited competitive bidding processes for both the humanitarian goods and oil inspection contracts. The IIC concluded that, during the bid process, the U.N. Iraq Steering Committee and the Chief of the Sanctions Branch prejudiced and preempted the competitive process by rejecting the lowest qualified bidder in favor of an award to Lloyd’s Register. The IIC found that the regular bidding process was tainted when the branch chief provided a diplomat from the United Kingdom with insider information on the bid amount that Lloyd’s Register needed to win the contract. Similarly, the IIC found that a U.N. procurement officer allowed Saybolt to amend its bid to become the lowest bidder. The IIC characterized the bidding process for this contract as neither fair nor transparent. The IIC also found irregularities in the award of a contract to Banque Nationale de Paris. The decision did not conform to the U.N. requirement to award contracts to the lowest acceptable bidder, and no official justified the rejection of the lowest acceptable bidder in writing, as required by U.N. regulations.
OIOS conducted audits of the Lloyd’s Register and Saybolt contracts as well as the contract to Cotecna Inspection SA, the company that succeeded Lloyd’s Register for the inspection of humanitarian goods. In a July 1999 audit of the Lloyd’s Register contract, OIOS found contractor overcharges, unverified invoices, violations of procurement regulations, and limited U.N. oversight. For example, while the contract allowed the United Nations to inspect and test all contractor services, the auditors found that OIP had received, certified, and approved the contractor’s invoices without on-site verification or inspection reports. In responding to the auditors’ findings, OIP rejected the call for on-site inspections and stated that any dissatisfaction with the contractor’s services should come from the suppliers or their home countries. A July 2002 audit of Saybolt’s operation found similar problems, including inadequate documentation for contractor charges and payments made for equipment already included in the contractor’s daily staff cost structure. As with the Lloyd’s Register contract, OIOS found that OIP officials charged with monitoring the Saybolt contract had made no inspection visits to Iraq but had certified the contractor’s satisfactory compliance with the contract and approved extensions to the contract. In an April 2003 report, OIOS cited concerns about amendments and extensions to Cotecna’s original $4.9 million contract. Specifically, OIOS found that OIP increased Cotecna’s contract by $356,000, 4 days after the contract was signed. The amendment included additional costs for communication equipment and operations that OIOS asserted were included in the original contract. In addition, OIOS found that the contract equaled the offer of the second lowest bidder through amendments and extensions during the contract’s first year. Accordingly, OIOS concluded that, one year after the start of the contract, the reason for awarding the contract to Cotecna—on the grounds that it was the lowest bidder—was no longer valid. In addition to the three inspection contracts, OIOS reported procurement weaknesses in other areas of the Oil for Food program. For example, in November 2002, OIOS reported that almost $38 million in procurement of equipment for the U.N.-Habitat program was not based on a needs assessment. As a result, 51 generators went unused from September 2000 to March 2002, and 12 generators meant for project-related activities were converted to office use. OIOS further reported that 11 purchase orders totaling almost $14 million showed no documentary evidence supporting the requisitions. In 1994, the General Assembly established OIOS to conduct audits, evaluations, inspections, and investigations of U.N. programs and funds. Its mandate reflects many characteristics of U.S. inspector general offices in purpose, authority, and budget. For example, OIOS staff have access to all U.N. records, documents, or other material assets necessary to fulfill their responsibilities. We reported in 1997 that OIOS was in a position to be operationally independent, had overcome certain start-up problems, and had developed policies and procedures for much of its work. We could not test whether OIOS exercised its authority and implemented its procedures in an independent manner because OIOS did not provide us with access to certain audit and investigation reports and its working papers.
However, we concluded that OIOS could do more to help ensure that the information it presents, the conclusions it reaches, and the recommendations it makes can be relied upon as fair, accurate, and balanced. The IIC also made a number of recommendations in January 2005 to help provide OIOS’ audit division with the mandate, structure, and support it needs to operate effectively. The IIC found a need for greater reporting and budgetary independence for OIOS and its internal audit division. This division has two funding sources: (1) the U.N. regular budget, which covers normal, recurring audit activities; and (2) extra-budgetary funds allocated outside the U.N. regular budget, which cover audits of special non-recurring funds and programs, such as the Oil for Food program. OIOS’ internal audit division received extra-budgetary funds directly from the Oil for Food program managers it audited. It assigned 2 to 6 auditors to cover the program. The IIC found that this level of staffing was low compared to OIOS’ oversight of peacekeeping operations and to levels recommended by the U.N. Board of Auditors. The IIC found that the practice of giving executive directors of funds and programs the right to approve the budgets and staffing of internal audit activities can lead to critical and high risk areas being excluded from internal audit examination and review by oversight bodies. For example: Since its inception, OIOS has generally submitted its audit reports only to the head of the audited agency. However, in August 2000 OIOS tried to widen its report distribution by sending its Oil for Food reports to the Security Council. The OIP director opposed this proposal, stating that it would compromise the division of responsibility between internal and external audit. The Deputy Secretary General also denied the request, and OIOS subsequently abandoned any efforts to report directly to the Security Council. OIOS did not examine OIP’s oversight of the contracts for humanitarian goods in central and southern Iraq that accounted for almost $40 billion in Oil for Food proceeds. OIP was responsible for examining these contracts for price and value at its New York headquarters. The Iraqi government’s ability to negotiate contracts directly with commodity suppliers was an important factor in enabling Iraq to levy illegal commissions. OIOS believed that these contracts were outside its purview because the Security Council’s sanctions committee was responsible for their approval. However, OIP management also steered OIOS toward program activities in Iraq rather than headquarters functions where OIP reviewed the humanitarian contracts. In May 2002, OIP’s executive director did not approve the auditors’ request to conduct a risk assessment of OIP’s Program Management Division, citing financial reasons. We reported last year that it was unclear how certain entities involved in the Oil for Food program, including OIP, exercised their oversight responsibilities over humanitarian contracts and sanctions compliance by member states. Such an assessment might have clarified OIP’s oversight role and the actions it was taking to carry out its management responsibilities. In 2002, the U.N. Compensation Commission challenged OIOS’ audit authority. In its legal opinion, the U.N. Office of Legal Affairs noted that the audit authority extended to computing the amounts of compensation but did not extend to reviewing those aspects of the panels’ work that constitute a legal process.
However, OIOS disputed the legal opinion, noting that its mandate was to review and appraise the use of U.N. financial resources. OIOS believed that the opinion would effectively restrict any meaningful audit of the claims process. OIOS identified more than $500 million in potential overpayments by the Commission. However, as a result of the legal opinion, the Commission did not respond to many OIOS observations and recommendations, considering them beyond the scope of an audit. Constraints on the internal auditors’ scope and authority prevented the auditors from examining and reporting more widely on problem areas in the Oil for Food program. These limitations hampered the auditors’ coverage of the Oil for Food program and the unit’s effectiveness as an oversight tool. U.N. oversight bodies did not obtain timely reporting on serious management problems and were unable to take corrective actions when needed. However, in December 2004, the General Assembly required OIOS to include in its annual and semi-annual reports titles and brief summaries of all OIOS reports issued during the reporting period and to provide member states with access to original versions of OIOS reports upon request. The IIC also recommended that OIOS and its internal audit division directly report to a non-executive board and that budgets and staffing levels for all audit activities be submitted to the General Assembly and endorsed by an independent board. The Secretary General’s announcement that he intends to offer a U.N. reform agenda in September 2005 gives the United Nations an opportunity to take a more strategic approach to management reform. A systematic review of the status of the 154 reforms begun in 1997 and 2002 and information from the Oil for Food program would allow the Secretary General to develop a comprehensive, prioritized agenda for continued U.N. reform. We also encourage continued attention to our February 2004 recommendation that the United States work with other member states to encourage the Secretary General to report regularly on the status of reform efforts, prioritize short- and long-term goals, and establish time frames to complete reforms. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Subcommittee members may have. For further information, please contact Joseph A. Christoff at (202) 512-8979. Individuals making key contributions to this testimony and the reports on which it was based are Phyllis Anderson, Leland Cogliani, Lynn Cothern, Katie Hartsburg, Jeremy Latimer, Tetsuo Miyabara, Michael Rohrback, and Audrey Solis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.N. regular budget for the 2004-2005 biennium exceeded $3 billion for the first time. In light of the organization's increasing demands, the U.N. Secretary General and member states have called on the Secretariat to better define priorities and eliminate outdated activities. In response, the Secretary General launched major reform initiatives in 1997 and 2002, and we reported on the status of these efforts in February 2004. Audits and investigations of the U.N. Oil for Food program have also brought attention to recurring management weaknesses. As the largest financial contributor to the United Nations, the United States has a strong interest in the completion of the Secretary General's reforms. GAO provides observations on areas for U.N. reform based on our 2004 report and our continuing review of the Oil for Food program, including our analysis of internal audit reports and other documents. The United Nations needs sustained oversight at all levels of the organization to achieve lasting results on its reform agenda. We reported in 2004 that the Secretariat had made progress in implementing 51 percent of the Secretary General's 1997 and 2002 management reform initiatives. However, we found that more than one-quarter of the completed reforms only consisted of developing plans or establishing new offices--the first steps in achieving longer term reform goals. In addition, the Secretariat had not periodically conducted comprehensive assessments of the status and impact of its reforms. Accordingly, the Secretariat had not been able to determine what progress had been made or where future improvements were needed. At the program level, management reviews that compare actual performance to expected results are critical elements of effective oversight and accountability. The United Nations has completed the initial phase of implementing reforms in a key area--performance-based budgeting. It adopted a budget that reflects a results-based budgeting format, including specific program costs, objectives, expected results, and performance indicators to measure results. However, the United Nations has yet to implement the next critical step in performance-based budgeting--a system to monitor and evaluate program impact or results. Program reviews that compare actual performance to expected outcomes are important for accounting for resources and achieving effective results. A strong internal audit function provides additional oversight and accountability through independent assessments of U.N. activities, as demonstrated by audits of the U.N. Oil for Food program. U.N. internal auditors found recurring management weaknesses in 58 audits they conducted over 5 years. However, constraints on their scope and authority prevented the auditors from examining and reporting widely on problems in the Oil for Food program. U.N. oversight bodies did not obtain timely reporting on serious management problems and were unable to take corrective actions when needed. These constraints limited the internal audit unit's effectiveness as an oversight tool. GAO plans to conduct more detailed work on the role of the internal auditors in upcoming engagements.
Under the Clean Air Act, EPA establishes health-based air quality standards that the states must meet and regulates air pollutant emissions from various sources. These include industrial facilities and mobile sources, such as automobiles and other transportation. EPA has issued air quality standards for six primary pollutants—carbon monoxide, lead, nitrogen oxides, ozone, particulate matter, and sulfur dioxide—that have been linked to a variety of health problems. For example, ozone can inflame lung tissue and increase susceptibility to bronchitis and pneumonia. In addition, nitrogen oxides and sulfur dioxide contribute to the formation of fine particles that have been linked to aggravated asthma, chronic bronchitis, and premature death. In 2002, the most recent year for which data were available, 146 million Americans lived in areas that failed to meet at least one air quality standard, according to EPA. Subject to EPA’s oversight, state and local air quality agencies generally administer the NSR program and operate under one of two arrangements. First, some agencies located in areas that meet air quality standards have “delegation” agreements with EPA under which they implement the NSR program contained in EPA’s regulations. Under the second arrangement, agencies design their own programs by incorporating all of their air quality regulations, including federal requirements, into overall air quality plans, known as state implementation plans. They update these plans periodically and submit them to EPA for approval. In addition, the Clean Air Act requires those agencies that implement their own air quality programs to ensure their requirements are at least as stringent as EPA’s regulations. State and local agencies may also supplement the federal NSR program with additional requirements. However, some jurisdictions have laws or policies that prevent agencies from implementing more stringent regulations. Throughout its history, the NSR program has been characterized by complexity and controversy, involving disputes between EPA and industry about, among other things, whether certain facility changes qualified for the routine maintenance, repair, and replacement exclusion. In recent years, EPA has taken enforcement action against companies in several industries, including some electricity producers, forest product manufacturers, and petroleum refineries, alleging noncompliance with the program. In addition to concerns about enforcement-related issues, some industry representatives have also raised concerns that the time required to obtain a NSR permit and the cost of installing controls have prevented facilities from making changes that enhance energy efficiency and reduce air emissions, such as modifying a boiler so that it produces the same amount of energy with less fuel. In May 2001, the Vice President’s National Energy Policy Development Group recommended, among other things, that the Administrator of the EPA, in consultation with the Secretary of Energy and other federal agencies, examine the impact of the NSR program on investments in new utility and refinery generation capacity, on energy efficiency, and on environmental protection. In its June 2002 NSR Report to the President, EPA concluded, among other things, that the program had not affected investments in new power plants and refineries but had discouraged some energy efficiency projects at existing facilities, including some that would have reduced air emissions.
This report also contained recommendations for revising the program. Subsequently, EPA issued a final rule on December 31, 2002, which contained five provisions, identified in table 1, that exempt certain facility changes from requirements to obtain NSR permits. These revisions have been the subject of congressional debate. For example, in 2002, the Congress held hearings during which members of the Congress, EPA officials, and a number of stakeholders—including industry, states, and environmental groups—presented their positions on the revisions. Also, legislation has been introduced in the Congress that seeks to further regulate emissions from industrial facilities. In addition, a number of environmental and public health groups, as well as a group of states primarily from the Mid-Atlantic and Northeast, claimed that the December 2002 final rule violated the Clean Air Act and asked EPA to reconsider several aspects of the rule. In July 2003, EPA agreed to do so and then solicited public comment on the areas under reconsideration. Based on this input, EPA announced at the end of October 2003 that it would make several technical changes to the rule. State and local agencies that operate under delegation agreements were required to have implemented the December 2002 rule by March 2003, or return responsibility for implementing the rule to EPA, while those operating under state implementation plans have until January 2006 to revise their regulations accordingly. As for the December 2002 proposed provisions—which would further specify what facility changes are exempt from NSR requirements under the routine maintenance, repair, or replacement exclusions—a coalition of primarily Mid-Atlantic and Northeastern states and environmental and public health groups challenged the legality of the equipment replacement rule in court after it was finalized in October 2003. State and local agencies that operate under delegation agreements were required to implement this rule by December 26, 2003, or have EPA implement it for them, while those operating under state implementation plans have until October 2006 to revise their regulations accordingly. However, on December 24, 2003, the U.S. Court of Appeals for the District of Columbia Circuit stayed the equipment replacement rule pending further review, preventing the rule from going into effect while the court considers the legal challenges. EPA has not determined what additional action, if any, it will take regarding establishing an annual maintenance allowance below which facility changes would be considered exempt from NSR requirements. A majority of the state officials expect that the December 2002 final rule will provide industry with greater flexibility to make facility changes without triggering NSR requirements for permits. However, a majority of the officials also expect that the rule will lead to an overall increase in emissions of harmful air pollutants and hinder efforts to meet air quality standards, potentially creating or exacerbating risks to public health. Most of the officials also expect that the rule will increase their agencies’ workload. In pursuing the December 2002 final rule, EPA, among other things, sought to offer facilities greater flexibility to improve and modernize their operations. Similarly, a majority of the state air quality agency officials (29 of 44) said that in their professional opinion, obtaining fewer permits and more flexibility to modify facilities are the rule’s two primary positive effects for industry.
For example, more than half of the state officials believe that four of the five provisions of the rule—including the revised test for determining whether a facility modification significantly increases net emissions and is, therefore, subject to NSR—will decrease the number of permits state air quality agencies issue. For perspective, the state officials reported that their agencies had issued a total of 600 NSR permits to companies modifying existing facilities during the 3 years prior to the final rule. The officials expect the number of such permits issued to decrease under the final rule because it expands the range of activities that companies may pursue without a permit and, in some cases, controls. Forty of the state officials identified the requirements to install pollution controls as one of the best features of the NSR program prior to the final rule. According to EPA, however, several provisions of the rule require companies to make certain commitments, such as accepting an overall limit on their emissions, in exchange for avoiding permitting. Therefore, EPA believes the rule will encourage investments that decrease emissions. As we reported in our August 2003 report, EPA found that the December 2002 final rule would lead to overall benefits by encouraging energy efficiency projects, reducing emissions and related health risks, and providing economic benefits to companies affected by the program. For example, EPA's analysis found that the rule would encourage companies to implement energy efficiency projects that would reduce emissions, such as upgrades to boilers used to generate power. However, only 9 of the 44 officials we surveyed anticipated that the rule would provide the impetus for companies to increase these projects.

A majority of the state officials expect emissions to increase as a result of the final rule—in contrast to EPA's conclusion, in the agency's analysis of the rule's environmental effects, that it will reduce emissions from industrial facilities. More specifically, 27 of the 44 officials we surveyed expect that, overall, the December 2002 rule will increase emissions; 8 officials believe emissions will decrease or remain the same (the remaining 9 officials could not judge the emissions impact). At least half of the officials thought the rule would increase emissions of carbon monoxide, nitrogen dioxide, ozone, particulate matter, or sulfur dioxide—all of which have been linked to health problems and are controlled by a variety of Clean Air Act programs. When asked about the emissions impact of each specific provision in the final rule, a majority of the state officials identified two of the rule's provisions as most likely to cause emissions increases, as table 2 illustrates. These include the revised methods for determining (1) a facility's historical or "baseline" emissions and (2) whether a change will result in a significant net emissions increase. For example, a majority of the officials believe the "baseline" provision will increase emissions. This provision allows industrial facilities to use any consecutive 24-month period in the previous decade as a baseline. EPA changed this emissions calculation method to, among other things, account for variations in business cycles. The agency concluded that this provision would have negligible emissions consequences because it would not alter the baseline for most facilities, including coal-fired power plants (the largest emitting group of facilities).
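Because the arithmetic of this provision drives much of the disagreement that follows, a minimal sketch may help. The following code uses invented monthly figures and a deliberately simplified annualization; it illustrates the officials' reading of the provision, not EPA's regulatory calculation. It selects the consecutive 24-month window with the highest emissions from a 10-year record:

```python
# Illustrative sketch of baseline selection under the December 2002 rule,
# as survey respondents described it. All emissions figures are
# hypothetical, and the annualization is deliberately simplified.

def highest_24_month_baseline(monthly_tons):
    """Return the highest annualized emissions over any consecutive
    24-month window in the record (tons per year)."""
    best = 0.0
    for start in range(len(monthly_tons) - 23):
        window = monthly_tons[start:start + 24]
        best = max(best, sum(window) / 2.0)  # 24 months = 2 years
    return best

# Ten years (120 months) of hypothetical data: emissions were higher in
# the first 4 years than in the most recent 6.
history = [100.0] * 48 + [70.0] * 72           # tons per month

baseline = highest_24_month_baseline(history)  # 1,200 tons per year
current = 70.0 * 12                            # 840 tons per year today

print(f"selected baseline: {baseline:,.0f} tons/yr; "
      f"current actual: {current:,.0f} tons/yr")
```

Read this way, the facility's baseline (1,200 tons per year in the sketch) exceeds its current actual emissions (840 tons per year), leaving room for emissions to grow before any significant increase over the baseline occurs. EPA's counterpoint, discussed next, is that required downward adjustments to the baseline limit this effect.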
In addition, companies must adjust their baselines downward to reflect any other emissions limitations that have become effective since the period of time they selected for establishing their baseline, according to EPA. EPA program managers, therefore, maintain that emissions baselines will not significantly increase as a result of this provision. Nevertheless, some officials provided written responses to our survey describing their concerns over this provision. Several such officials asserted that it allows companies to select the 24-month period within the previous 10 years in which their emissions were highest. In addition, 24 officials thought that the provision for plantwide emissions limits, whereby facilities accept a cap on their overall emissions to avoid undergoing NSR, would nevertheless increase emissions. For example, several officials said that the rule enables facilities to establish their emissions cap based on their highest 2 years of emissions in the previous 10 years, thereby enabling them to create a cap that exceeds their current emissions. On the other hand, 10 officials said this provision would decrease emissions, and several asserted that it creates incentives for facilities to reduce or limit their emissions. EPA program managers maintain that this provision will decrease emissions.

In addition to these overall effects, 24 of the state officials anticipate that the rule will particularly allow facilities built prior to the establishment of the NSR program in 1977 to increase their emissions. At the time, the Congress decided to allow existing facilities to defer installation of pollution controls until a major modification was made, with the expectation that, over time, all facilities would install such equipment, and this would lead to lower overall emissions. However, as we concluded in our June 2002 report on emissions from older power plants, taken as a whole, such plants still emit more air pollution for each unit of electricity generated than newer plants. For example, we found that for each megawatt hour of electricity produced, the older facilities emitted about 100 percent more sulfur dioxide and 25 percent more nitrogen oxides than newer facilities.

State officials who believe emissions increases will occur under the rule gave various opinions as to how they would manage such increases. For example, 7 officials said that the rule would not impede their ability to meet or maintain air quality standards. Another 14 expect they will offset the anticipated increases using other air quality regulations, such as those used to control emissions from mobile sources (automobiles and other transportation). However, 13 others expect the rule to impede their ability to meet or maintain standards—despite these other regulations. (Nine said they could not judge the rule's effects.) This could create challenges for agencies that expect the rule to interfere with efforts to meet air quality standards but that said they were prohibited from adopting more stringent regulations, such as those in the District of Columbia, Kentucky, New Jersey, New Mexico, Oklahoma, Pennsylvania, and Wisconsin. On the other hand, 28 state officials said that state law or policy does not prohibit them from adopting more stringent rules than federal requirements. A majority of the state officials' responses contrasted with EPA's statement that the final rule would provide greater certainty than in the past for companies and regulators when determining when NSR requirements apply.
Officials identified this uncertainty as one of the program's main problems before the rule, and 30 officials identified continued uncertainty as the rule's greatest negative impact on state agencies and industry. More specifically, one official explained that the rule is too vague to be implemented with certainty or enforced. Another state official said that the rule's new method for determining whether a facility modification would significantly increase its emissions is by far the most complicated process yet devised for making such determinations. Furthermore, the official stated that a company trying to do the right thing could easily be confused when attempting to determine its future levels of emissions. This confusion could increase both the burden that the rule imposes on state agencies to implement and enforce it and the costs for companies that want to use its provisions. In addition, 30 of the state officials said that the final rule did not resolve any of the other significant problems with the program, including difficulty in determining the stringency of pollution controls that facilities should install when required to do so.

The state officials' survey responses showed that many expect the final rule to impose demands on their agencies, including increased workloads. This comes at a time when many states face budget deficits. Nevertheless, many of the state officials said their agencies plan to adopt all or most of the rule's provisions as written (see table 3). In revising state programs to incorporate the rule, 31 of the officials said that it would take between one and four staff members to adopt the rule's provisions and obtain EPA's approval of their proposed implementation plans. Seventeen of the 36 officials who were able to anticipate the staff needed said that their agency had a plan for obtaining the necessary staff time, but 15 others did not (4 said they did not know if their agency had a plan). In addition, 30 state officials expect that having to administer the rule after it is adopted will increase their workload at least to some extent—despite the fact that most expect a decrease in the number of permits issued in the future. Another 6 expect a decrease in their workload, and 1 expects no change. As noted above, most officials expected continued uncertainty for state agencies as a negative impact of the rule. One official explained that the state agency was spending considerable time learning the regulations and training agency staff and companies, while also developing recordkeeping, tracking, and other administrative processes. Another official expected a dramatic increase in the agency's administrative workload, including time spent reviewing information associated with the rule's provisions for plantwide limits, among other things. Similarly, another official expected a high demand among companies for plantwide limits and that developing them would be very resource-intensive. However, another official said that the workload would increase initially because of the learning curve with the new program but then decrease over time. The remaining 7 officials did not know or had not assessed the rule's workload impact. EPA program managers maintain that, over time, the rule will decrease the workload for agencies. To better understand and implement the rule, all but one of the agency officials said that they would benefit from some type of assistance from EPA, including updated guidance or workshops.
Similar to their opinions on the final rule, a majority (28 of 42) of the state officials expected EPA's two NSR revisions—as proposed in December 2002—to provide companies the flexibility to perform maintenance and replacement activities without obtaining permits and installing pollution controls. However, at least half of the officials also expected that, as a result, emissions would increase, and a third expected the exclusions would exacerbate existing air quality problems and health risks in areas that already do not meet standards. A majority also expected a greater administrative burden and uncertainty for agencies in determining when a facility's activities can be excluded.

Twenty-eight state officials expected the two exclusions would exempt facility changes from requirements for permits and controls, decreasing the number of permits they issue over the next 5 years. This would provide industry with greater flexibility to perform routine maintenance, repair, and replacement activities without incurring the costs and delays of the NSR program. EPA previously determined which activities were considered routine maintenance, repair, and replacement, and thus excluded from NSR, on a case-by-case basis. In December 2002, EPA proposed that, in addition to the case-by-case determination, exclusions could also be determined according to a cost threshold mechanism rather than an emissions threshold—activities whose costs fall below the threshold could be exempted from NSR (see the sketch below). Although, at the time of our survey, the 20 percent cost threshold for replacing equipment had not been established, one official said that this exclusion would exempt most facility modifications from NSR. The state officials identified fewer permits and increased flexibility as the exclusions' most positive benefits for companies. In addition, 19 of the officials expected that the exclusions would have a positive effect on companies' efforts to pursue energy efficiency projects. These officials' opinions are, therefore, consistent with EPA's finding that the exclusions would remove barriers to energy efficiency investments.

Overall, 21 of the 44 officials said they opposed the equipment replacement exclusion. Another 12 said they supported this provision, and the others said they neither supported nor opposed it, or had no opinion. One of the officials who expressed concerns about the proposal said that implementing the equipment replacement exclusion would reduce or eliminate incentives for companies to install well-controlled equipment. Another official expressed the concern that the exclusion did not include the necessary provisions to ensure that a company does not replace an entire emissions unit over a period of just a few years without installing controls.

In addition, 32 of the 44 officials said they opposed the annual maintenance allowance exclusion. Another 3 officials said they supported this provision, and the others said they neither supported nor opposed it, or had no opinion. Specifically, some states were concerned that the financial analysis required to evaluate the cost data and determine exclusions is too complex. One official asserted that the annual maintenance allowance would enable companies to conduct projects that are not routine, thereby extending the life of equipment that should have been upgraded with more efficient equipment.
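To illustrate how the two cost-based exclusions described above might operate, the following sketch encodes each as a simple test. The 20 percent figure comes from the equipment replacement proposal as described above; the dollar amounts, the allowance figure, and the simplified notion of a unit's "replacement value" are hypothetical, and the actual proposals contain criteria omitted here:

```python
# Hypothetical sketch of the two cost-based exclusions proposed in
# December 2002. The 20 percent threshold reflects the equipment
# replacement proposal described above; all dollar figures, the
# allowance, and the simplified "replacement value" are invented.

EQUIPMENT_THRESHOLD = 0.20   # share of the process unit's replacement value

def replacement_is_excluded(project_cost, unit_replacement_value):
    """Equipment replacement: a project costing less than the threshold
    share of the unit's replacement value could be treated as routine
    and exempt from NSR (other proposed criteria omitted)."""
    return project_cost < EQUIPMENT_THRESHOLD * unit_replacement_value

def maintenance_is_excluded(annual_spending, annual_allowance):
    """Annual maintenance allowance: spending below an annual allowance
    (how the allowance would be set was never finalized) would be
    exempt, regardless of the emissions consequences of the work."""
    return annual_spending < annual_allowance

# A $15 million replacement at a unit worth $100 million to replace
# falls under the 20 percent threshold; a $25 million one does not.
print(replacement_is_excluded(15e6, 100e6))   # True  -> exempt from NSR
print(replacement_is_excluded(25e6, 100e6))   # False -> NSR review

# $4 million of maintenance against a hypothetical $5 million allowance
# is exempt; note that emissions never enter either test.
print(maintenance_is_excluded(4e6, 5e6))      # True  -> exempt from NSR
```

As the sketch makes explicit, both tests turn entirely on dollars spent, and emissions never enter the calculation. That feature underlies the concern, noted above, that non-routine projects could escape review, and the concern, discussed below, that changes increasing emissions could nonetheless be exempt.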
According to EPA, the agency received a mixture of positive and negative comments on the annual maintenance allowance approach from key stakeholders, including industry, state and local agencies, and environmental groups. The agency has not determined whether it will finalize this portion of the proposal or pursue other options to address routine maintenance activities.

At least half of the state officials believed that the exclusions would result in increased emissions of harmful air pollutants. For example, half expected that the equipment replacement exclusion would increase emissions, and several believed the cost threshold mechanism would allow older facilities to avoid installing pollution controls. Only 2 officials thought that this exclusion would decrease emissions, while the others expected no change (7) or could not judge (12). Similarly, 26 of 42 officials who responded said they expected the proposed annual maintenance allowance exclusion would increase emissions. For example, one official explained that because the exclusion is based solely on the amount of money spent without regard to emissions increases, facilities could make changes that increase emissions and be exempt from NSR. Only 1 official expected this exclusion would decrease emissions, while the others expected no change (1) or could not judge (14). Overall, 21 of the 44 state officials believed the two exclusions would enable older facilities, built prior to 1977, to increase emissions. Another 8 expected emissions to decrease or remain the same, and 15 were unable to judge. As discussed earlier, older power plants emit more pounds of pollutants per unit of energy generated than newer plants. One official said that older facilities would continue to be modified without going through NSR and upgrading their pollution controls. Another official said that enabling older power plants to avoid installing pollution controls violated the intent of the Clean Air Act.

While at least half of the officials expected the exclusions to increase emissions, fewer expected them to exacerbate existing air quality problems or create new ones. For example, of the 30 officials located in states with areas that currently do not meet air quality standards, about a third expected the equipment replacement exclusion to interfere with areas' efforts to meet standards, while another third did not expect it to interfere, and the final third could not judge. In addition, 13 of these officials expected the annual maintenance allowance exclusion to interfere, while 5 did not, and 12 could not judge. In terms of creating new air quality problems in areas that currently meet standards, only 5 of 44 officials expected the equipment replacement exclusion to have this impact, while 20 did not, and 19 could not judge. Furthermore, only 7 of these officials expected the annual maintenance allowance exclusion to have this impact, while 16 did not, and 21 could not judge. The opinions of officials who expect emissions increases and adverse air quality effects contrast with EPA's conclusion that the exclusions would enhance the environmental protection and benefit derived from the program. In addition, EPA's economic analysis of the exclusions found that they would lead to health benefits and did not account for any potential health-related costs. However, to the extent that either exclusion would cause or exacerbate violations of health-based air quality standards, EPA's analysis would have underestimated the health effects and costs of the exclusions.
A majority of the officials said that implementing the exclusions would increase their administrative burden (27 of 44) and create uncertainty for agencies in determining when a facility's activities can be excluded (28 of 44). These opinions contrast with EPA's conclusion in the analysis noted above that the exclusions would provide greater regulatory certainty. Several officials expressed concerns about the complex accounting procedures they would need to use to determine compliance with the cost threshold mechanisms and whether modifications could be excluded from NSR permitting. For example, one official said that the accounting procedures were well beyond the expertise of the state agency, and another official described how the agency would need to hire certified public accountants to determine compliance with the exclusions.

According to key stakeholders we contacted, the proposed and final revisions to the NSR program would benefit industry by decreasing the regulatory burden on companies that modify their industrial facilities, but these stakeholders disagreed on the revisions' impact on emissions and other factors. Stakeholders representing environmental and public health groups anticipated that the revisions would mean fewer modifications will be subject to NSR's permit and control requirements, but more work for regulators as they look for alternative ways to control emissions. In contrast, stakeholders representing the industry groups asserted that the proposed and final changes clarified the NSR program, thereby making permitting easier and encouraging investment in energy efficiency projects that lower fuel consumption and emissions. As we concluded in our August 2003 report, the overall economic and environmental effects of the December 2002 rule are uncertain because of data limitations and difficulty determining how individual companies will respond to the rule.

According to the opinions of the six environmental and public health group stakeholders we contacted, as well as an association representing all of the state and local air quality agencies, the proposed and final revisions would lessen the regulatory burden on companies because, as discussed earlier, fewer modifications would trigger NSR. Under the prior rules, to obtain a permit, a company would have to submit an application and go through a public notice and comment period—a process that could take 3 months to more than 1 year. The company would also have to report periodically on its compliance with the permit. Furthermore, in cases where the modification would significantly increase emissions, the company would have to go through the time and expense of installing emission controls. As a result of the NSR revisions, however, environmental and public health stakeholders anticipate that companies would forgo the emissions reductions that would have been achieved by installing controls, thereby increasing emissions and public health risks.

As with a majority of the state air quality officials responding to our survey, nearly all of the environmental and public health group stakeholders asserted that the proposed and final revisions would create more work for state and local air quality agencies. Several of them believe that, because the revisions would result in fewer permits, they would also result in fewer recordkeeping and reporting requirements for industry. This, in turn, would make it harder for the agencies to track and monitor changes at facilities that could influence emissions.
For example, according to the association representing these agencies, the revisions would make such tracking difficult because agencies would now have to identify other sources of emissions information instead of relying on companies to report this information, as companies were previously required to do under the NSR program. We concluded in our October 2003 report that, overall, as a result of the final rule, the public may have less assurance that they will have notice of, and information about, company plans to modify facilities in ways that affect emissions, as well as less opportunity to provide input on these changes and verify they will not increase emissions. Some of the environmental and public health stakeholders also pointed out that the agencies will be forced to find programs other than the federal NSR program to control emissions so that local air quality meets the national standards. For example, areas not meeting at least one of the standards must develop a state plan showing how they will reduce emissions to comply with the standard. But with fewer modifications and facilities subject to emission controls through NSR, air quality agencies will have to look for other ways to reduce or control emissions. However, according to some environmental and public health groups, these alternative regulations and programs can be more difficult to implement because, for example, they focus on smaller sources of emissions compared with the sources subject to the federal NSR program. Therefore, to achieve the same emissions savings as they would have under NSR, the agencies will have to track emissions and pursue reductions from a greater number of sources, requiring more staff time and resources for permitting and enforcement.

Most industry stakeholders we contacted felt the proposed and final revisions would lessen, or at least not increase, their regulatory burden, similar to the opinions of the environmental and public health stakeholders. Fewer modifications would be subject to the requirements to obtain a permit and install controls. Furthermore, several industry stakeholders said their regulatory burden would decrease because the revisions clarified when NSR actually applied. Several industry stakeholders explained that before the revisions, companies were uncertain as to whether some of their modifications triggered NSR. For example, one stakeholder said that the existing routine maintenance exclusion was arbitrary and unclear. As a result, to avoid enforcement actions and penalties, companies would opt not to make the modifications.

On the other hand, the industry stakeholders disagreed with the environmental and public health stakeholders on a number of other potential impacts. First, all of the industry stakeholders believed the changes would encourage companies to invest in energy efficiency projects they avoided in the past because of NSR requirements. For example, as we discussed in our October 2003 report, under the prior program, to determine if a modification would increase emissions enough to trigger NSR, companies generally had to assume that facilities would run at the maximum capacity or the highest capacity allowed by the existing NSR permit after making the modification. A company had to make this assumption even if the facility had not run at this level in the past or was not expected to in the future. Industry stakeholders argued that having to assume this potential increase in emissions biased the test and overstated the true emissions impact of a project.
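A minimal numerical sketch may clarify the two applicability tests at issue: the prior "actual-to-potential" comparison described above and the revised "actual-to-projected-actual" comparison that, as discussed next, the December 2002 rule adopted. The tonnages and the significance threshold below are hypothetical stand-ins; in the regulations, significance levels vary by pollutant:

```python
# Hypothetical comparison of the prior and revised NSR applicability
# tests discussed in the surrounding text. All tons-per-year figures
# and the significance threshold are invented for illustration.

SIGNIFICANCE = 40.0   # hypothetical "significant increase" threshold, tons/yr

baseline_actual  = 500.0   # actual emissions before the modification
potential_after  = 560.0   # emissions if the facility ran at maximum
                           # permitted capacity after the change
projected_actual = 510.0   # emissions at the activity level the company
                           # actually projects after the change

# Prior test: baseline actual vs. post-change potential emissions.
prior_increase = potential_after - baseline_actual       # 60 tons/yr
# Revised test: baseline actual vs. projected actual emissions.
revised_increase = projected_actual - baseline_actual    # 10 tons/yr

for name, increase in [("prior", prior_increase), ("revised", revised_increase)]:
    verdict = "triggers NSR" if increase >= SIGNIFICANCE else "avoids NSR"
    print(f"{name} test: +{increase:.0f} tons/yr -> {verdict}")
```

In the sketch, the same physical project triggers review under the prior test but not under the revised one. This is the change that industry stakeholders credit with removing a bias against efficiency projects and that others expect to move many modifications outside NSR's permit and control requirements.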
One industry representative gave the example of a proposed modification that had the potential to save the company an estimated $300,000 per year and reduce emissions, but that the company did not pursue because the emissions test predicted it would have triggered costly NSR controls. In the December 2002 final rule, EPA revised the method of calculating the expected emissions so a company can project the actual activity level—as opposed to the maximum potential activity level—after the facility change and estimate the resulting emissions accordingly. Therefore, according to some of these stakeholders, such energy efficiency projects most likely will not trigger NSR requirements under the revised rule and will be less costly for companies to pursue. The industry stakeholders believed that, with the increased energy efficiency investments, facilities would use less fuel for the same levels of production. However, as we discussed in our August 2003 report, industrial facilities' future production levels and air pollutant emissions may fluctuate in response to changing economic conditions and other factors. In that report, we also noted that the executive director of one industry trade association stated that it would make economic sense to increase production at more efficient facilities. The representative "could not imagine a utility spending money on extra capacity and then not utilizing it." As a result, some environmental groups that disagreed with industry were concerned that, if facilities become more efficient, they will actually cause a net increase in overall emissions and health risks. On the other hand, according to an EPA program manager, the agency expected that, if a company increased production at its more efficient facilities, it could decrease production at its less efficient facilities, more than offsetting any emissions impact. However, the manager said that the agency had not analyzed the air pollution impacts of such post-project shifts in production to support this viewpoint.

The industry stakeholders we contacted believed the increased projects and lower emissions they anticipate will result more from the revisions included in the December 2002 final rule than from the October 2003 rule. This is because, according to some stakeholders, the latter rule simply reinforces how companies had already been interpreting NSR in the past to determine if a modification was a routine replacement of equipment and, therefore, exempt from NSR requirements. However, the October 2003 rule specifies a 20 percent cost threshold, below which a company could treat certain changes as routine replacement, exempt from NSR.

Also, in contrast with the environmental and public health groups, some of the industry stakeholders argued that even with the NSR exemptions, companies will still have to monitor facility emissions and install emission control technologies because of other clean air regulations. For example, under the acid rain program, some utilities have had to control their facilities' sulfur dioxide and nitrogen oxide emissions. Under the air toxics program, some companies have had to install controls to reduce facility emissions of hazardous air pollutants.
In addition, the stakeholders maintained that state and local air quality agencies will still have to monitor any project that could increase emissions to ensure compliance with these programs, and the agencies may have their own requirements governing facility modifications. While this is true, we noted in our October 2003 report that the scope of the state and local program requirements varies widely. Finally, most of the industry stakeholders, unlike the environmental and health stakeholders, expected a decrease in the state and local air quality agencies' workload as a result of the proposed and final revisions. The stakeholders claim the revisions will streamline agencies' monitoring, minimize the time they spend determining if companies have properly complied with NSR, and ease the permitting process.

While the stakeholders based their views primarily on professional opinion, one cited a DOE analysis and another cited an EPA analysis as support for their views. The DOE analysis included an estimate of emissions if all coal-fired power plants installed pollution controls, while the EPA analysis focused on the possible emissions consequences of the equipment replacement exclusion. Neither analysis comprehensively assessed the impacts of the NSR revisions. One environmental representative compared the emissions levels in the DOE analysis with those in the EPA analysis to support the assertion that the exclusions would represent a rollback from the current program because the levels in the DOE analysis were lower than EPA's. However, the DOE analysis is not useful as a benchmark for assessing the effects of EPA's revisions because, under the NSR program, facilities have to install the best available controls only when making major modifications. In addition, this analysis was not specifically related to EPA's NSR revisions. An industry stakeholder cited the above-mentioned EPA analysis of the equipment replacement rule to support the assertion that the exclusions would decrease emissions. However, the EPA analysis was limited in scope—it considered only power plants (the largest emitting category of facilities) and only two pollutants, nitrogen oxides and sulfur dioxide. Another related analysis performed by an EPA contractor included six additional industries and was based on case studies. Finally, as we concluded in our August 2003 report, the overall economic and environmental effects of the December 2002 rule are uncertain because of data limitations and difficulty determining how industrial companies will respond to the rule.

EPA's assessments of the December 2002 and October 2003 NSR revisions concluded that the rules would provide industry with greater flexibility to modify facilities without having to obtain NSR permits or, in some cases, install pollution controls, while enhancing the program's environmental benefits. The survey responses indicate that most state program managers agreed with EPA's conclusion that the revisions would enhance flexibility for industry. However, a majority of state program managers did not agree with EPA's conclusion that the increased flexibility would lead to less pollution, raising questions about the final and proposed revisions' environmental effects. Specifically, most of the state officials believed that the December 2002 rule and the not-yet-finalized annual maintenance allowance exclusion would increase emissions, and half believed the equipment replacement provision would have this effect.
Furthermore, a number of the officials who believe emissions increases will occur thought that these anticipated increases would cause violations of health-based air quality standards or delay the attainment of the standards in areas that already have poor air quality, potentially creating or exacerbating health risks. Environmental groups agreed with the state program managers who expressed concerns, but other state officials and industry stakeholders maintained the revisions would have positive environmental effects. Little data currently exist to resolve these competing viewpoints. We therefore recommended in our August 2003 report that EPA determine what data are available to monitor the December 2002 rule's effects and use the monitoring results to determine what effects the rule has created. For the same reason, if the equipment replacement rule eventually takes effect—pending the resolution of legal challenges—it will be necessary to monitor its implementation to determine its environmental and other effects. In addition, more EPA assistance for states would help them implement the new rules and lessen their administrative burden.

To ensure that state and local air quality agencies are adequately equipped to implement the new NSR rules, as required by EPA, and that the rules do not have unintended effects on emissions and public health, we recommend that the EPA Administrator (1) provide state and local air quality agencies with assistance in implementing the December 2002 rule, (2) pending the court's decision on the equipment replacement rule, work with state and local air quality agencies to identify the data that the agency would need to monitor the effects of this rule and use the monitoring results to identify necessary changes, and (3) consider the state and stakeholder concerns about emissions and workload impacts that we identified before deciding whether to issue a final rule on the second proposed exclusion, the annual maintenance allowance exclusion.

We provided EPA with a draft of this report for review. The Assistant Administrator for Air and Radiation said that the agency has concerns about our methodology and certain of our findings. Nevertheless, EPA said that our recommendations, on their face, make sense, and that the agency already has plans to take these actions. Specifically, EPA asserted that GAO (1) in some instances, used the opinions expressed in the survey responses—which EPA believes may not have been grounded in a correct understanding of the revisions—as fact and used them to draw conclusions and make recommendations about the NSR program, (2) did not carry out its work in a way that assured balance and objectivity, (3) used a skewed survey sample, and (4) should have evaluated whether the survey results were consistent with the facts cited in EPA's analyses of the revisions' effects. GAO disagrees with each of EPA's assertions. First, as we previously reported and EPA acknowledged, there are limited data available to assess the effects of the NSR revisions. Therefore, consistent with the review's objectives, we solicited the opinions of key stakeholders on the revisions' effects and clearly presented them as opinions in both the title and body of the report. When, in this context of scarce data, many state program managers responsible for program implementation express concerns about the revisions' adverse effects, we believe it would be prudent to take these concerns seriously.
As such, GAO makes a number of recommendations to (1) collect data on and monitor the revisions' actual impacts, (2) consider stakeholders' opinions before further revising the NSR program, and (3) provide state and local agencies assistance in implementing the revisions. Taking this latter action will help to address EPA's concerns that the respondents may not have fully understood the revisions.

Second, we developed the survey using standard survey research principles and took steps to minimize question bias, including conducting several pretests, asking respondents about both the positive and negative effects of the revisions, and subjecting the survey to a thorough review by a GAO survey specialist not involved in its development. To ensure the independence of our efforts, we do not routinely seek the subject agency's review of our survey instruments. Nonetheless, we worked with NSR program managers within EPA to understand how the revisions would work in practice as well as their potential effects and used this information to design the survey questions.

Third, GAO surveyed the universe of state program managers because we believe they are in the most informed position to determine the revisions' impacts on their programs and workloads. Furthermore, in the survey's instructions we asked the managers, when answering the questions, to coordinate with the officials within their agencies as they deemed necessary and appropriate. As such, we relied on each state agency's own procedures for completing and reviewing the survey responses. In addition, we surveyed select stakeholders representing environmental, health, and industry interests. Because of the large number of other affected stakeholders, it was not feasible to survey the universe. Instead, we surveyed 30 organizations representing diverse perspectives and chose them because they were involved in national NSR policy decisions. A number of these groups represent the views of large numbers of industrial companies or have a national membership base.

Finally, GAO believes EPA's assertion that we should have evaluated whether the opinions of state officials responsible for program implementation were consistent with "facts" cited in EPA's analyses is disingenuous. As we point out in our previous and current work, these "facts" are largely assertions based on EPA's limited analysis of the revisions' effects. We therefore did not use the agency's analysis as a benchmark to evaluate the survey responses. We further believe that the state program managers provided plausible explanations for why their views disagreed with those asserted by EPA.

Appendix III contains the text of EPA's letter along with our detailed responses to the issues raised. EPA also provided a number of technical comments, which we have incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 10 days from the report date. At that time, we will send copies of the report to the EPA Administrator and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or stephensonJ@gao.gov. Key contributors to this report are listed in appendix IV.
This summary provides an overview of survey responses completed by 45 local air quality agencies—those with independent authority to adopt rules and write, review, or issue New Source Review (NSR) permits. Seventeen of the 45 local air quality agencies are in California. The rest are scattered across the other 14 states that have local air quality agencies. (See appendix II, table 3.) Detailed local survey results are available at: http://www.gao.gov/special.pub/gao-04-337sp.

Similar to the state officials, more than half of the local officials expect that the December 2002 rule will provide industry with greater flexibility to make facility changes, but also believe that the rule will result in increased emissions. However, in contrast to the state officials, fewer than half of the local officials anticipate that their workload will increase as a result of the rule. For some questions, the number of officials who provided answers varied.

In terms of positive effects, 28 of 45 local officials believe the rule will result in greater flexibility for industry to make facility changes, similar to state officials. Also, 24 of the local officials believe that the rule will benefit industry by enabling companies to avoid NSR permitting. In addition, 10 of the officials identified greater opportunities for industry to pursue energy efficiency projects as one of the positive effects of the rule. On the other hand, only 2 of the local officials believe the rule will positively affect industry by providing companies with greater certainty as to when NSR applies to a facility modification. Twenty of the officials believe that regulatory uncertainty is one of the rule's primary negative effects.

As with the state officials, a majority (24 of 44) of local officials expect the rule to increase emissions, while 10 expect no change and 9 were unable to judge. More than half of the officials believe the revised methods for calculating a facility's historical "baseline" emissions (25 of 44) and estimating emission changes from a modification (23 of 44) will lead to increased emissions. Fewer than half expect the remaining provisions to increase emissions. Twenty-two of the officials anticipate that the rule will allow facilities built prior to 1977—which did not have to install controls until they made a modification that significantly increased emissions—to increase total emissions because they can continue to postpone installing controls, while 11 anticipate no change in emissions from such facilities. Only 6 officials do not believe the rule will affect their ability to meet or continue to meet air quality standards. On the other hand, 16 expect that they can use other clean air regulations to meet standards, and 11 believe that, taking into consideration the impacts of the final rule, these other regulations will not help them meet or continue to meet the standards.

Although a greater percentage of local officials (21-24 percent) than state officials (6-7 percent) anticipate adopting or maintaining more stringent regulations than EPA, fewer local officials (22 of 45) than state officials (30 of 44) expect the rule to increase their workload. In addition, in contrast to state officials, 5 of 43 local officials do not anticipate needing additional staff to adopt the final rule and obtain EPA approval. All of the local officials said they would like some type of assistance from EPA, such as implementation workbooks and training courses.
Similar to state officials, at least half of the local officials expected the two exclusions for routine maintenance, repair, and replacement activities to provide industry greater flexibility to make changes, but unlike state officials, fewer than half expected the exclusions would increase emissions or their administrative burden. Overall, 22 of 45 officials said that they opposed the equipment replacement exclusion and 16 supported it, while 28 opposed the annual maintenance allowance exclusion and 5 supported it. More than half (24 of 45) of the local officials believed the exclusions would provide industry with greater flexibility to make facility changes, as did half of the state officials. Twenty-seven believed that not having to obtain an NSR permit would be one of the exclusions' most positive benefits for industry. Thirteen of 43 local officials expected the exclusions to positively affect a company's ability to pursue energy efficiency projects, while 16 expected no change. Eighteen of 45 officials expected the equipment replacement exclusion to increase emissions, while 14 expected no change, and 10 were unable to judge. Twenty-one of 45 officials expected the annual maintenance allowance exclusion to increase emissions, while 8 expected no change, and 15 were unable to judge. In addition, 21 of the 45 officials anticipated that facilities built prior to 1977 would increase emissions as a result of the exclusions, while 11 expected no change and 10 were unable to judge. These figures are similar to the state responses; however, compared with state officials, fewer local officials expected the exclusions to result in emissions changes significant enough to exacerbate air quality problems in areas that do not meet standards or cause new problems in areas that currently meet the standards. Unlike state officials, fewer than half (22 of 45) of the local officials believed the exclusions would increase their administrative burden.

The Ranking Minority Member of the Senate Environment and Public Works Committee and Senator Lieberman asked us to obtain the views of a number of key stakeholders about the revisions' potential impacts. More specifically, they asked us to obtain (1) state air quality agency officials' views about the impacts of the December 2002 final NSR rule on industry, emissions, and agencies' workloads; (2) state air quality agency officials' views about the impacts of the two December 2002 proposed NSR exclusions on industry, emissions, and agencies' workloads; and (3) environmental, health, and industry organizations' views on the impacts of all the NSR revisions. In addition, we determined selected local air quality agencies' views on the revisions' potential effects.

To address the first two objectives and gather information from local agencies, we conducted an Internet-based survey of 50 state air quality agencies, the District of Columbia, and the 71 local air quality agencies that have responsibility for implementing the New Source Review (NSR) regulations and could potentially issue NSR permits.
To ensure that we obtained information from those who were most involved in the day-to-day administration of the NSR program and therefore in the best position to judge the revisions' potential impacts, we worked with the 10 EPA regional offices and obtained information from the Internet Web site of the Association of State and Territorial Air Pollution Program Administrators (STAPPA) and the Association of Local Air Pollution Control Officials (ALAPCO) to identify the NSR program manager for each agency. The 15 states with local air quality agencies that issue NSR permits are listed in table 4 below. California is the only state with local agencies covering the entire state. For the other states, the local agencies are typically located in larger metropolitan areas with air quality problems. To present a national perspective on the issues faced by air quality officials, we focused on the responses from states and highlighted areas where local agencies had differing points of view.

The survey of state and local air quality officials was developed between December 2002 and April 2003. It includes questions to determine respondents' views on the NSR program prior to the revisions, as well as the anticipated effects the proposed and final revisions would likely have on their programs. Because we administered the survey to all of the state air quality agencies and local agencies that have responsibility for implementing the NSR regulations and could potentially issue NSR permits, our results are not subject to sampling error. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents in answering a question, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in the development of the survey, the collection of data, and the editing and analysis of data for the purpose of minimizing such nonsampling errors.

To reduce nonsampling error, we had cognizant officials from STAPPA and ALAPCO review the survey to make sure that respondents could clearly comprehend the questions and to estimate the burden it would place on them. We also pretested the survey with three states and one local agency to ensure that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the survey did not place an undue burden on agency officials, and (4) the survey was comprehensive and unbiased. In selecting the pretest sites, we sought to include agencies in states that supported the rules as well as those that did not. We also considered major subgroups such as states with and without local permitting authorities and locations across a wide geographical area. To determine what concerns, if any, those states involved in litigation against the Environmental Protection Agency (EPA) regarding the NSR reforms would have in completing the survey, we had an official from the New York State Attorney General's Office (who is involved in the litigation) review the survey. We asked the official to identify those questions that states might refuse to answer because of litigation concerns. In the end, four states involved in the litigation did not respond to the survey. We made changes to the content and format of the final questionnaire based on the pretests.
We conducted the survey using self-administered electronic questionnaires posted to GAO's Web site on the Internet. We sent e-mail notifications to alert the appropriate officials of the forthcoming questionnaire. These were followed by another e-mail, containing unique passwords and usernames, that notified officials that the survey had been activated and enabled them to access and complete it. The questionnaire was available on the Web until July 7, 2003. We received responses from 44 states and 60 local agencies (each agency could provide only one response). In summarizing the survey data, the District of Columbia was included in the state responses. However, 15 of the local agencies that responded told us that they do not have the authority to adopt their own NSR regulations, or they do not write or issue NSR permits. Therefore, they were not eligible respondents and did not provide responses to our more detailed questions. Thus, 45 local agencies provided complete responses. The overall response rate was 83 percent. We edited all completed surveys for consistency and, if necessary, contacted respondents to clarify responses. Table 5 below lists the states that responded, by EPA region, as well as those that did not respond (listed in parentheses). It is important to note that four states in EPA Region 1 declined to respond so as not to disclose information about their ongoing NSR-related litigation.

At the time we conducted our survey, we asked state and local officials about the impacts of a proposed exclusion from NSR for equipment replacement activities. Because EPA finalized this exclusion as a rule after we completed our survey, we took steps to determine whether the officials' views on the proposal also applied to the final rule. For example, in December 2003, the national association representing state and local air pollution control officials told us that, based on their ongoing dialogue with state and local officials, the survey responses on the proposed exclusion were consistent with state and local officials' views on the final rule. In addition, an EPA manager for the NSR program said that he does not anticipate that the officials who responded to our survey would have changed their opinions on this exclusion in the time since they responded to the survey, even though it was not yet in final form at the time they commented.

To address the third objective, we identified key stakeholders involved in national-level NSR policy decisions and sent them a survey via e-mail soliciting their responses to a number of questions about the proposed and final NSR revisions' potential impacts on emissions, industry investments, and air quality agencies' workloads. We distributed the survey to 30 organizations representing diverse industry and environmental interests. We used several criteria to select stakeholders for comment. For example, because of the large number of stakeholders involved in NSR issues at the national, state, and local level, we focused exclusively on groups that have a national perspective, including some law firms that represent several large industries.
The stakeholders we selected included the following: groups identified by knowledgeable EPA officials as key stakeholders; members of EPA's Permits/NSR/Toxics Subcommittee within its Clean Air Act Advisory Council (CAAAC) that have a national scope (CAAAC is a senior-level policy committee consisting of approximately 60 senior managers and experts representing state and local government, environmental and public interest groups, academic institutions, unions, trade associations, utilities, industry, and other experts); national-level groups that have testified in Congress on NSR and Clean Air Act issues over the last several years; national-level groups that commented on EPA's NSR proposals; and trade associations representing those industries identified by EPA as most affected by NSR.

We again took steps in the design, data collection, and analysis phases of the survey to minimize nonsampling and data processing errors, including pretesting of the survey questions, follow-up with those that did not respond promptly, and independent verification of all survey responses entered into an analysis database. We conducted two pretests of the survey and made changes to the content and format of the final questionnaire based on the pretests. The survey was sent to the key stakeholders on July 2, 2003, and was available until July 18, 2003. Of the 30 stakeholders contacted, the following 14 responded to this survey: American Forest & Paper Association; American Lung Association; American Petroleum Institute; Clean Air Task Force; Council of Industrial Boiler Owners; Energy and Innovation Center, Environmental Law Institute; Hogan & Hartson LLP; Morgan, Lewis & Bockius LLP; National Environmental Development Association's Clean Air Regulatory Project; National Petrochemical & Refiners Association; Natural Resources Defense Council; and STAPPA/ALAPCO. We edited all completed surveys for consistency and, if necessary, contacted respondents to clarify responses. For all of these objectives, we worked with cognizant EPA officials, including the agency's NSR program manager. Detailed survey results are available at: http://www.gao.gov/special.pubs/gao-04-337sp. We conducted our review from September 2002 through January 2004 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the letter from the Environmental Protection Agency dated January 23, 2004.

1. GAO does not agree with EPA's assertions that we, in some instances, used the opinions expressed in the survey results as facts. Consistent with the review's objectives, this report carefully characterizes the survey results as opinions. To this end, the report's title clearly points out that we are presenting stakeholders' views. In addition, based on our two earlier reports on the revisions, we determined that, at best, limited data exist on the effects of the prior NSR program or the potential effects of the revisions. In fact, as we found in our August 2003 report on the analytical basis for EPA's December 2002 rule, EPA itself relied primarily on the professional judgment of agency staff and comments received on earlier NSR reform proposals, rather than a comprehensive quantitative analysis of the rule's possible effects, in initially justifying the rule. That report also described limitations with the agency's subsequent analysis of the rule's environmental effects.
Because of our earlier findings about data limitations, we believe it useful and entirely appropriate to supplement the available information with the informed opinions of those most involved in the day-to-day administration of NSR programs. GAO also disagrees with EPA's characterization that we improperly used stakeholder opinions to draw conclusions and make recommendations about the NSR revisions. While recognizing that the results were based on opinion, it is important to point out that these were the opinions of those on the front lines of program implementation. In this case, these informed opinions raise important questions about the revisions' effects. Because of these questions, and in light of the limited hard analytical data on the revisions' effects, GAO recommends that EPA collect data on the revisions' actual impacts. We made this recommendation in our August 2003 report regarding the December 2002 rule and again in this report regarding the equipment replacement exclusion. We further recommend that EPA consider these informed opinions before further revising the NSR program. EPA also questioned whether the respondents' opinions were grounded in a correct understanding of the rules' provisions. We believe that our recommendation that EPA provide state and local air quality agencies with assistance in implementing the revisions will help to address this concern. Despite its concerns, EPA said that these three recommendations, on their face, make sense and that the agency plans to take these actions.

2. GAO disagrees with EPA's assertion that the way we carried out our work did not assure balance and objectivity. We developed the survey using standard survey research principles. This included taking steps to minimize question bias, asking respondents about both the positive and negative effects of the revisions, providing respondents with a range of answers to each question (including "no change" or "no effect"), and assessing each question for bias and problematic wording during an extensive pretesting and review process. We also sought to eliminate bias and problematic wording by subjecting the survey to a thorough review by a GAO survey specialist who was not involved in its development. Regarding the external review of our survey, we point out in the objectives, scope, and methodology section that we asked the trade association that represents state and local air quality control agencies (i.e., the officials we surveyed) to help ensure that their members could clearly comprehend the questions and estimate the burden it would place on them. We also asked a representative of the New York Attorney General's Office to review the survey specifically to gauge whether they thought those states involved in lawsuits with EPA over the reforms would be concerned about completing the survey. Finally, to ensure the independence of our efforts, we do not routinely seek the subject agency's review of our survey instruments. Nonetheless, we held discussions with staff in EPA's Office of Air Quality Planning and Standards, Office of Enforcement, and regional offices to make sure we understood the technical nature of the revisions when developing the survey. In preparing our two prior NSR reports, we also worked closely with EPA's managers of the NSR program to understand the agency's assessment of how the revisions would work in practice, as well as the potential effects. We used all of this information to design the survey questions.
3. GAO disagrees with EPA’s assertion that our survey sample was skewed. GAO surveyed various stakeholders, including state and local officials, as well as industry and environmental groups, and this report presents a range of views on the possible effects of the NSR revisions. We describe our methodology for selecting these stakeholders in the report’s objectives, scope, and methodology section. More specifically, with respect to our survey of state agency officials, we point out that we did not survey a sample, but the universe of state NSR program managers. We sent the survey to the manager of each state’s NSR program within the state environmental agency (instead of the agency head) because these program managers are responsible for day-to-day program implementation and hence are in the most informed position to determine the revisions’ impacts on their programs and workloads. Furthermore, in the survey’s instructions we asked the managers, when answering the questions, to coordinate with the officials within their agencies as they deemed necessary and appropriate. As such, we relied on each state agency’s own procedures for completing and reviewing the survey responses. In several cases, in fact, the program managers told us the reason they needed additional time to submit their responses to us was that the responses were under review by others within their agencies. EPA also questioned why we surveyed every state and local agency but only a handful of environmental groups and industry trade associations, and no individual industry officials. GAO gathered more detailed information from state and local agencies than from other stakeholders because these agencies generally implement the regulations and we were asked to obtain information on how this implementation would affect agencies’ programs and workload. Because of the large number of other affected stakeholders, it was not feasible to survey the universe. Instead, we surveyed key stakeholders that had been involved in national NSR policy decisions, which included 30 organizations, in order to obtain diverse industry and environmental perspectives. Of the 30 organizations we surveyed, 14 responded, and 8 of the respondents represented industry. A number of the organizations that responded represent large numbers of industrial companies, including the American Forest & Paper Association and the American Petroleum Institute. Likewise, several of the environmental and health groups represent a national membership base, including the American Lung Association and the Natural Resources Defense Council. GAO also disagrees with EPA’s assertion that we focused disproportionately on the state officials’ unfavorable opinions of the rule. In presenting the state survey results, we generally listed the total number of officials responding to a question and information on the distribution of their responses. We then provided more detailed information about the majority’s opinion for each question, consistent with standard survey principles. In most cases, it turned out that the majority of respondents to our questions held the view that the revisions would have an adverse impact on emissions and their workload, contrary to EPA’s conclusions about the revisions’ impacts. We were very careful, however, to also discuss the number of respondents who held the minority view on a particular topic.
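To make the tabulation convention described in comment 3 concrete, the following minimal Python sketch tallies one survey question and reports the number of respondents and the distribution of their answers; the answer categories and responses are hypothetical, not actual survey data.

```python
# Minimal sketch: tally one survey question and report the number of
# officials responding plus the distribution of their answers.
# The answer categories and responses below are hypothetical.
from collections import Counter

responses = ["increase", "increase", "decrease", "no change",
             "increase", "don't know", "increase"]

tally = Counter(responses)
print(f"{len(responses)} officials responded")
for answer, count in tally.most_common():
    print(f"  {answer}: {count}")
```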
4. EPA’s letter suggests that GAO should have evaluated whether the survey results were consistent with the “facts,” asserts that many of the survey responses are not, and cites its own analysis of the revisions’ emissions impacts as factual support for its position. EPA also cites two examples of cases in which the agency believes respondents’ opinions conflict with fact. EPA’s comments related to the “facts,” however, largely represent references to its own assertions. First, as discussed previously, we have identified limitations with EPA’s analysis of the revisions’ impacts. As we stated in our August 2003 report, a senior EPA economist said that uncertainty about the extent to which companies might elect to use the NSR alternatives in the December 2002 rule limited the agency’s ability to estimate the rule’s impacts. For these reasons, we did not use EPA’s analysis as a benchmark to evaluate the survey responses. Again, in this context, the opinions of key stakeholders, especially those responsible for implementing the regulations, provide an important perspective appropriately considered by congressional decisionmakers. Second, as to EPA’s examples of opinions conflicting with facts, the agency suggests that the opinions of those who expect the December 2002 rule to delay attainment of air quality standards are incorrect. In its first example, EPA states that (1) even if emissions increased, the increase would be small and dwarfed by decreases coming from other air quality regulations, and (2) facilities affected by the revisions either already have emissions controls or would have to have them to qualify for many of the rule’s exemptions, such as those for plantwide emissions limits, clean units, and pollution control projects. Regarding the first point, as we report, 7 officials agreed with EPA and said that the rule would not impede their ability to meet or maintain air quality standards. Another 14 expect they will offset the anticipated increases using other air quality regulations. A minority of the respondents (13) said that, despite these other regulations, they would still have difficulty meeting or maintaining air quality standards. Thus, these 13 officials had already taken into account the other air quality regulations EPA cites. Regarding the second point, EPA did not mention the rule’s key exemption—the revised method for determining facilities’ past emissions—which does not require that facilities have emissions controls and was the provision cited most often by the state officials as likely to lead to emissions increases. While EPA maintains that this provision will not have a significant environmental impact, agency managers for the NSR program acknowledged that EPA’s analysis justifying its position was not based on a statistically valid sample of affected facilities. Ultimately, many stakeholders disagreed with EPA’s assertions. Regarding the second example, GAO agrees that these opinions conflict with EPA’s information but also believes the state officials provided plausible explanations for why they expect their burden to increase even though they expect to issue fewer permits. As we point out in the report, some of the officials said that they find the December 2002 rule confusing and complicated and that it leads to more uncertainty about the NSR program—all of which can contribute to agencies’ workloads.
While EPA asserts that the rule will lead to an overall reduction in workload for agencies based on its experience with six states that have used flexible permitting systems, our survey results found that four of these same states said the opposite—they expect the rule to increase their workload (one had not assessed such impacts, and the other did not know what the effect would be). Officials from the four states said they would spend more time drafting laws, regulations, and guidance, as well as processing permits, explaining the rule to industry, and training staff. Furthermore, officials from the four states said that they expected more work associated with the rule’s revised method for determining facilities’ past emissions. Therefore, GAO disagrees with EPA’s assertion that the experience of these states shows that the rule will reduce agencies’ workloads or that the survey results are contradictory. In addition to the individuals named above, Ulana Bihun, Michael Hix, Jeffrey Larson, Lisa Turner, and Laura Yannayon made key contributions to this report. Nancy Crothers, Bob DeRoy, Tim Guinane, Karen Keegan, Judy Pagano, Minette Richardson, and Monica Wolford also made important contributions.
Environmental Protection Agency (EPA) revisions to the New Source Review (NSR) program to control industrial emissions have drawn attention from state and local agencies that implement the program, as well as industry and environmental and health groups. Under the revisions, companies may not have to install pollution controls when making some facility changes. GAO was asked to obtain the opinions of state air quality officials and other stakeholders on the impact of both the final and proposed revisions EPA issued in December 2002. GAO obtained survey responses from NSR program managers in 44 states and certain localities and contacted six environmental and health groups and eight industry groups active in the NSR debate. Survey details are available in GAO-04-337SP. A majority (29 of 44) of the state officials responding to GAO's survey expected the rule EPA finalized in December 2002 to provide industry with greater flexibility to make some facility changes without having to obtain NSR permits or, in some cases, install pollution controls. However, 27 officials expected the rule to increase emissions of harmful air pollutants, thereby hindering areas' efforts to meet air quality standards and potentially creating or exacerbating public health risks. This concern contrasts with EPA's assessment that the rule will decrease emissions and maintain the current level of environmental protection. Furthermore, 30 of the officials expected their agency's workload would increase as they adopt the rule and incorporate it into their own programs. Almost all of the 44 officials would like EPA assistance with implementation. Similarly, 28 of the 42 officials responding expected the two NSR revisions as proposed in December 2002--intended to provide more certainty about when facility changes are considered routine maintenance, repair, and replacement activities and can be excluded from NSR requirements--to decrease the number of permits companies would have to obtain, thereby giving them the flexibility to make some changes without installing controls. However, 21 and 26 officials, respectively, thought that the two exclusions would increase emissions; only relatively few thought the exclusions would decrease emissions as EPA's analysis had predicted. About a third of the officials thought the exclusions would exacerbate air quality problems in areas that do not meet standards, but fewer officials thought the exclusions would cause problems in areas that currently meet standards. Finally, 27 thought that implementing the two exclusions would increase states' administrative burden. The other stakeholder groups GAO contacted agreed that the final rule and two exclusions would decrease the regulatory burden on companies that modify their facilities, but disagreed about the impact on emissions and air quality agencies' workload. The six environmental and public health officials expected that because companies would not have to obtain as many NSR permits or install as many controls when modifying facilities, emissions would rise and state and local agencies' workloads would increase as agencies sought alternative ways to meet standards. In contrast, the eight industry officials expected the revisions to encourage companies to invest in energy-efficient projects they had avoided under the prior program, which the officials believed would lower fuel use and emissions. The officials also expected that fewer permits would lead to decreases in agencies' workloads.
Determining the revisions' likely impacts is difficult because, as discussed in GAO's August 2003 report on EPA's analytical basis for the final rule (GAO-03-947), little data exist to confirm stakeholders' opinions. In that report, GAO recommended that EPA work with state and local agencies to obtain data to assess the rule's emissions impact and correct any adverse effects.
Dioxins persist for a long time in the environment because they do not dissolve in water and are relatively immobile in soil and sediment. When animals consume plants, feed, and water contaminated with dioxins, the dioxins accumulate in the animals’ fatty tissue. Similarly, when humans consume these animals, the dioxins then accumulate in human fatty tissue. According to EPA, because dioxins also persist in the body for years, recent significant reductions in dioxin emissions into the air are unlikely to reduce human health risks in the near term. While EPA estimates that most exposure to dioxins occurs from eating commonly consumed foods, the draft reassessment report also estimates that limited exposure to dioxins results from breathing air containing trace amounts of dioxins; inadvertently ingesting soil containing dioxins; and absorbing through the skin minute levels of dioxins present in the soil. Some people may experience higher exposure levels than the general population as a result of food contamination incidents; workplace exposures; industrial accidents; or consuming unusually high levels of fish, meat, or dairy products. When calculating human exposures, dioxins are measured in picograms—that is, trillionths (0.000000000001) of a gram. Highly sophisticated measurement techniques and technologies are required to test foods for the presence of the 29 dioxins identified as having toxic effects. The several hundred known dioxin compounds can be placed in one of three closely related families: polychlorinated dibenzo-p-dioxins (CDD), polychlorinated dibenzofurans (CDF), and polychlorinated biphenyls (PCBs). CDDs and CDFs are byproducts of combustion and some industrial processes. According to EPA, U.S. emissions of CDDs and CDFs into the environment declined by 75 percent between 1987 and 1995, primarily as a result of reductions in emissions from municipal and medical waste incinerators. Some PCBs share certain characteristics with CDDs and CDFs and therefore are identified as “dioxin-like.” PCBs were at one time manufactured for use in products such as lubricants and industrial transformers but have not been made in the United States since 1977. However, because dioxins break down so slowly, past emissions remain in the environment for years—even decades—before they diminish. Consequently, a large part of humans’ current exposure to dioxins is due to releases of dioxins that were stored in soil and sediment, and to a lesser extent in vegetation and the atmosphere. These sources are called “reservoir sources.” EPA believes that with the reduction in current emissions from combustion and incineration, these reservoir sources have taken on more significance. According to EPA, dioxins always occur in the environment and in humans as complex mixtures of individual compounds. However, the complex nature of the dioxin mixtures to which people are exposed (through foods or other sources) complicates evaluation of the health risks such mixtures might pose. Scientists therefore developed the concept of toxic equivalency factors (TEFs) to facilitate risk assessment of exposure to these mixtures. Because TCDD is the best-understood dioxin, it is used as a frame of reference for estimating the toxicity of the other dioxins, and its TEF is set at 1.0. Only 1 of the other 28 dioxins included in EPA’s reassessment has a TEF of 1.0; most of the others have TEFs of 0.1 or less, meaning that they are considered less toxic to humans than TCDD.
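A minimal Python sketch of the TEF arithmetic follows. The TEF values shown are drawn from the 1997 WHO revisions used in the reassessment; the measured concentrations in the mixture are invented for illustration only.

```python
# TEF weighting: each congener's measured concentration (pg per gram) is
# multiplied by its toxic equivalency factor to express it in
# TCDD-equivalents, and the products are summed. The TEFs below follow
# the 1997 WHO values; the concentrations are hypothetical.
tefs = {
    "TCDD": 1.0,             # reference congener, TEF fixed at 1.0
    "1,2,3,7,8-PeCDD": 1.0,  # the only other congener with a TEF of 1.0
    "2,3,7,8-TCDF": 0.1,
    "PCB-126": 0.1,
    "OCDD": 0.0001,
}
mixture_pg_per_g = {"TCDD": 0.02, "1,2,3,7,8-PeCDD": 0.04,
                    "2,3,7,8-TCDF": 0.30, "PCB-126": 1.00, "OCDD": 50.0}

teq = sum(conc * tefs[congener]
          for congener, conc in mixture_pg_per_g.items())
print(f"toxic equivalence of the mixture: {teq:.3f} pg TEQ per gram")  # 0.195
```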
International experts review and periodically update the TEFs based on new data. For its reassessment of dioxins, EPA used the latest revisions, which were made at an expert meeting organized by the World Health Organization in 1997. Since 1991, EPA has been updating its initial 1985 report assessing the health risks of dioxins. The October 2001 draft reassessment report exceeds 3,000 pages. Part I of the draft report provides information on exposure to dioxins, including chapters on dietary intake; part II addresses health assessment methodologies and specific health effects; and part III, the Integrated Summary, highlights information in parts I and II on exposure and health effects and provides a risk characterization—a statement summarizing EPA’s assessment of the health risks associated with dioxins. In the reassessment, EPA studied the risks of cancer as well as noncancer health effects, such as neurological and reproductive impairments. Founded in 1948, the World Health Organization (WHO) is a specialized agency of the United Nations, with 191 member states. WHO’s functions include giving worldwide guidance in the field of health and setting global standards for health. WHO carries out these functions through a variety of offices and programs that often collaborate with each other and with other public health entities of WHO’s member states and nongovernmental organizations. The principal contributors to the WHO reassessments of dioxin risks that are discussed in this report have been (1) the International Agency for Research on Cancer, which coordinates and conducts both epidemiological and laboratory research into the causes of cancer; (2) the WHO European Centre for Environment and Health, which coordinates comprehensive efforts, in collaboration with the International Programme on Chemical Safety, to evaluate the possible health risks of dioxins as well as methods of prevention and control of environmental exposure of the general population to these chemicals; and (3) the Joint Expert Committee on Food Additives of the United Nations’ Food and Agriculture Organization and WHO, which provides scientific evaluations as a basis for the development of food standards by the Codex Alimentarius (food code) Commission. To estimate dietary exposure to dioxins, EPA obtained and reviewed information on (1) the dioxins present in 10 types of foods with high fat content, (2) the toxicity of individual dioxins contained in these food types, and (3) the quantities of these foods that people in the United States typically eat. EPA incorporated new studies that became available as analytical capabilities to detect dioxins in food improved during the 1990s. However, in its draft reassessment report, EPA identified a number of limitations with the food data used to estimate dietary exposure that add uncertainty to the agency’s overall estimate of current average daily dietary exposure to dioxins. For example, in some cases, the studies available on the presence of dioxins in foods were not designed to estimate national averages. Further, while EPA used the accepted method for estimating the toxicity of the dioxins found in the 10 food types, EPA and others acknowledge that the method has limitations. Finally, EPA estimated the quantities of these foods consumed using U.S. Department of Agriculture (USDA) data on U.S. adults’ food consumption based on surveys conducted between 1989 and 1991; however, EPA believes the dietary habits of Americans have changed very little over the course of the past decade.
A body of scientific research on foods in Europe, North America, and other locations indicates that the primary source of human exposure to dioxins is the dietary intake of foods, especially those containing animal fat. According to EPA’s October 2001 draft reassessment report, the average adult in the United States receives about 95 percent of his or her exposure to dioxins by eating commonly consumed foods, such as beef, pork, and poultry; fish; and dairy products. (EPA estimated small exposures to dioxins from the air and soil as well.) The 10 types of foods EPA analyzed for its reassessment are beef; pork; poultry; other meats, such as lamb and bologna; eggs; milk; dairy products, such as cheese and yogurt; freshwater fish and shellfish; marine fish and shellfish; and vegetable fat, such as corn and olive oils and margarine. These foods, only one of which is not of animal origin, are believed to be the major contributors to dietary exposure to dioxins. Even though vegetable fat products are estimated to contain low levels of dioxins, EPA included these foods in its analysis because they are high in fat and common in the American diet. EPA excluded fruits and vegetables from its analysis because data on dioxins in U.S. fruit and vegetable products, which generally contain little or no fat, are extremely limited. The existing data indicate that typically these products contain low levels of dioxins, which generally stem from residues—deposits on outer layers with little penetration to inner portions. Until recently, chemical analyses of dioxins in foods have focused primarily on two of the families of dioxins, the CDDs and CDFs, with less attention on identifying and measuring specific PCBs. The draft reassessment report includes an evaluation of PCB levels in the 10 food types. The draft report identifies estimated exposures to CDDs and CDFs together and identifies the estimated exposure to PCBs separately. Among other things, this separation can inform potential regulatory approaches because CDDs and CDFs result primarily from combustion and industrial processes, whereas PCBs, which persist in the environment from the 1970s and earlier, are no longer being manufactured. As shown in table 1, EPA estimated that the average adult in the United States is exposed daily to about 63 picograms of dioxins through dietary intake, with the highest exposure coming from beef and freshwater fish and shellfish. According to EPA, this exposure level is close to the level that has caused adverse noncancer effects in animals, such as effects on the development of reproductive systems. It is important to note that EPA’s dietary exposure estimates are averages, and they do not apply to adults with additional or unusual exposure to dioxins—for example, from diets unusually high in fat content or diets of foods high in dioxin content. To estimate any population’s dietary intake of dioxins, the specific dioxins present in the various foods must be identified and measured through chemical analyses of the foods. However, reliable estimates of the average concentrations of dioxins in specific foods nationwide have only recently begun to be available. In the past, data were available only from studies of dioxin concentrations in a specific food product or products in a specific location or a few locations, and these data were not sufficient to reliably estimate average national exposure.
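For perspective, EPA's 63-picogram estimate can be expressed per kilogram of body weight using the 70-kilogram (154-pound) reference adult on which the USDA intake data discussed later are based; the short calculation below is illustrative arithmetic on the report's own figures, not a figure taken from the draft report itself.

```python
# Illustrative arithmetic on the report's own figures: EPA's estimated
# average dietary exposure of about 63 pg of dioxin toxic equivalents per
# day, expressed per kilogram of body weight for a 70-kg reference adult.
daily_intake_pg = 63.0   # pg TEQ per day (EPA draft reassessment estimate)
body_weight_kg = 70.0    # reference adult weight used with the intake data

print(f"{daily_intake_pg / body_weight_kg:.2f} pg TEQ per kg per day")  # 0.90
```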
During the 1990s, as analytical capabilities to detect dioxins at parts-per-trillion levels were developed, new studies of foods in the United States, some with broader scope than the earlier studies, became available. EPA has incorporated new studies into its analysis of dietary exposure to dioxins to try to develop more reliable national estimates of such exposure. As a result, the estimates presented in the October 2001 draft reassessment report are based on more food data than were available for the drafts developed just a few years earlier. Nevertheless, in its October 2001 draft reassessment report, EPA said that the amount and the representativeness of the food data it used to estimate the average U.S. adult’s dietary exposure to dioxins vary. Further, EPA officials acknowledged that some of the available studies were not designed to estimate national average exposures. As discussed below, the food sample data are limited in part by the timing of the sampling, variations in the methods used to collect the samples, and the types of samples collected and analyzed. In commenting on a draft of this report, EPA officials said that these food data limitations do not represent major weaknesses in the agency’s estimates of dietary exposure to dioxins. As reported in the draft reassessment, most of the food samples were collected between 5 and 8 years ago. Current samples would be expected to have lower dioxin levels because emissions containing dioxins declined by about 75 percent from 1987 to 1995, and EPA believes the downward trend is continuing. Nevertheless, EPA believes that the exposure estimates based on food data from the mid-1990s are representative of current dietary exposure for several reasons. First, EPA believes that because most of the food samples the agency used for its reassessment were collected after the 75-percent decrease in emissions, much of the decrease should already be reflected in the foods’ dioxin concentration numbers. Second, EPA said that, because most municipal and medical waste incinerators are located far from and downwind of concentrated meat and dairy production areas, the impact of any emission reductions since 1995 on the commercial food supply should be proportionately less than on the environment in general. Third, EPA said that because reservoir sources of dioxins account for half or more of current exposure, and because some sources of dioxins are unknown, it is unlikely that emission reductions that occurred after most of the food samples were taken would significantly affect the current estimate of general population exposure from the commercial food supply. According to EPA, while its analyses of some of the foods are based on national samples collected from food processing or food monitoring locations, such as federal slaughtering establishments, other analyses are based on limited “market basket surveys”—random purchases of selected products, such as eggs, direct from grocery stores—in a small number of U.S. and Canadian cities. Depending on their design, national surveys would generally be more representative of average dietary exposures to dioxins than limited surveys. Some of the analyses of the foods were derived from individual food samples, while others were from composite samples. Using composites is more economical than using individual samples, and EPA believes they are appropriate for use in analyzing dioxin concentrations to establish average, or mean, exposure estimates.
However, EPA acknowledges that data on the variability or range of results from individual samples typically are not available from studies analyzing composite samples. As a result, information that can provide insight into the reliability of the estimates is not available. Table 2 shows the number(s), type(s), and date(s) of the samples EPA used for each of the 10 food categories. According to the draft reassessment report, data from some of the food studies were sufficient to estimate exposure to total dioxins—the CDDs, CDFs, and PCBs. However, the report shows that in other cases, the data only provided support for estimating exposure to CDDs and CDFs. As a result, for four food categories, the studies used to estimate exposure to CDDs and CDFs differ from those used for PCBs. As reported in the draft reassessment, these studies analyzed fewer samples, a number of which were collected 14 or more years ago and therefore provide data on dioxins that may not reflect current levels. Table 3 shows information on the four foods for which EPA used different samples to estimate exposure to PCBs than those used to estimate CDDs and CDFs. As EPA acknowledges in the draft report, its analyses of dioxins present in foods are based on uncooked foods, even though dioxin levels can be different in cooked and uncooked foods. According to EPA, while many studies indicate that foods have similar dioxin concentrations whether they are cooked or uncooked, the studies show that some foods have lower concentrations of dioxins when they are cooked, while others have higher levels when they are cooked. These differences reflect, in part, the fact that different cooking methods (frying, boiling, grilling, etc.) may have different effects on dioxin levels. On the basis of the available data, which it believes are not conclusive, EPA states in the draft reassessment report that uncooked food is a reasonable surrogate to use for identifying and quantifying dioxin concentrations in cooked food. Because the primary focus of EPA’s exposure assessment was on foods produced and consumed in the United States, EPA’s analysis does not address imported food products that may vary from domestic sources in dioxin content. Despite these limitations, the data on dioxin levels in foods supporting the October 2001 draft report reflect a significant improvement compared with the data EPA had available for use in its 1994 draft reassessment report, which was peer reviewed in 1995 by EPA’s Science Advisory Board. Specifically, in the 1994 draft, EPA provided estimates of levels of CDDs and CDFs for seven food types; the October 2001 draft provides estimates for 10 food types. With the exception of an estimate for fish that was based on 60 samples, the 1994 draft estimates were developed from samples ranging in number from 2 to 14; as table 2 shows, the number of samples used for the 2001 draft is greater. In addition, while EPA recognized that PCBs were being identified in foods, the agency did not have sufficient data at that time to develop estimates of the levels of specific PCBs in foods; the 2001 draft does include estimates of PCB levels in foods. The following sections describe in greater detail the samples EPA used to identify the level of dioxins in 9 of the 10 foods studied—beef, pork, and poultry; freshwater and marine fish; milk, dairy, and eggs; and vegetable fat—and any associated limitations or uncertainties.
(The draft reassessment report does not provide any information supporting EPA’s estimate of the types and amounts of dioxins in other meats, the tenth food type. In commenting on a draft of our report, EPA said that information on other meats would be provided in its final report.) In estimating exposure to dioxins from beef, pork, and poultry, EPA used data from the first statistically designed national surveys of dioxin levels in these foods sponsored by EPA and USDA. These surveys were designed to be representative of all U.S. regions and all classes of animals slaughtered in federally inspected slaughtering establishments. EPA believes the three surveys provide reasonable estimates of the average national concentrations of dioxins in beef, pork, and poultry. Nonetheless, information EPA provided in the draft reassessment report about these samples identifies some limitations and uncertainties about these studies. The samples are now between 6 and 8 years old and therefore may not reflect current exposures. To address this data gap, EPA and USDA are conducting a follow-up study on dioxin levels in beef, pork, and poultry that will commence in 2002 and provide updated information. However, EPA officials said the results of this survey will likely not be available for incorporation into the dioxin reassessment report that EPA plans to publish this year. The animal samples for beef, pork, and poultry were not meat products sold in grocery stores but rather were cuts of fat generally not consumed—either back fat, abdominal fat, or belly fat from slaughtering establishments. Some uncertainty therefore surrounds the accuracy of EPA’s estimates of dietary intake of dioxins because of comparability concerns. EPA used this approach because USDA federal inspectors could obtain the samples with little disruption to the slaughtering establishments and because the samples’ high fat content would enable more accurate measurement of dioxins, since the analysis would be of highly concentrated fat samples. However, this approach assumes that edible meat products sold in grocery stores contain the same types and amounts of dioxins as the fat samples (adjusted for differences in percentages of fat). According to EPA, this assumption is supported by a well-developed understanding of the manner in which dioxins distribute across fat reservoirs in vertebrates. Therefore, EPA concluded that the fat samples for all three foods were comparable to the edible meat samples. EPA also based its conclusion on its analysis of beef samples—comparing five back fat samples with other cattle parts, including muscle tissue, which could be representative of edible beef products. For the five samples, the ratios of CDDs and CDFs in muscle fat to CDDs and CDFs in back fat varied by up to 300 percent, ranging from 0.58 to 1.7; and the ratios for PCBs varied by up to 50 percent, from 1.0 to 1.5. Although some of the variation may result from imprecision inherent in measuring picograms, this limited analysis indicates that using fat samples may overstate or understate to some extent the dioxin levels in beef, pork, and poultry products. EPA reported that it excluded 2 of the 80 samples of abdominal fat from poultry because they had significantly higher concentrations of certain dioxins than the other samples. 
EPA, USDA, and the Food and Drug Administration investigated the cause of these elevated dioxin levels and determined that it stemmed from contaminated animal feed that had been distributed to poultry, fish, hog, and cattle producers in several southern and southwestern states. EPA considered the two poultry fat samples inappropriate for the dioxins study, which was aimed at identifying typical exposures to dioxins. However, it is not clear that the poultry samples with high concentrations of dioxins were anomalies because the incidence of dioxin contamination in animal feeds is not known. For example, this instance of contaminated animal feed was discovered through the first national poultry survey, which tested only 80 samples nationwide. In response to suggestions from a peer review panel, when the data were sufficient to do so, EPA presented a standard deviation—the typical amount of variability around the mean—for its estimates of the average levels of dioxins in the foods, as well as the range of the levels of dioxins identified in the samples. For beef, pork, and poultry, EPA was able to provide this information for the CDDs and CDFs. These data indicated considerable variability in the levels of CDDs and CDFs in the foods. For example, the estimated level of 0.28 picograms of dioxins in a gram of pork has a standard deviation of plus or minus 0.28. In other words, the standard deviation is equal to or greater than the mean. Accordingly, the estimated dioxin level is subject to a wide range of uncertainty. Because EPA did not have sufficient information to develop a standard deviation for PCBs, the agency could not develop a standard deviation for total dioxins (the combination of CDDs, CDFs, and PCBs) in beef, pork, and poultry. As a result, EPA could not state with any degree of certainty that exposure to total dioxins or to PCBs would fall within specified levels. EPA believes this limitation is a minor one because it considers the average exposure level, rather than the more limited extreme exposures, to be of greater public health interest. Nonetheless, this additional analysis, if available, would enable policymakers, scientific peer reviewers, and other users to better evaluate the extent to which the data may be representative of average national exposures. Though EPA analyzed more fish samples for the current reassessment draft than for earlier drafts, the current draft report acknowledges that the levels of dioxins in fish are more uncertain than those in the other foods for two reasons. First, the data lack the “geographic coverage and statistical power” of the other food surveys. That is, while the sample sizes for CDDs and CDFs in fish are considerably larger than those used for the analyses of other foods, they do not provide data that are nationally representative because of the diversity of fish and bodies of water. Specifically, there are a significant number and variety of freshwater and marine fish species living in numerous bodies of water that contain differing types and levels of dioxins. Moreover, fish consumed in the United States include both farm-raised and wild fish. Second, EPA based its estimates for levels of PCBs in fish on a much smaller data set than it used for CDDs and CDFs. EPA used 222 samples to estimate the levels of CDDs and CDFs in freshwater fish and shellfish and 158 samples for marine fish and shellfish compared with 7 and 6 composite samples for PCBs for freshwater and marine fish, respectively.
Further, most of the samples for PCBs were from Canadian rather than U.S. cities, and the analyses of levels of PCBs in them did not evaluate all of the PCBs identified as being toxic. For example, according to the report, only one of the composite samples for marine fish and shellfish, collected between 1984 and 1986, was analyzed for the presence of the most common and toxic PCB, referred to as PCB-126. For these reasons, EPA acknowledges in the draft report that the resulting estimates are not representative of the level of dioxins in fish nationally. We note that the limitations of the data used to estimate the levels of PCBs in fish are particularly significant because in the report, EPA estimates that freshwater fish contains the highest levels of PCBs (and total dioxins) of all the foods studied. The samples EPA used to estimate the levels of dioxins in fish were derived from EPA’s National Bioaccumulation Study and three market basket surveys in the United States and Canada. Samples for the bioaccumulation study were collected between 1986 and 1989, whereas the samples for the market basket surveys were collected about a decade later, between 1995 and 1999. Some of the limitations and uncertainties associated with these samples that EPA acknowledged in the draft report are highlighted below. Most of the fish samples used for the reassessment draft were collected 5 or more years ago; some are between 13 and 16 years old. EPA did not have sufficient data to estimate exposure to PCBs from eating freshwater or marine shellfish. Some of the estimates for freshwater fish, such as trout, are based on samples from the bioaccumulation study that may be more representative of wild fish (i.e., fish caught in recreational fishing) than fish typically purchased by the general population at grocery stores, which is largely farm-raised. Specifically, in cases in which EPA did not have data on farm-raised freshwater fish or fish purchased in grocery stores, the agency used the concentration of CDDs and CDFs from samples of wild-caught fish from the bioaccumulation study. This use of older data on wild fish increases the uncertainty about the representativeness of EPA’s exposure estimate. For some fish species, such as mullet and mackerel, estimates were based entirely on samples collected in the Mississippi area and therefore may not be representative of levels seen in other locations. EPA did not have sufficient data to estimate a standard deviation for the average levels of dioxins in freshwater or marine fish. As a result, EPA cannot state with any degree of certainty what the related dietary exposure to dioxins is. The milk samples upon which both the milk and dairy estimates are based came from a national survey. In this survey, samples were collected during the four seasons, providing information on seasonal (temporal) variations. The milk samples were collected from 51 sampling stations, located in a majority of the states, that support EPA’s Environmental Radiation Ambient Monitoring System. In contrast, the estimates for CDDs and CDFs in eggs are based on Food and Drug Administration market basket surveys in 1997 in California, Georgia, Minnesota, New York, Ohio, Oregon, Pennsylvania, and Wisconsin. The estimates for PCBs in eggs are based on market basket surveys in San Diego, California; Atlanta, Georgia; Binghamton, New York; and five major Canadian cities. EPA used composite samples of milk and eggs to identify and measure the presence of specific dioxins in milk, dairy, and eggs.
Information provided in the draft report identifies some limitations associated with these data. Most of the milk samples were collected 6 years ago. The egg samples used to support the analyses for CDDs and CDFs were collected 5 or more years ago. The estimates for PCBs in eggs are based, in part, on samples obtained in five Canadian cities between 14 and 16 years ago. Only one of the six composite samples used to estimate the level of PCBs in eggs was analyzed for the presence of the most common and toxic PCB. According to EPA, its estimates of CDDs and CDFs in vegetable fat were developed from a market basket survey that was not representative of edible oil consumption in the United States. The 30 samples of various oils, solid shortening, margarine, and an oil spray were obtained from grocery stores in nine U.S. cities or metropolitan areas: Chicago, Illinois; Cincinnati, Ohio; Denver, Colorado; Miami, Florida; Minneapolis, Minnesota; Salt Lake City, Utah; San Antonio, Texas; San Francisco, California; and the Washington, D.C., metropolitan area. Although neither the reassessment report nor the study (published in 1996) states when the samples were collected, there is typically at least a 1-year lag between collection and publication, indicating that the samples were collected 8 or more years ago. EPA used limited data to estimate the level of PCBs in vegetable fat. This estimate is derived from five composite samples of cooking fats and salad oils, each of which was obtained 14 or more years ago from one of five major (unidentified) Canadian cities. As a result of these limitations that EPA identified in the draft report, the estimate for dietary exposure to dioxins from eating vegetable fats is unlikely to reflect current average dietary exposure in the United States. After using the chemical analyses discussed previously to identify the types and quantities of dioxins present, EPA estimated the toxicity of the dioxins in the 10 types of foods, using measures called toxic equivalency factors. (As noted earlier, these measures—called TEFs—are used to create a frame of reference by comparing the potential toxicity of individual dioxins in a sample with the toxicity of the most toxic and best understood dioxin, TCDD, which is assigned a TEF of 1.) EPA used the TEFs that were updated by WHO in 1997. For each of the types of foods, EPA multiplied the measured amount of each dioxin present by the related TEF to arrive at a “dioxin toxic equivalence value” for that particular food category/dioxin combination. For each food category, the total dioxin toxic equivalency is the sum of these products—that is, the sum of the toxic equivalence values for (1) CDDs and CDFs and (2) PCBs. This provides an indicator of the relative toxic concentration of dioxins in each food category. As table 4 shows, EPA estimated that freshwater fish and shellfish had the largest per-gram concentration of dioxins with toxic effects. The toxic equivalence approach using TEFs has evolved over the last 20 years and is the internationally accepted scientific approach for risk assessments of dioxins. This approach has been formally adopted by several countries and as guidance by international organizations, such as WHO. TEFs are used to decrease the overall uncertainty in assessing the health risks of dioxins because they provide a framework for addressing the complex mixtures of dioxins to which people are most often exposed. Nonetheless, a number of uncertainties are involved in the use of the TEF concept.
As a result of these uncertainties, estimates of the concentrations of dioxins in foods based on this approach may be overstated or understated. The draft reassessment report acknowledges that there are still many questions about the use of the TEF method and the validity of some of the underlying assumptions. The report states that many assumptions are necessary because of lack of data. Specifically, the derivation of TEFs is limited by the amount of available data on the relative potency of different dioxins compared with TCDD. For many dioxins, the available data on relative potency may be limited to only a few experimentally observed effects. Some of these effects may not be considered toxic by themselves, but they still might provide evidence that exposure to dioxins led to biological or chemical effects in experimental subjects. For example, EPA noted that only TCDD and one mixture of certain dioxins have been tested for carcinogenicity. Therefore, in order to develop a TEF that estimates the cancer potency of a mixture including other dioxins, scientists have assumed that the relative potencies observed for noncancer effects approximate those for cancer. In other words, once derived, TEFs apply to all effects, not just those for which relative potency data were available. Nonetheless, after considering a number of the uncertainties and limitations of this approach, the international experts who derived the current TEFs concluded that the TEF concept is still the most plausible and feasible approach for risk assessment of dioxins. Furthermore, the TEF values for individual dioxins are reevaluated and updated periodically to reflect the available evidence. When WHO established the most recent TEFs in 1998, it suggested that the toxic equivalency scheme be reevaluated every 5 years and that the TEFs and their application to risk assessment be reanalyzed to account for emerging scientific information. To develop its estimate of the daily dietary intake of dioxins by the average adult in the United States, EPA needed to calculate the amount of food containing dioxins that Americans typically eat. EPA obtained this information for the 10 food types from USDA food intake surveys. The USDA survey data include information on the amounts of specific foods consumed in a day by an average person weighing 70 kilograms (154 pounds). USDA obtained its data from detailed food surveys prepared by thousands of individuals selected from statistical samples. In these surveys, individuals generally provided detailed information on food consumption for 2 days. The surveys used statistical sampling to ensure that all seasons, geographic regions of the United States, and demographic and socio-demographic groups were represented. EPA’s analysis of these data tabulated intake rates for the major foods, as well as for individual food items. The total quantity of each food eaten by the survey population in a survey day was tabulated and weighted to represent the quantity eaten by the entire U.S. population in a typical day. For the draft reassessment report, EPA averaged USDA’s data for three age groups of adults ranging from ages 20 to 70 and over. Table 5 provides EPA’s estimates of the daily dietary intake of 10 food types by adults in the United States.
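A minimal Python sketch of how the two inputs just described combine—TEF-weighted concentrations (as in table 4) and daily intake rates (as in table 5)—into a dietary exposure estimate; all numbers are placeholders rather than the values in EPA's tables.

```python
# Combining the two inputs: a TEF-weighted TEQ concentration for each food
# (pg TEQ per gram, analogous to table 4) and the average daily intake of
# that food (grams per day, analogous to table 5). All values below are
# placeholders, not the figures in EPA's tables.
teq_pg_per_g = {"beef": 0.20, "freshwater fish": 1.20, "milk": 0.02}
intake_g_per_day = {"beef": 40.0, "freshwater fish": 6.0, "milk": 175.0}

total = 0.0
for food, teq in teq_pg_per_g.items():
    exposure = teq * intake_g_per_day[food]  # pg TEQ per day from this food
    total += exposure
    print(f"{food}: {exposure:.1f} pg TEQ/day")
print(f"total dietary exposure: {total:.1f} pg TEQ/day")
```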
While EPA prefers to use USDA food data from the 1989-91 Continuing Survey of Food Intakes By Individuals because it has conducted a statistical analysis of these data and includes them in the agency’s Exposure Factors Handbook, the draft reassessment report uses other data for fish and does not provide information on the basis for its estimates of dietary intake of other meats. Specifically, the draft reassessment report derived its estimates of the daily dietary intake of beef, pork, poultry, milk, dairy products, vegetable fats, and eggs from USDA’s Continuing Survey of Food Intakes By Individuals conducted from 1989 through 1991. In contrast, the daily dietary intakes of freshwater and marine fish and shellfish were derived from a March 2000 report on the consumption of fish prepared by EPA’s Office of Water. This report used data from USDA’s Continuing Survey of Food Intakes By Individuals conducted in 1994, 1995, and 1996. In that report, EPA weighted the species-specific dioxin concentrations by species-specific fish consumption rates for the U.S. population to estimate exposure to dioxins from fish. However, in cases where species-specific concentration data were not available, EPA used default values. For example, EPA used data from the bioaccumulation study as the default for certain freshwater fish. The use of various default assumptions adds uncertainty to the exposure estimates. EPA officials said that the agency did not use more current dietary intake data from USDA in the October 2001 draft reassessment because EPA had not yet fully reviewed the surveys conducted after the 1989-91 surveys that it uses in its Exposure Factors Handbook. EPA officials told us that they did not believe it was necessary to use more current data because the dietary habits of Americans have changed very little over the course of the past decade. These officials cited data collected in surveys conducted between 1994 and 1996 that show little change in the intake of the 10 foods compared with surveys conducted between 1989 and 1991. EPA and WHO have undertaken extensive efforts to reassess the health risks of exposure to dioxins. EPA’s comprehensive dioxin reassessment objective has been to characterize the potential human health risks posed by exposure to dioxins. To do this, EPA used an extensive, multiyear review process. In contrast, WHO had more narrowly focused primary objectives and conducted its reassessments of dioxins through a succession of individual reviews and meetings. Nonetheless, EPA and WHO used very similar analytical methods to identify the types of potential human health hazards associated with exposure to dioxins and assess the probability and severity of harm given different levels of exposure. Moreover, the conclusions EPA and WHO reached on the basis of their respective reassessments also reflected much agreement. However, there were some significant issues on which EPA and WHO differed, such as whether there are threshold doses of dioxins to which humans could be exposed over a lifetime without significant risk of cancer and whether dioxins other than TCDD are human carcinogens. In general, both EPA and WHO focused their evaluations of the health effects and risks associated with dioxins on TCDD and 28 other related chemical compounds (including 12 dioxin-like PCBs) for which consensus toxic equivalency factors had been established through a 1997 meeting organized by WHO.
However, there were important differences in some of the specific objectives of EPA’s and WHO’s dioxin reassessments and the processes used by EPA and WHO to develop the reassessments. EPA’s overall objective has been very broad: to characterize the available scientific information on the potential health risks posed by exposures to dioxins. EPA therefore addressed each of the four major components of a chemical risk assessment: hazard identification, dose-response assessment, exposure assessment, and risk characterization. The resulting characterization of risks posed by dioxins can be used to inform risk management decisions, such as whether and where to set or revise regulatory standards, but other information and factors would also enter into such decisions. The process by which EPA has undertaken this task has been a comprehensive, multiyear review. Moreover, the EPA reassessment has included multiple independent scientific peer reviews of various draft reports by EPA’s Science Advisory Board and others. EPA has also solicited public review and comments on its draft reassessment. (The updated TEF values are published in Martin Van den Berg et al., “Toxic Equivalency Factors (TEFs) for PCBs, PCDDs, PCDFs for Humans and Wildlife,” Environmental Health Perspectives, Vol. 106, No. 12 (1998): 775-792.) In contrast to an integrated process such as EPA’s, WHO’s process consisted of individual evaluations and meetings for each of its particular objectives. In 1997, the International Agency for Research on Cancer (IARC), a chief contributor to WHO’s dioxin risk assessments, published monographs covering TCDD and 16 other dioxins. (This agency publishes the results of its evaluations of specific chemicals in its series IARC Monographs on the Evaluation of Carcinogenic Risks to Humans. In the rest of this report, we call the monographs covering TCDD and other dioxins the 1997 cancer monographs.) The primary objective leading to these monographs was to classify TCDD and other specific dioxins under a standard scheme that identifies whether and under what circumstances substances are human carcinogens. Essentially, this objective corresponds to the hazard identification step of EPA’s four-step risk assessment process. Under its activities related to the European Centre for Environment and Health, WHO organized two meetings of experts addressing issues on the health effects of dioxins. In June 1997, WHO convened experts in Stockholm, Sweden, to derive consensus toxic equivalency factors for 29 dioxins that could be used for human, fish, and wildlife risk assessments. In May 1998, WHO convened 40 experts from 15 countries in Geneva, Switzerland, to evaluate scientific data on the health risks and exposures of dioxins with the principal objective of updating the estimated amount of dioxins to which humans can be exposed daily without appreciable harm. In the rest of this report, we call these efforts, respectively, the 1997 TEF meeting and the 1998 consultation. At the 57th meeting of the Food and Agriculture Organization of the United Nations/WHO Joint Expert Committee on Food Additives in June 2001 in Rome, Italy, the committee for the first time evaluated the risks associated with the presence of dioxins in food. The participants specifically evaluated dioxins (among other specific food additives and contaminants), with the view toward recommending acceptable intakes for dioxins contained in foods. The committee used the 1998 consultation’s assessment as the starting point for its evaluation but took into account newer studies.
In the rest of this report, we call this evaluation the 2001 food additives meeting. Appendix I highlights some of the major milestones in the EPA and WHO assessments of dioxin risks, with a particular focus on the reassessment efforts that both entities began in the 1990s. Despite differences in some of the specific objectives and processes of their respective reassessment efforts, EPA and WHO used similar analytical methods to identify and assess the potential health risks of dioxins. Through these analyses, EPA and WHO identified the types of potential hazards that might be associated with exposure to dioxins, the circumstances under which these substances could cause adverse effects, and the probability and severity of expected effects given different levels of exposure to dioxins. Specifically, both EPA and WHO reviewed available scientific data from many studies of humans and animals covering a variety of effects potentially associated with exposure to dioxins; continued to consider cancer risks, as in the original dioxin risk assessments, but also paid increasing attention to noncancer health effects, such as changes in reproductive and developmental functions and the immune and nervous systems, as well as other health problems, such as chloracne (a chronic and disfiguring skin disease) and alterations in liver enzyme levels; reviewed evidence regarding other biochemical, molecular, or cellular effects that have been observed in various studies, agreeing that these effects might be precursors to subsequent adverse effects; and considered a range of analytical methods, models, and approaches to assess the dose-response relationships for exposure to dioxins. EPA and WHO also used some analytical concepts and methods that they agreed were more appropriate to the analysis of dioxins than those that are often used for risk assessments of other chemicals. For example, both entities used body burden—the concentration of dioxins in the body—instead of other dose measures, such as daily intake, to compare risks between humans and animals and determine doses that would be of equivalent risk in humans and animals. The organizations also concurred that the concept of toxic equivalency should be used to facilitate risk assessment of dioxins and complex mixtures of dioxins. Furthermore, in contrast to chemical risk assessments in general, EPA and WHO often had sufficient data to focus on the dose level associated with a 1-percent increase in a particular effect (rather than being limited to the level associated with a 10-percent increase) and seldom had to extrapolate outside the observed doses or exposures from the studies that they used to prepare the reassessments. Much of the scientific data available to EPA and WHO on the potential effects of exposure to dioxins came from animal studies, mainly studies of TCDD on a variety of species. (According to WHO, most other dioxins and dioxin-like compounds are “relatively poorly studied” compared with TCDD.) However, EPA’s and WHO’s recent reassessment efforts also benefited from the increasing quantity and quality of data on the effects of dioxins in humans that became available during their reassessments. Among the sources of these human data were studies of occupational exposure of people who produce and apply herbicides; residents in a contaminated area of Seveso, Italy (where an accident at a chemical factory had released a cloud of toxic chemicals, including dioxins, in 1976); and noncancer effects in infants and children.
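The body-burden dose metric mentioned above can be illustrated with a simple one-compartment, first-order elimination model in Python; the absorbed fraction and half-life used below are assumptions for illustration only, not values from the EPA or WHO assessments.

```python
# One-compartment, first-order sketch of the body-burden dose metric: a
# constant daily intake balanced against exponential elimination gives a
# steady-state amount of dioxins in the body. The absorbed fraction and
# half-life are illustrative assumptions only.
import math

daily_intake_pg = 63.0        # pg TEQ/day, EPA's average dietary estimate
absorbed_fraction = 0.8       # assumed fraction of intake absorbed
half_life_days = 7.0 * 365.0  # assumed elimination half-life of ~7 years

k = math.log(2) / half_life_days              # first-order elimination rate
steady_state_pg = daily_intake_pg * absorbed_fraction / k
print(f"steady-state body burden: {steady_state_pg / 1e6:.2f} micrograms TEQ")
```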
The conclusions EPA and WHO reached on the basis of their respective dioxin reassessments were frequently similar, but some significant differences also emerged. With respect to the major areas of agreement, both EPA and WHO concluded that
• TCDD is a human carcinogen and dioxins can cause a variety of both cancer and noncancer health effects,
• dioxins act in the same way within the body to cause the effects observed in animals and humans,
• dioxins adversely affect human health at lower exposure levels than previously estimated, and
• some effects could occur at or near the levels to which the general population is now being exposed.
EPA and WHO not only concurred at the broad level of these conclusions but also on many of the supporting details. For example, both entities had similar reasons for concluding that TCDD is a human carcinogen: the combination of sufficient evidence that TCDD causes cancer in animals, more limited evidence of carcinogenicity from human data, and strong evidence that TCDD operates through the same mode, or mechanism, of action in animals and humans. The major differences of opinion between EPA and WHO concerned whether (1) there is a threshold below which exposure to dioxins would not be expected to cause cancer, (2) it is useful to calculate a "tolerable" dose of dioxins or estimate a dose without appreciable risk of deleterious effects to which humans can be exposed over a lifetime, and (3) both mixtures of dioxins and dioxins other than TCDD are likely human carcinogens. In addition, EPA quantified the general population's possible additional risk of developing cancer from exposure to dioxins, while WHO did not. Such differences may make it more difficult for interested parties to compare the results of EPA and WHO dioxin risk assessments. The following sections provide additional information on each of these differences. EPA and WHO disagreed about whether there is a threshold below which exposure to dioxins would not cause cancer. EPA concluded that available evidence was insufficient for the agency to depart from its default linear cancer risk assessment approach, which is based on an assumption that no threshold exists regarding adverse effects (i.e., any exposure to carcinogenic substances, no matter how small, poses some risk of developing cancer). In contrast, WHO concluded that there is a threshold for all adverse effects, including cancer. Specifically, WHO concluded that dioxins do not initiate cancer through a direct effect on genetic material (that is, they are non-genotoxic carcinogens) and, therefore, do not warrant a linear (no threshold) assessment of risk. WHO also concluded that noncancer health effects occurred at lower body burdens (concentrations) of dioxins than the body burdens at which cancer occurred in animals. Accordingly, WHO determined that establishing a tolerable intake based on estimated thresholds for noncancer effects would also address any cancer risks (that is, if the intake were set to avoid appreciable noncancer health consequences, it should also avoid appreciable consequences concerning cancer). WHO programs estimated a tolerable daily intake for dioxins in 1998 and a tolerable monthly intake in 2001. These measures represent the amounts of dioxins that the WHO experts believe a human could ingest daily or monthly for a lifetime without appreciable health consequences.
Expressing these estimates as "tolerable" intakes generally does not connote that such intakes are acceptable or risk free, but rather that any health consequences would be judged tolerable while exposures continue to be reduced. EPA's related (but not identical) measure is the reference dose, which would estimate a daily exposure to the human population, including sensitive subgroups, that is likely to be without an appreciable risk of deleterious effects during a lifetime. EPA, however, chose not to calculate a reference dose for dioxins, as it generally does for noncancer health assessments of other substances. According to EPA, it did not do so because any reference dose that it would recommend for dioxins would likely be below (perhaps considerably below) the current background intake levels and body burdens of the U.S. population. EPA pointed out that reference doses are typically calculated to address the risks of incremental exposures over background exposure. In the case of dioxins, however, background exposure is a significant component of total exposure. Therefore, in EPA's opinion, a reference dose would be uninformative to risk managers conducting safety assessments. EPA also noted that, if it were to set a reference dose, its estimate likely would be more stringent than the tolerable intake levels for dioxins proposed by WHO because EPA's traditional approach for setting a reference dose gives more weight to scientific uncertainties than the approach WHO used in setting its tolerable intake level. EPA chose instead to use an alternative approach, the margin of exposure, to characterize noncancer risks. The margin of exposure is a ratio that shows how far the actual (or estimated) total human exposure to a particular substance is from levels at which adverse effects have been demonstrated to occur in human or animal studies. The margin of exposure is an alternative way of characterizing the likelihood that noncancer effects may be occurring in the human population at environmental exposure levels. A reference dose, on the other hand, estimates a level of exposure below which EPA considers it unlikely that any adverse effects will occur. EPA generally considers margins of exposure of 100 or more as adequate to rule out the likelihood of significant effects occurring in humans. However, for the most sensitive effects identified with dioxins (i.e., those that occurred at the lowest doses of exposure), the margins of exposure ranged from 15 to less than 1. EPA and WHO both characterize TCDD as carcinogenic to humans. While EPA further characterizes other individual dioxins and mixtures of dioxins as "likely to be human carcinogens," WHO does not. Specifically, WHO states that the carcinogenicity of dioxins other than TCDD cannot be determined because of insufficient data. This difference of opinion largely reflects the specific objectives and scopes of EPA's and WHO's assessments. EPA's conclusion reflects a "weight of the evidence" judgment—that is, it is based on EPA's entire reassessment of dioxins (resting, in particular, on the conclusion that all dioxins share a similar mode of action and using evidence from both animal and human studies). In contrast, WHO's cancer monographs looked only at individual dioxins, focusing on whether they met specific criteria. Consequently, WHO's conclusions reflected a narrower data set and did not address the risks posed by mixtures of dioxins.
However, because most human exposure is to mixtures rather than individual dioxins, and both EPA and WHO advocate using the same toxic equivalency factors for assessing the dioxins in such mixtures, any differences in the carcinogenicity classifications may have little practical impact. Quantifying the lifetime cancer risk to the general population from exposure to dioxins was an important component of EPA’s dioxin reassessment. EPA estimated that the upper bound on the general population’s lifetime risk for all cancers from dioxins might be on the order of 1 in 1,000 or more (i.e., people might experience a 1 in 1,000 increased chance of developing cancer over their lifetime because of exposure to dioxins). EPA’s reassessment also states that the vast majority of the population is expected to have less risk per unit of exposure and some may have zero risk. WHO did not carry out such a quantitative assessment of the general population’s cancer risk for two main reasons. First, calculations of population risk are beyond the scope of WHO’s IARC cancer monographs, which evaluate whether and under what circumstances particular substances could pose a cancer risk to humans but generally do not provide quantitative risk estimates. Second, as noted previously, WHO’s conclusion about a cancer threshold for dioxins led it to focus on noncancer effects when deriving tolerable intake levels for dioxins. However, WHO did explore the calculation, through modeling, of a cancer “benchmark dose,” the dose or body burden estimated to result in a 1-percent increase in cancer mortality. But WHO noted that its estimates for this benchmark dose ranged quite widely and strongly depended on the assumptions made during the modeling. Appendix II provides a more detailed comparison of the EPA and WHO conclusions regarding a number of major issues covered by the entities’ dioxin risk assessments. Two independent peer review panels, including an EPA Science Advisory Board panel, reviewed major sections of EPA’s draft dioxin reassessment report in 2000. Both panels generally agreed with a number of key assumptions and approaches that EPA used to develop its updated health risk assessment of dioxins. Each of the peer review panels had a number of recommendations and suggestions for EPA to address or consider, most of which focused on the approaches and methodologies used to depict the health risks associated with dioxins. EPA made a number of revisions to its draft report in response to these recommendations and comments. The peer review panels disagreed with EPA on a few major points, and the Science Advisory Board panel emphasized the need for additional research to bridge gaps in data. Both an independent expert peer review panel and one convened by EPA’s Science Advisory Board reviewed the draft reassessment report on dioxins in 2000. These reviews resulted in part from the Board’s review of an earlier version of EPA’s draft reassessment report. In 1995, a Board panel had reviewed the draft reassessment and requested that EPA make substantive revisions to the chapter on dose-response modeling and to the Integrated Summary. The Board had also requested that EPA develop a separate chapter on toxicity equivalence factors and submit the revised dose-response and new toxicity chapters to external peer review before the next Board review of these sections. In response, EPA revised the chapter on dose-response modeling and had it peer reviewed in 1997. 
Similarly, EPA wrote a chapter on toxicity equivalence factors and had it peer reviewed as part of the July 2000 review. In July 2000, EPA organized an independent peer review panel to review the revised Integrated Summary and the new chapter on toxicity equivalence factors. To obtain an objective critique, EPA had a contractor select 12 independent individuals with expertise in several technical fields, including risk characterization and communication; toxicology; epidemiology; sources of, and population exposure to, dioxins and related compounds; mechanisms and mode of action; and toxic equivalency. The panel addressed 20 questions about the reassessment report regarding exposure to and the health risks of dioxins. Table 1 of appendix III lists the questions the July 2000 panel addressed in its review. The panel generally agreed with the approaches and methodologies EPA used in its reassessment and noted, among other things, the following:
• Body burden—the concentration of dioxins in the body—is an appropriate "dose metric" (measure) for comparing health risks across species.
• The use of margin of exposure—a ratio that shows how far actual or estimated human exposure is from levels at which adverse effects have been demonstrated to occur in human or animal studies—is a more logical approach to characterizing noncancer risk of dioxins than comparing exposure to a reference dose.
• The report's information on noncancer effects in animals and humans was adequately assembled, and the explanation of why dioxins' effects observed in animals are of concern to humans was also sufficient.
• The history, rationale, and support for the toxicity equivalence approach, which is used to assess risks posed by dioxins and complex mixtures of dioxins on the basis of their toxicity relative to an equivalent dose of TCDD, were adequately presented.
As discussed further below, the July 2000 panel also provided several recommendations and suggestions and identified the topics of greatest concern for finalizing the Integrated Summary. Once the July 2000 panel published its recommendations and suggestions in August 2000, EPA addressed them and sent its revised draft to the Science Advisory Board's dioxin reassessment review subcommittee panel in September 2000. The panel comprised several professors and directors employed by medical institutions and representatives of industry-affiliated research organizations, consulting firms, and state health agencies. The Board panel met to review the revised sections of the draft reassessment report in November 2000. The Board agreed to answer 20 questions on the reassessment report regarding exposure to and the health risks of dioxins. Most of these questions were similar to those asked of the July 2000 panel. The Board panel completed its review and published a report in May 2001. Table 2 of appendix III lists the questions the Board panel addressed in its review.
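Because the margin-of-exposure and cancer slope factor concepts recur throughout the panels' comments, a worked illustration may help. The arithmetic below is a simplified sketch of how the two measures are defined, using figures reported elsewhere in this document (EPA's upper bound slope factor of 1 x 10^-3 per picogram of TCDD per kilogram of body weight per day and an estimated background intake of about 1 picogram of dioxin toxic equivalents per kilogram per day); it is not a reproduction of EPA's actual calculations.

    % Margin of exposure (MOE): the ratio of an observed-effect dose to human exposure
    \[
    \mathrm{MOE} = \frac{\text{dose at which an adverse effect was observed}}{\text{estimated human exposure}}
    \]
    % EPA generally treats an MOE of 100 or more as adequate; for the most
    % sensitive dioxin effects, this report cites MOEs of 15 down to less than 1.
    % Illustration: an effect observed at 15 pg/kg-day against a background
    % exposure of 1 pg/kg-day yields MOE = 15.
    %
    % Upper bound lifetime cancer risk under linear (no-threshold) extrapolation:
    \[
    \text{risk} \approx q \times d = (1 \times 10^{-3}\ \text{per pg/kg-day}) \times (1\ \text{pg TEQ/kg-day}) = 10^{-3}
    \]
    % That is, the "1 in 1,000 or more" upper bound cited earlier in this report.

Because these are upper bound figures, actual risks may be lower; as the report notes, most of the population is expected to face less risk per unit of exposure, and some may face none.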
The Board panel, as the July 2000 panel before it, endorsed several key aspects of the reassessment, noting that, among other things, EPA had
• used appropriate dose metrics, such as body burden, to equate risks across species;
• assembled and distilled a large and diverse body of literature on noncancer effects into a coherent document;
• properly chosen the margin-of-exposure approach to characterize noncancer risks;
• used toxicity equivalence factors to effectively address the joint effects of complex mixtures of dioxins on human health; and
• compiled an outstanding inventory of dioxin sources and effectively characterized the estimates of background exposure to dioxins using the available scientific data.
The Board panel stated that, overall, EPA had prepared a thorough and objective summarization of the data and had addressed the key issues the Board had set forth in its 1995 review of the draft. The Board panel concluded that there was no need to submit further revisions of the reassessment report and that EPA should proceed to complete and release the document. However, as discussed in the following section, the Board panel provided several recommendations and suggestions for EPA to improve the draft document before its release. The Board panel also recognized the need for additional research to bridge gaps in data that limit EPA's ability to determine the magnitude of the health risks associated with dioxins. In essence, the Board panel viewed this reassessment as an interim assessment, recognizing that the data gaps are not likely to be addressed in the foreseeable future. While the peer review panels generally agreed with the methodologies and approaches used by EPA, they made a number of recommendations and suggestions, and the Board asked specifically that the agency address them either before this reassessment is released in 2002 or in a future assessment of dioxins. The panels' recommendations generally reflected either a consensus of the panelists or the opinion of a majority. EPA generally addressed the panels' recommendations and suggestions by performing additional analyses, adding or revising text, identifying the recommendations or suggestions as related to EPA's long-term research goals, or indicating that the data currently available are not adequate to address the recommendation or suggestion. Additional changes are now being made as EPA prepares the draft for external interagency review. Four of the five recommendations by the July 2000 panel regarded improvements EPA could make to the section on health risks associated with dioxins. The July 2000 panel recommended that EPA
• explicitly explain the relationship between body burden and daily intake, serum levels, and tissue dose;
• include a table in the final reassessment report summarizing the various noncancer effects observed in animals and humans at low-level exposures;
• improve the methodologies used in determining the cancer risks of dioxins—for example, by providing more detail on exactly how the cancer slope factor for estimating cancer risks of the general population was derived; and
• reexamine the basis for its estimate of the upper bound cancer risks to the general population.
The fifth recommendation of the July 2000 panel involved the use of specific terminology in the exposure section.
In addition, this panel had several suggestions regarding the health risks associated with dioxins, including that EPA
• provide more detail in the Integrated Summary on the implications of using the margin-of-exposure approach rather than comparing exposure with reference doses;
• more clearly describe the significance of the upper bound cancer risks to the general population; and
• add discussion of the uncertainties associated with using various dose metrics, specifically for evaluating childhood risks.
Ten of the 13 recommendations made by the Board panel also focused on the need to improve the section on health risks associated with dioxins. These recommendations included that EPA
• calculate a reference dose to evaluate risk, in addition to using the margin-of-exposure approach, to provide information on the dose below which humans would not be expected to suffer harm;
• improve its margin-of-exposure approach by more clearly explaining its choice to use dose levels associated with a 1-percent increase in a particular effect and also by calculating a dose level associated with the 10-percent increase more commonly used in chemical risk assessments; and
• provide better justification for using a specific dose metric and identify the important data gaps that could affect the results of those choices.
Three of the 13 recommendations asked that EPA improve the section on exposure to dioxins by evaluating the sources that contribute most to dioxins in the food chain, discussing all "special population" exposure in more detail, and extending breast-feeding exposure scenarios beyond 1 year. EPA made many additions and changes to the draft reassessment in response to the peer review reports by both panels. For example, in response to recommendations from both panels, EPA revised and added text in several places to better explain the variety of dose metrics available and why body burden is the best choice for assessing dioxins, while acknowledging that EPA will need to address data gaps on body burden in the future as further research is completed. Tables 1 and 2 in appendix IV highlight the actions EPA took to address both panels' recommendations, suggestions, and concerns. Overall, the peer review panels agreed with EPA's approach to the reassessment, and EPA generally addressed the recommendations, suggestions, and concerns of the peer review panels. In a few cases, EPA disagreed with the panels' recommendations or suggestions. In these cases, the agency explained its position in the text and, in the case of the July 2000 panel, addressed it in a separate written document. For example, although the Board panel had recommended that EPA calculate a reference dose and add it to the text, EPA chose to continue to use only the margin-of-exposure approach and not calculate a reference dose. EPA stated in the revised draft report that a calculated reference dose would be lower than most people's daily exposure and added a more detailed explanation of why it chose to use the margin-of-exposure approach. In addition to disagreeing with EPA on a few key scientific issues, the peer review panels could not agree among themselves in some cases on EPA's findings. In such cases, the panels refrained from making recommendations or suggestions to the agency. For example, members of both peer review panels did not reach consensus on the strength of evidence used by EPA to support the classification of TCDD as a human carcinogen and other dioxin compounds as likely human carcinogens.
EPA officials believe that the weight of scientific evidence on human and animal exposure supports classifying TCDD as a known human carcinogen, a view also held by WHO and the U.S. Department of Health and Human Services. Although neither panel specifically recommended that EPA change its classification of TCDD as a human carcinogen to a lesser category, such as a likely human carcinogen, for various reasons most of the peer reviewers did not endorse EPA's classification. For example, while the July 2000 panel agreed that TCDD is clearly a potent carcinogen in many species of animals, most of the panel thought that human epidemiology studies were too limited, and the results not consistent enough, to serve as a basis for showing increased cancer mortality. As a result, the majority felt that the characterization of TCDD as a known human carcinogen was not justified. Similarly, the Board panel also noted limitations in the scientific data, questioning the epidemiological data that indicated dioxins are carcinogens in humans, as well as the data that supported similar modes of action occurring in both animals and humans. Almost one-half of the Board did not support classification of TCDD as a known human carcinogen for various reasons. Those who did support the classification believed that the results from studies of TCDD-exposed workers were persuasive and that the variety of studies from researchers in different countries provided limited but convincing evidence of TCDD's carcinogenicity in humans. A decade in the making, EPA's draft reassessment report on dioxins was both improved and limited by the passage of time, particularly in estimating the daily dietary intake of dioxins by the typical American adult. That is, EPA was able to include new food studies in the reassessment as they became available. At the same time, however, these and earlier studies that EPA relied on became less current with the passage of time. Overall, while EPA's draft reassessment report has advanced the state of knowledge on dietary exposure to dioxins in the United States, the extent to which the estimate accurately reflects current average daily exposure is not known. EPA acknowledges the need for additional research on dietary intake, identifying a number of data limitations associated with the estimates it developed in its October 2001 draft report. Future efforts could eliminate most of the food data limitations of the reassessment. Such efforts could include periodic, comprehensive food surveys that analyze samples of the most commonly eaten food products in each type of food studied, with samples collected within the same time frames and analyses performed using standardized methodologies. Further, when they become available, the results of the ongoing EPA/USDA follow-up study on dioxin levels in beef, pork, and poultry should provide quantitative information on the changes, if any, in dioxin levels in these foods from the mid-1990s to the present. We provided EPA with a draft of this report for its review and comment, and we provided WHO with the draft segment comparing EPA's and WHO's assessments of dioxins. In commenting on the draft report, EPA's assistant administrator, Office of Research and Development, said that the report was well researched and written and provided a balanced treatment of the information.
However, EPA believed that additional information on some of the data limitations discussed in the section on EPA's estimates of the dietary intake of dioxins would better enable readers to evaluate the impact of the data limitations. Where appropriate, we revised the report to reflect the views EPA presented in its comments. For example, we added information concerning the strength of the food concentration data used in estimating national mean levels of exposure to dioxins, the sampling of animal fat rather than meat and poultry products sold in grocery stores, and the likelihood that current dioxin levels in food have significantly declined since the mid-1990s. EPA's comments and our evaluation of them are provided in appendix V. In commenting on the draft segment comparing EPA's and WHO's analyses, a senior advisor on health and environment, Department of Protection of the Human Environment, World Health Organization, said the report was well written and accurate. To describe the types and extent of data EPA used to reassess human dietary exposure to dioxins in the United States, we reviewed the relevant portions of the October 2001 draft reassessment, the 1994 and 2000 drafts that were peer reviewed, and the initial 1985 health risk assessment. We also reviewed EPA documents and journal articles on the agency's national sampling of beef, pork, and poultry, and information about the other samples used for milk, eggs, fish, dairy products, and vegetable fats. We discussed the samples and methodology issues about them with EPA officials and contractor staff. We did not validate or verify EPA's estimates of dietary exposure to dioxins. To compare EPA's objectives, processes, analytical methods, and conclusions with those of WHO, we analyzed EPA's October 2001 draft reassessment report and various WHO publications on its objectives, analyses, and conclusions. We discussed the similarities and differences with EPA and WHO officials. To determine the extent to which EPA's draft dioxin reassessment reflects the views of two independent peer review panels, we analyzed the recommendations, suggestions, and concerns in the reports by the EPA Science Advisory Board's dioxin reassessment review subcommittee panel—on reviews performed in 1994 and 2000—and a report from another independent peer review panel on its July 2000 review. Recommendations of the Board panel were noted in bold print in the executive summary, and we considered other statements to be "suggestions" when they were the consensus opinion of the panelists or the opinion of a majority or of some of the panelists. We considered the July 2000 panel's statements to be "recommendations," "suggestions," or "concerns" when those particular words were used in the executive summary and where the statements reflected either a consensus or the opinion of a majority or of some of the panelists. We also reviewed EPA documentation to determine the changes EPA has made to its draft reassessment as a result of the peer reviews, including comparing the agency's previous drafts of the reassessment with each other and reviewing the written responses to the July 2000 panel's recommendations and suggestions. We also met with EPA officials to identify the agency's responses to the panels' recommendations, suggestions, and concerns, including discussing those with which it disagreed. We conducted our work from July 2001 through March 2002 in accordance with generally accepted government auditing standards.
We will send copies of this report to the administrator, EPA, and make copies available to others who request them. This report will also be available on GAO's Web site (www.gao.gov). If you or your staff have questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix VI.

Appendix I: Major Milestones in the EPA and WHO Dioxin Risk Assessment Efforts

[A timeline figure appeared here. Its recoverable EPA milestones include beginning the reassessment in 1991; holding public meetings, convening peer-review workshops on procedures for estimating risks associated with dioxins, and completing external review drafts during the 1990s; issuing interim toxic equivalency factor (TEF) guidance; the Science Advisory Board's reviews and an independent panel of peer reviewers' review of major segments of EPA's revised draft report; and preparation of a final internal review draft report incorporating revisions in response to the panels' recommendations and comments, with release pending. Its recoverable WHO milestones include a 1987 experts' meeting in Bilthoven; monographs on the carcinogenicity of some dioxins; the meeting in Stockholm, Sweden, at which experts derived consensus TEFs for dioxins for human, fish, and wildlife risk assessments; the consultation in Geneva, Switzerland, to reevaluate the risks to humans and the tolerable daily intake for dioxins; and the meeting in Rome, Italy, to evaluate risks associated with dioxins in foods.]

Appendix II: Comparison of EPA and WHO Conclusions on Dioxins

Adverse health effects associated with exposure to dioxins

Exposure to dioxins can produce a wide variety of effects in animals (including cancer and noncancer health effects) and might produce many of the same effects in humans. Exposure to dioxins may be linked to a variety of adverse effects: short-term human exposure to high levels of dioxins may result in skin lesions (such as chloracne) and altered liver function, and long-term exposure is linked to impairment of the immune system, the developing nervous system, the endocrine system, and reproductive functions. EPA characterizes dioxin and related compounds as carcinogenic and developmental, reproductive, immunological, and endocrinological hazards and makes the following specific points. Exposure to TCDD leads to an increased risk of generalized cancers at multiple organ sites, including lung cancer: human data from occupational or accidental exposure has produced evidence of increased risks for all cancers combined, along with less strong evidence of increased risks for cancers of particular sites. Long-term noncancer consequences of exposure to TCDD in adults include chloracne, elevated gamma glutamyl transferase levels, and altered testosterone levels.
Among the possible noncancer consequences of exposure to TCDD or other dioxin and dioxin-like compounds are dermatological conditions such as chloracne; liver diseases; and kidney, nervous system, and lung disorders. Although available data suggest an association between TCDD exposure and other adverse outcomes, further study is required of circulatory and heart disease, diabetes and glucose metabolism, reproductive and developmental outcomes, and immunologic disorders.

Mode of action through which exposure to dioxins can lead to adverse effects

Dioxins are structurally related and elicit their effects through a common mode of action—binding of dioxins to a cellular protein called the aryl hydrocarbon receptor. Binding to the aryl hydrocarbon receptor appears to be necessary for all well-studied effects of dioxins but is not sufficient, in and of itself, to elicit these responses. A broad variety of data has shown the importance of the aryl hydrocarbon receptor in mediating the biological effects of dioxins. The precise chain of molecular events by which the receptor elicits these effects is not yet fully understood. However, alterations in key biochemical and cellular functions are expected to form the basis for dioxin toxicity. TCDD and related compounds have a common mode of action in animals and humans. Therefore, there is no reason to expect, in general, that humans would not be similarly affected as animals at some dose. Experimental data indicate that TCDD and probably other polychlorinated dibenzo-p-dioxins (CDD) and polychlorinated dibenzofurans (CDF) are not direct-acting genotoxic agents (i.e., they do not directly affect genetic material). Dioxins act through the same mode of action in animals and humans.

Use of the toxicity equivalency (TEQ) concept

EPA and the international scientific community have adopted TEQ of dioxins as prudent science policy. The complex nature of CDD, CDF, and polychlorinated biphenyls (PCB) mixtures complicates the risk evaluation for humans. The concept of TEFs has been developed to facilitate risk assessment and regulatory control of exposure to these mixtures. (EPA recommended that the TEFs derived by WHO in 1997—published in 1998—be used to assign TEQ to complex environmental mixtures for assessment and regulatory purposes.) (WHO derived updated consensus TEFs for 29 dioxins in 1997, with the results of the meeting published in 1998. Subsequent WHO assessments of dioxins used this updated set of TEFs for their calculations.)

Whether dioxins are human carcinogens

Complex mixtures of dioxins are highly potent, "likely" human carcinogens. Dioxins are strong cancer promoters and weak direct or indirect initiators and are likely to present a cancer hazard to humans. Because dioxins and related compounds always occur in the environment and in humans as complex mixtures of individual congeners, it is appropriate that the characterization apply to the mixture. Individual congeners can also be characterized as to their carcinogenic hazards. TCDD is a human carcinogen (group 1), considering limited evidence in humans, sufficient evidence in experimental animals, and evidence of a mode of action that functions the same way in humans as in experimental animals. Other dioxins are not classifiable as to their carcinogenicity to humans (group 3).
Depending on the specific compound evaluated, the International Agency for Research on Cancer (IARC) noted that the available data provided inadequate evidence for carcinogenicity in humans or limited evidence, inadequate evidence, or evidence suggesting lack of carcinogenicity in experimental animals. TCDD is best characterized as "carcinogenic to humans." Based on the weight of all evidence (human, animal, and mode of action), TCDD meets the criteria that allow EPA and the scientific community to accept a causal relationship between TCDD exposure and cancer hazard. Other individual dioxin-like compounds are characterized as "likely to be human carcinogens" primarily because of the lack of epidemiological evidence associated with their carcinogenicity, although the inference based on TEQ is strong that they would behave in humans as TCDD does. Other factors, such as the lack of compound-specific chronic animal studies, also support this characterization.

Whether there appears to be a "threshold" or safe dose of dioxins that would not cause adverse effects

The supposition of a response threshold for receptor-mediated effects (such as those associated with dioxins' binding to the aryl hydrocarbon receptor) is a subject for scientific debate. The same receptor occupancy assumption of the classic receptor theory is interpreted by different parties as support for and against the existence of a threshold. TCDD does not affect genetic material, and there is a level of exposure below which cancer risk would be negligible. Although TCDD is classified by IARC as a human carcinogen, it is not considered to be a direct-acting carcinogen. Therefore, a threshold approach could be used in the hazard assessment. Empirical dose-response data from cancer studies do not provide consistent or compelling support for threshold models and are insufficient to move from EPA's default policy of linear extrapolation (an approach that assumes there is no threshold of exposure without risk). A tolerable intake can be established for TCDD on the basis of the assumption that there is a threshold for all effects, including cancer. Because cancer occurred in animals at higher body burdens than other toxic effects, establishing a tolerable intake on the basis of noncancer effects would also address any carcinogenic risk. Threshold levels of lifetime exposure to dioxins that would cause toxic noncancer effects may be below the current level of background exposure and body burdens, and, therefore, the potential exists for noncancer risk at background exposure.

Whether it is useful to set a dose or exposure level that the public could experience for a lifetime without expectation of harm

EPA did not calculate reference dose or reference concentration values in this reassessment as it generally does for noncancer effects in other assessments. Instead, EPA chose to characterize the margins of exposure between estimated actual human exposure and the exposure levels at which studies indicated various adverse noncancer effects could occur. The WHO 1998 consultation set daily limits on exposure levels of dioxins for noncancer effects, a tolerable daily intake. The Joint Expert Committee on Food Additives of the United Nations' Food and Agriculture Organization and WHO set a provisional tolerable monthly intake limit on exposure levels to dioxins, again focusing on noncancer effects.
The Committee participants felt that it was more appropriate to express the tolerable intake on a monthly rather than a daily basis because of the long half-life of dioxins (i.e., the body's stored dioxins decline slowly, with only half of the accumulated dioxins disappearing over about 7 years).

Human exposure to dioxins has occurred through background exposure, contamination of foods, occupational exposure, and exposure associated with industrial accidents. An increased background exposure can result from either a diet that favors consumption of foods high in dioxin content or a diet that is disproportionately high overall in animal fats. Human exposure to dioxins may occur through background (environmental) exposure and accidental and occupational contamination. Over 90 percent of human background exposure is estimated to occur through the diet, with food from animal origin being the predominant source. Most (more than 95 percent) background exposure results from the presence of minute amounts of dioxins in dietary fat, primarily from the commercial food supply. Recent studies show decreasing levels of dioxins in food and consequently a significantly lower dietary intake of these compounds. The average dioxin tissue level for the general U.S. adult population appears to be declining. Five compounds account for most (about 80 percent) of the toxicity in human tissue concentrations.

Risks of adverse health effects at the general public's current levels of exposure to dioxins

In general, EPA's assessments indicated that dioxins pose risks at lower levels of exposure than previously estimated and that the general public's current levels of exposure are at or near those that have been observed to cause harm. In general, WHO's assessments also indicated that dioxins pose risks at lower levels of exposure than previously estimated and that the general public's current levels of exposure are at or near those that have been observed to cause harm. EPA estimates that the upper bound cancer risk at average current background body burdens exceeds 10^-3 (i.e., the upper bound on general population lifetime risk for all cancers might be on the order of 1 in 1,000 or more). However, this is an upper bound estimate, so the true risks are likely less than that and may be zero for most people. In 1985, EPA's estimate of the cancer slope factor based on exposure to TCDD was 1.6 x 10^-4 per picogram of TCDD per kilogram of body weight per day (pgTCDD/kgBW/day). In 1990, WHO experts had established a tolerable daily intake for TCDD of 10 picograms per kilogram of body weight. In 1998, the WHO consultation established a tolerable daily intake for dioxins at a range of 1-4 TEQ picograms per kilogram of body weight and noted that subtle effects may already occur in the general population at current background levels of 2 to 6 picograms per kilogram of body weight. The consultation stressed that the ultimate goal is to reduce human intake levels below 1 picogram TEQ per kilogram of body weight per day. EPA's current upper bound slope factor for estimating human cancer risk on the basis of human data is 1 x 10^-3 per pgTCDD/kgBW/day. EPA's current upper bound slope factor for estimating human cancer risk on the basis of animal data is 1.4 x 10^-3 per pgTCDD/kgBW/day.
In 2001, the Joint Expert Committee on Food Additives of the United Nations' Food and Agriculture Organization and WHO determined that a monthly tolerable intake level made more sense than a daily level and established a provisional tolerable monthly intake of 70 picograms per kilogram of body weight per month (equivalent to about 2.33 picograms per kilogram of body weight per day) for dioxins. EPA estimated that U.S. residents are exposed daily to about 1 picogram of dioxins per kilogram of body weight, which is close to the level that caused biological changes in animals. EPA noted that the margins of exposure between estimated actual human exposure and the exposure levels at which studies indicated adverse noncancer health effects could occur were "considerably less than typically seen for environmental contaminants of toxicologic concern." The various WHO entities did not calculate quantitative cancer risk estimates for the additional cancer risk that dioxins might pose to the general population. However, WHO did explore the calculation of a cancer "benchmark dose" (the dose or body burden estimated to result in a 1-percent increase in cancer mortality) through various models. On the basis of data from three industrial exposure studies, WHO estimated that the body burden of dioxins associated with a 1-percent excess cancer risk over a lifetime was 3 to 13 nanograms per kilogram of body weight, which is associated with a daily dose of dioxins in the range of 2 to 7 picograms per kilogram of body weight per day.

Children's risks from dioxins and related compounds may be greater than those of adults, but more data are needed to fully address the issue. Certain population subgroups are at greater risk from dioxins. Fetuses are most sensitive to dioxin exposure, and newborns may also be more vulnerable to certain effects. Some individuals or groups of individuals may be exposed to higher levels of dioxins because of their diets or occupations. There may be individuals in the population who might experience a higher cancer risk on the basis of genetic factors or other determinants of cancer risk not accounted for in epidemiologic data or animal studies. In particular, a very small percentage of the population (less than 1 percent) may experience risks that are 2 to 3 times higher than the general population estimate if their individual response is at the upper bound and they are among the most highly exposed based on dietary intake of dioxins.

EPA sought expert opinions from both a July 2000 panel of independent peer reviewers and a November 2000 Science Advisory Board expert panel on several key questions that pertain to the content of the documents under review. The questions are classified into 11 general topics. Most of the questions are the same for both panels. However, according to usual Science Advisory Board practice, EPA staff, Board staff, and the chair of the Board's dioxin reassessment review subcommittee jointly developed additional questions for the Board's review. Tables 6 and 7 show the topics and questions addressed by the July 2000 panel and the Board panel, respectively.

EPA generally addressed the peer review panels' comments by performing additional analyses, adding or revising text, or identifying comments as related to EPA's long-term research goals. In some instances, EPA thought that the reassessment already addressed the panel's comment. The panels classified their recommendations, suggestions, and concerns, and EPA responded to each.
Tables 8 and 9 show the comments made by the panels and EPA's response or action taken. The following are GAO's comments on EPA's letter dated April 17, 2002.
1. The discrepancies we identify between the Integrated Summary and supporting chapters appear in the October 2001 reassessment documents that EPA distributed for internal agency review. We identified them primarily to inform readers of our report of the source of the information we cite. For example, a reader of the Integrated Summary would find (outdated) information on 9 food types, whereas we are citing information on 10 food types that is provided in the supporting chapters of EPA's reassessment documents and that EPA officials told us is correct.
2. Throughout the section of our report on EPA's estimate of dietary exposure to dioxins, we attribute the identification of the limitations to EPA's draft reassessment report.
3. Our report did not characterize the significance of the limitations EPA identified in its reassessment documents. We have added to the report EPA's opinion that these limitations do not represent major weaknesses in its estimates of dietary exposure to dioxins.
4. The statement in our report that the available studies generally were not designed to estimate national exposures is derived from page 76 of EPA's October 2001 Integrated Summary draft. In this document EPA says: "The amount and representativeness of the data vary, but in general these data were derived from studies that were not designed to estimate national background means." In its written comments, EPA says that most of the dietary exposure it estimated was derived from studies specifically designed to estimate national exposures. In support of this point, EPA says that 66 percent of the estimated exposure to dioxins is from eating beef, pork, poultry, milk, and dairy products, and that these studies were designed to estimate national exposures. (We note that these studies cover 5 of the 10 food types on which EPA based its exposure estimates.) Importantly, our draft report stated that the studies on beef, pork, and poultry were based on the first statistically designed national surveys of dioxin levels in these foods and that the milk samples upon which both the milk and dairy estimates were based came from a national survey with samples collected from sampling stations in a majority of the states. However, while our review of EPA's milk survey design plan indicated the milk samples were intended to assess the levels of dioxins in the general milk supply of the United States, the survey design document also stated that (1) the milk would be collected from dairy plants around the United States that represent approximately 20 percent of the nation's milk supply and (2) the survey was not designed to be statistically rigorous—that is, it was not intended to randomly sample milk in such a way that the results could be generalized to the full milk supply with a known degree of precision. Thus, we concluded that EPA's statement in the Integrated Summary—that the studies covering the 10 food types generally were not designed to estimate national exposures—was accurate. In light of EPA's comments and the fact that the milk samples used to estimate milk and dairy exposures did have national coverage, we have revised the report to indicate that EPA acknowledges that some of the available studies were not designed to estimate national average exposures.
5.
We revised the description of the fat samples from "inedible fat samples" to cuts of fat, such as back fat on cattle, that generally are not consumed by the U.S. public.
6. We understand that there is variability associated with measurements at the picogram level. Nonetheless, we continue to believe that the variability identified among the five samples studied indicates that using fat samples not consumed by the public may overstate or understate to some extent dioxin levels in beef, pork, and poultry products sold to the public.
7. In its comments, EPA stated that it believes that sufficient information is available to support a conclusion that, in spite of the emission reduction of the late 1990s, the exposure estimates of the draft reassessment are a reasonable characterization of contemporary exposure. We have revised the report to include EPA's opinion and the reasons it cited in support of its view that the emission reduction in the late 1990s does not significantly affect the current estimate of general population exposure. However, because EPA does not have data on dioxin emissions after 1995, we cannot evaluate EPA's conclusion.
8. EPA stated that it plans to delete information on the variability in dairy concentration data from the reassessment report, and we have therefore deleted this point from our report.
9. We understand that the contamination of the two samples eliminated from EPA's estimate was found to stem from a localized ball clay contamination. However, we continue to believe that because of the lack of information on the incidence of dioxin contamination in animal feeds as well as on the potential sources of such contamination, it is not clear that the poultry samples with high concentrations of dioxins were anomalies. For example, this animal feed contamination problem was identified as a result of the first national survey of only 80 poultry fat samples. We acknowledge that a decision to exclude apparently anomalous information entails professional judgment. However, because the incidence of contamination of animal feed is unknown, we believe that it is important for users of the dioxin reassessment to understand the judgments EPA made in estimating dietary exposure.
10. In the draft report, EPA does not provide information on the assumptions and analyses used to estimate the average fat percentages for pork and poultry. However, EPA does provide some information on how it estimated the fat percentage for beef. The fat percentage estimates affect the exposure estimates, and we believe this information should be included in the reassessment report. In its comments to us, EPA stated that the agency is considering adding information about the pork and poultry estimates to the report. We are therefore deleting references to this point in our report.
11. We deleted the phrase "assembled by EPA" to be consistent with information we provide in the body of the report that the peer review panelists were selected by an independent contractor.
12. We have revised this statement to reflect the fact that most (rather than all) of the other dioxins have TEFs of 0.1 or lower.
13. We clarified that TEFs apply to all effects, not just those for which relative potency data were available.

Other key contributors to this report include Timothy Bober, Greg Carroll, Nancy Crothers, Greg Wilmoth, and Carrie Wheeler.
Dioxins—chemical compounds that share structural and biological characteristics—have been linked to human illnesses, including cancer. Often the byproducts of combustion and industrial processes, complex mixtures of dioxins enter the food chain and human diet through emissions into the air. The Environmental Protection Agency (EPA) and the World Health Organization (WHO) noted the potential human health risks of dioxins in the 1970s when animal studies showed them to be among the most potent cancer-causing chemicals. EPA derived its estimates of human dietary exposure to dioxins in the United States from (1) chemically analyzed samples of 10 food types, (2) toxicity estimates of levels of individual dioxins in these foods, and (3) estimates of the quantities of these foods consumed by Americans. To develop more reliable national estimates of dietary exposure, EPA incorporated into its analysis some food studies that were nationally representative. Although both EPA and WHO have assessed the human health risks of dioxins during the last decade, some of their objectives and processes have differed. Nonetheless, the analytical methods used and the conclusions reached have much in common. A major difference in the assessments is whether there are threshold levels below which exposure to dioxins would pose a negligible risk of cancer. EPA assumed there is no safe threshold level for cancer effects, but WHO assumed there is. EPA's draft reassessment report reflects the recommendations and suggestions provided to the agency by the two most recent independent peer review panels. The panels, one consisting of 12 independent reviewers and the other convened by EPA's Science Advisory Board, concurred with many key assumptions and approaches that EPA used.
A range of programs and tax expenditures assist individuals and families. Programs under the jurisdiction of the Subcommittee on Human Resources can roughly be grouped under three missions for children and working-age adults: providing income support, providing child care, and providing child welfare services. Other key programs address other needs of these households, such as Medicaid, housing, nutrition assistance, and Workforce Investment Act (WIA) employment and training programs. These programs fall under the jurisdiction of four other House committees. In addition, a wide array of tax expenditures assist individuals and families in these areas. Figure 1 shows an illustrative set of programs and tax expenditures, including the Dependent Care Tax Credit, the Promoting Safe and Stable Families program, Workforce Investment Act (WIA) programs, the Earned Income Tax Credit (EITC), and Supplemental Security Income (SSI). Various federal agencies are responsible for the oversight of these programs and tax expenditures, as shown in figure 2. In addition, while the federal government is involved in some aspects of the design and funding of each of these supports, state governments are sometimes responsible for directly administering the benefits and services. For example, while SSI is directly administered by federal employees within the Social Security Administration, unemployment insurance (UI), TANF, subsidized child care, and various other programs are overseen by state governments and directly administered by state and, in some cases, local government employees as well as by nonprofit and for-profit entities. Across some of the programs and tax expenditures under the jurisdiction of the Subcommittee and Committee, key characteristics such as the population eligible for each and funding design vary. (See table 1.) For example, individuals and families are sometimes eligible for specific federal tax expenditures based on their employment or family-related circumstances, such as with an adoption. Further, SSI and TANF both provide monthly cash benefits to low-income people, but for SSI, individuals must be aged, blind, or disabled, and for TANF, a family must include dependent children. In terms of funding design, SSI benefits and the tax expenditures are provided to all who apply and meet eligibility requirements. So too is the case with the EITC, which has a refundable portion for those without enough income to owe income taxes. Similarly, federal funding for monthly payments to support children in foster care, adoption, and kinship guardianship placements is also not capped and is dependent on the number of children eligible for such assistance. On the other hand, the federal funding level is fixed for programs such as TANF and subsidized child care and does not increase with the numbers of eligible people who apply. With this array of human services programs, a family and its members may receive benefits or services from one or more of these programs. Interactions between the programs vary, and in some cases, the programs are specifically designed to provide multiple sources of support for individuals and families. For example, a low-income family may be eligible for and receive income support through TANF, EITC, and Child Support Enforcement, as well as subsidized child care assistance. However, at the same time, another family may be eligible for only one of those supports, such as EITC, due to income or other eligibility requirements.
Also due to varying eligibility criteria, a family may have several members who are receiving income support through TANF while another member receives such support through SSI. While these programs provide important supports and services to millions of households each year, they comprise a patchwork of support developed over time and under different circumstances. Some programs were begun under the original Social Security Act passed in 1935 and have evolved over time. Congress has added other programs to meet emerging needs. For example, to encourage more low-income women to move into the workforce, Congress created child care subsidy programs designed to support parents' work efforts. Today, our work has shown this patchwork of programs to be too fragmented and overly complex—for clients to navigate, for program operators to administer efficiently, and for program managers and policymakers to assess program performance. People seeking aid often must visit multiple offices and provide the same information numerous times. The routes by which people access services vary by program, state, and sometimes locality, and can be cumbersome for those seeking aid from more than one program. Low-income individuals and families often receive aid from multiple programs to meet their income support, health, nutrition, employment and training, and housing needs. Typically, clients may access several programs through one office that administers TANF, the Supplemental Nutrition Assistance Program (SNAP), and Medicaid. However, clients may need visits to other offices to apply for housing assistance and SSI, while they must file a tax return with the Internal Revenue Service (IRS) for the EITC. Clients typically have to provide the same basic information and required documentation multiple times if they are trying to access more than one program. Some states and localities have moved toward more use of call centers and online applications, though this varies among the programs and states. The complexity and variation in eligibility and other rules and requirements among the programs have contributed to time-consuming and duplicative administrative processes that are inefficient and add to overall costs. Separate eligibility processes for some programs result in considerable duplication of administrative activities because caseworkers in different offices collect and document much of the same personal and financial information. Even when programs are administered jointly, each has its own eligibility rules and reporting requirements, limiting the extent to which joint administration reduces administrative costs. In our previous work, state and local officials reported that this complicated the work required of caseworkers to determine eligibility and also contributed to errors. Excessive time spent working through complex procedures can consume resources and diminish staffs' ability to focus on other activities that might improve service quality or improve program integrity. In addition, other complex processes occur to meet federal cost allocation requirements. For example, we heard from some local staff that they track the amount of time they spend working on different programs and report this information to financial managers. Local financial managers then determine what portion of staffs' time is defined as administrative costs in each of the programs and charge the programs appropriately. Providing similar services through separate programs can lead to additional inefficiencies.
Providing similar services through separate programs can lead to additional inefficiencies. We recently reported on the potential overlap and duplication in employment and training programs. Specifically, we found that TANF, Workforce Investment Act Adult (WIA Adult), and Employment Service (ES) programs often maintain separate administrative structures to provide some of the same services, such as job search assistance, to low-income individuals. Some individuals may be receiving similar services from each program, although the extent to which this is occurring is not known. We recommended that the Departments of Labor and Health and Human Services (HHS) disseminate information on state efforts to consolidate administrative structures and colocate services. Both agencies agreed with our recommendation, and we will follow up on their efforts in the future. While we have not reviewed all of the accountability measures for the relevant programs, we have identified some information gaps that hinder oversight of some programs. For example, our work on the TANF program has shown that work participation rates—a key performance measure for TANF—do not appear, as currently measured and reported, to be achieving the intended purpose of encouraging states to engage specified proportions of TANF adult recipients in work activities. In addition, although states have shifted a large share of their TANF funds from cash assistance to other programs, supports, and services such as child care subsidies and child welfare, existing oversight mechanisms continue to focus on cash assistance. As a result, there are gaps in the information available at the federal level on how many families received TANF services and on how states have used funds to meet TANF goals. While a key feature of the TANF program is flexibility in the use of federal funds, this flexibility must be balanced with mechanisms to ensure state programs are held accountable for meeting program goals. Information gaps hinder decision makers in considering the success of TANF and what trade-offs might be involved in making any possible changes to TANF through the reauthorization process. In addition, in our work on potential duplication between TANF and WIA, we noted that a lack of data hindered our ability to assess the extent to which individuals may have received services from both programs. We also identified information gaps that make it difficult to assess fully the federal role in supporting child care assistance for families. Such an assessment is also complicated by the use of tax expenditures in supporting families’ child care needs. With the flexibility allowed under TANF, states have used a significant portion of their TANF funds to augment their child care subsidy programs. However, states do not need to report on the numbers or types of families provided TANF-funded child care, leaving an incomplete picture of the number of children receiving federally funded child care subsidies, which would be useful information for policymakers. In addition, because tax expenditures do not compete overtly with other priorities in the annual budget process, policymakers do not typically consider tax expenditures along with other programs when making budgetary and programmatic decisions. Nevertheless, considerable resources are provided to families through the Dependent Care Tax Credit for their child care and other dependent care needs. A more complete picture of the federal role in child care subsidies and who benefits would include tax expenditure information.
We identified the importance of paying more attention to tax expenditures in our recent work on opportunities to reduce duplication in federal government programs. The need for improving the administration of these programs has been voiced recurrently for the past several decades. Stretching as far back as the 1960s, studies and reports have called for changes to human service programs, and we issued several reports during the 1980s that focused on welfare simplification. Over the years, Congress has taken many steps to simplify programs and procedures. For example, in 1996 Congress replaced the previous welfare program with the TANF block grant and consolidated several child care programs into one program, which our previous work has shown provided states with additional flexibility to design and operate programs. In addition, numerous pilot and demonstration projects have given particular states and localities flexibility to test approaches to integrating and coordinating services across a range of human service programs. Some states have taken advantage of recent changes and additional flexibility granted by the federal government to simplify eligibility determination processes across programs. For example, states may automatically extend eligibility to SNAP applicants based on their participation in the TANF cash assistance program—a provision referred to as “categorical eligibility.” While the need for simplification of program policies and other improvements has been widely acknowledged, there has also been a general recognition that achieving substantial improvements in this area is exceptionally difficult. Many of these efforts have had limited success due, in part, to the considerable challenges that streamlining program processes entails, given the number of congressional committees and federal agencies involved in shaping human service program policies. An additional challenge to systematic policy simplification efforts is the lack of information on the costs and effects of these efforts. Streamlining policies could expand client access and increase caseloads and program costs, but it could also limit access for particular populations, depending on which policies were adopted. In addition, no definitive information exists to demonstrate the type and extent of changes that might result in reduced administrative costs or to demonstrate how strategies might work differently in different communities. To help address these issues, in 2001 and 2006, we recommended that Congress consider authorizing demonstration projects designed to streamline eligibility determination and other processes across federal human services programs. In the Consolidated Appropriations Act, 2010, Congress appropriated funds for pilot projects that, in part, demonstrate the potential to streamline administration or strengthen program integrity. Using the funds appropriated by Congress, the Partnership Fund for Program Integrity Innovation funds pilot projects that test and evaluate ideas for improving federal assistance programs through the following measures: reducing improper payments, improving administrative efficiency, improving service delivery, and protecting and improving program access for eligible beneficiaries. The current environment calls for continued and increased attention to this set of programs and to opportunities to reduce inefficiencies.
At both the federal and state levels of government, short-term and longer-term budgetary conditions require review of all federal programs and activities and efforts to make government more efficient and effective. Based on our review of our past and recent work, we have identified three approaches that warrant increased attention in this environment. 1. Simplifying policies and processes. Simplifying policies and processes—especially those related to eligibility determination processes and various federal funding sources—could potentially save resources, improve productivity, and help staff focus more time on performing essential program activities, such as providing quality services and accurate benefits to recipients. In our 2006 report, we noted that many believe that being able to draw funds from more than one federal assistance program while simplifying the administrative requirements for managing those funds would ease states’ administrative workload and reduce administrative spending. This would also help service providers better meet the complex needs of at-risk families. Such efforts are in keeping with the February 28, 2011, Presidential Memorandum issued to the heads of executive departments and agencies on the subject of administrative flexibility, lower costs, and better results for state, local, and tribal governments. Another way to streamline programs is consolidation. Consolidation has been a useful approach in the past to easing the burdens of federal rules and requirements, though care must be taken to ensure intended target groups still have their needs met. In addition, adequate accountability measures can be challenging to design. 2. Facilitating technology enhancements. Facilitating technology enhancements across programs may save administrative and benefit costs by creating more efficient processes and improving program integrity. Our previous work indicates that the federal government can help simplify processes and potentially reduce long-term costs by facilitating technology enhancements across programs and in states. Technology plays a central role in the management of human service programs, and keeping up with technological advancements offers opportunities for streamlining eligibility processes, providing timely services, and improving program integrity. Along with technology enhancements, data-sharing arrangements, where permitted, allow programs to share client information that they otherwise would each collect and verify separately, thus reducing duplicative effort, saving money, and improving integrity. For example, by receiving verified electronic data from SSA, state human service offices are able to determine SSI recipients’ eligibility for SNAP benefits without having to separately collect and verify applicant information. According to officials we spoke with, this arrangement saves administrative dollars and reduces duplicative effort across programs. We also recently reported that more data matching of applicant information with existing databases could help prevent fraud in state Child Care and Development Fund (CCDF) programs. Progress on technology improvements could be further facilitated through greater collaboration across program agencies and levels of government as well as additional sharing of technology strategies among the states. For example, call centers and scanning of required documentation have been strategies used by some states to meet increasing workloads attributed to the weakened economy at the same time that the states faced tightened budgets.
3. Fostering state innovation and evaluation for evidence-based decision making. In our complex, decentralized intergovernmental system, states and localities have frequently served as laboratories that foster innovation and test approaches that can benefit the nation. Providing states and localities with additional demonstration opportunities would allow them to challenge the current stovepipes and open the door to new cost-efficient approaches for administering human service programs. Demonstration projects would allow for testing and evaluating new approaches that aim to balance cost savings with ensuring program effectiveness and integrity. The information from these evaluations would help the federal government determine which strategies are most effective without investing time and resources in unproven strategies. Congress can allow such approaches to thrive not only by giving states opportunities to test them but also by following up to identify and implement successful strategies. While it may be difficult to fully determine the extent to which observed changes are the result of the demonstration projects, such projects would be useful for identifying lessons learned and possible unintended consequences. Essential to all of these approaches is collaboration among many entities. We recently identified collaboration as a governmentwide management challenge. Achieving meaningful results in many policy and program areas requires some combination of coordinated efforts among various actors across federal agencies, with other governments at state and local levels, nongovernmental organizations, for-profit and not-for-profit contractors, and the private sector. Congress will increasingly need to rely on integrated approaches to help its decision making on the many issues requiring effective collaboration across federal agencies, levels of government, and sectors. In addition to collaboration, caution is urged in addressing any duplication and resulting inefficiencies in these programs that many individuals and families rely on. Because of the array of services provided to meet households’ various needs, it is not surprising to see various entities involved, with some fragmentation of administration, some overlap in populations served, and some duplication of services offered. These features may be warranted, for example, to ensure quality services are provided and certain populations are served. However, our work indicates that further exploration of the extent of fragmentation, overlap, and duplication is warranted to better identify ways to streamline and improve programs. We are happy to work with the Subcommittee to meet its needs in this area. We provided a draft of the reports we drew on for this testimony to the relevant agencies for their review, and copies of the agencies’ written responses can be found in the appendices of the relevant reports. Chairman Davis, this concludes my statement. I would be pleased to respond to any questions you, Ranking Member Doggett, or other Members of the Subcommittee may have. For questions about this statement, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Rachel Frisk, Gale Harris, Kathryn Larin, and Yunsian Tai.
Additional staff who contributed to this testimony include James Bennett, Susan Bernstein, Alexander Galuten, and Carla Rojas. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue, GAO-11-318SP, Washington, D.C.: March 1, 2011. Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies, GAO-11-92, Washington, D.C.: January 13, 2011. Child Care and Development Fund: Undercover Tests Show Five State Programs Are Vulnerable to Fraud and Abuse, GAO-10-1062, Washington, D.C.: September 22, 2010. Supplemental Nutrition Assistance Program: Payment Errors and Trafficking Have Declined, but Challenges Remain, GAO-10-956T, Washington, D.C.: July 28, 2010. Temporary Assistance for Needy Families: Implications of Recent Legislative and Economic Changes for State Programs and Work Participation Rates, GAO-10-525, Washington, D.C.: May 28, 2010. Child Care: Multiple Factors Could Have Contributed to the Recent Decline in the Number of Children Whose Families Receive Subsidies, GAO-10-344, Washington, D.C.: May 5, 2010. Domestic Food Assistance: Complex System Benefits Millions, but Additional Efforts Could Address Potential Inefficiency and Overlap among Smaller Programs, GAO-10-346, Washington, D.C.: April 15, 2010. Temporary Assistance for Needy Families: Fewer Eligible Families Have Received Cash Assistance Since the 1990s, and the Recession's Impact on Caseloads Varies by State, GAO-10-164, Washington, D.C.: February 23, 2010. Support for Low-Income Individuals and Families: A Review of Recent GAO Work, GAO-10-342R, Washington, D.C.: February 22, 2010. Highlights of a Forum: Ensuring Opportunities for Disadvantaged Children and Families, GAO-09-18SP, Washington, D.C.: November 13, 2008. Human Services Programs: Demonstration Projects Could Identify Ways to Simplify Policies and Facilitate Technology Enhancements to Reduce Administrative Costs, GAO-06-942, Washington, D.C.: September 19, 2006. Child Care: Additional Information Is Needed on Working Families Receiving Subsidies, GAO-05-667, Washington, D.C.: June 29, 2005. Means-Tested Programs: Information on Program Access Can Be an Important Management Tool, GAO-05-221, Washington, D.C.: March 11, 2005. Welfare Reform: Information on Changing Labor Market and State Fiscal Conditions, GAO-03-977, Washington, D.C.: July 15, 2003. Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified, GAO-02-58, Washington, D.C.: November 2, 2001.
The federal government, often in concert with states, provides assistance to millions of individuals and families each year through a multiplicity of programs. These programs play a key role in supporting workers who have lost their jobs, families with low incomes, and vulnerable children who have experienced abuse and neglect. However, given the fiscal pressures facing the federal government and the continued demands placed on assistance programs, it is critical that programs designed to serve those most in need provide benefits and services as effectively and efficiently as possible. In light of concerns about fragmentation, duplication, and overlap in government programs, this testimony addresses: (1) the key characteristics of some programs and tax expenditures that provide assistance to individuals and families; (2) problems in administering and providing services through multiple programs; and (3) actions that may help address these problems. We focused on programs under the jurisdiction of the Subcommittee on Human Resources and some related programs and tax expenditures for children and working-age adults; we developed an illustrative but not all-inclusive list of these programs. We relied on work conducted between 2001 and 2011, which employed an array of methodologies. These included surveys of federal and state officials; site visits to states and local areas; interviews with local, state, and federal officials; and analysis of agency data and documents. Various federal programs and tax expenditures exist to assist individuals and families by providing income support, child care, and child welfare services. Other programs help meet these households' needs in other areas, such as health and nutrition. Overall, several congressional committees as well as six federal agencies oversee these programs at the federal level, while federal agencies, state and local agencies, as well as for-profit and nonprofit entities directly provide services at the local level. Families can receive benefits from one or more of these programs. For example, a low-income family may be eligible for and receive income support through Temporary Assistance for Needy Families (TANF), the Earned Income Tax Credit (EITC), and Child Support Enforcement, as well as subsidized child care assistance. This array of programs plays a key role in supporting those in need, but our work has shown it to be too fragmented and overly complex--for clients to navigate, for program operators to administer efficiently, and for program managers and policymakers to assess program performance. Individuals often must visit multiple offices to apply for aid and provide the same information and documentation each time--a process that is cumbersome and inefficient. The complexity and variation in eligibility rules and other requirements among programs contribute to time-consuming and duplicative administrative processes that add to overall costs. Similar services are sometimes provided through separate programs, resulting in additional inefficiencies. For example, we recently reported that TANF, Workforce Investment Act Adult (WIA Adult), and Employment Service (ES) programs often maintain separate administrative structures to provide some of the same services and activities, such as job search assistance, to low-income individuals. In addition, gaps in information can hamper program oversight. Approaches such as simplifying policies, improving technology, and fostering innovation and evaluation can improve services and reduce costs.
Simplifying policies can improve productivity and help staff focus more time on activities such as ensuring the accuracy of benefits. Facilitating technology enhancements can streamline eligibility processes and improve program integrity. In addition, fostering state innovation and evaluation can help the federal government and policymakers determine which approaches are the most cost-effective and limit investment in unproven strategies. Because federal programs have evolved over time to meet various needs, it is not surprising to see multiple programs with some fragmentation of administration, some overlap in populations served, and some duplication of services offered. These features may be warranted, for example, to ensure quality services are provided and certain populations are served. However, our work indicates that further exploration of the extent of fragmentation, overlap, and duplication could help better identify ways to streamline and improve programs and to reduce inefficiencies.
The number of defined-contribution pension plans, especially 401(k)s, has been growing, and by 1993, they accounted for 88 percent of all pension plans and 61 percent of all active pension-plan participants. For many participants, a defined-contribution plan supplements another pension plan. A 401(k) pension, or salary-reduction, plan is a defined-contribution plan that allows participants to contribute, before taxes, a portion of their salary to a qualified retirement account. Investment income earned on 401(k) account balances accumulates tax-free until the individual withdraws the funds at retirement. However, participation in a 401(k) plan is voluntary, and contribution levels are determined by the individual but can be no larger than $9,500 per year. About 85 percent of 401(k) pension plans are the only pension plan sponsored by the employer, although the majority of 401(k) plan participants are covered by another pension plan. A recent study of selected 401(k) plans shows that worker participation rates for these plans vary from about 50 percent to over 90 percent. Participant contributions to 401(k) accounts, on average, are about 7 percent of earnings. To encourage participation in and contributions to these pension plans, plan sponsors may wholly or partially match employee contributions and provide education on the importance of retirement saving. In addition, over half of all 401(k) pension plans allow participants to borrow from their pension accounts. Borrowing from 401(k) pension plans is legally permissible; plan participants may borrow the lesser of $50,000 or one-half of their vested account balance. The term of the loan cannot exceed 5 years, unless the loan is used to purchase a primary home. Furthermore, the loans are generally offered at very favorable interest rates. A recent survey of 401(k) plans found that about 70 percent of the plans that allow borrowing charge an interest rate equal to or less than the prime rate plus one percentage point, while less than 10 percent charge an interest rate equal to the local bank’s lending rate. Repayments of loan principal are not tax-deductible, nor are interest payments unless the loan is secured by the borrower’s primary residence. Failure to repay the loan results in the outstanding loan balance being considered a taxable pension distribution. The borrower is then responsible for paying all taxes on the distribution plus a 10-percent early withdrawal penalty if the borrower is under 59-1/2 years old. Overall, about half of all firms with 100 or more employees that offer savings and thrift plans permit participant loans. Previously, GAO reported that 57 percent of the firms with 100 to 499 employees that offer a 401(k) plan permit participant loans. Similarly, 80 percent of firms with 500 to 4,999 employees and 46 percent of firms with 5,000 or more employees permit loans against 401(k) accounts. In addition, over 95 percent of 401(k) plans that offer loans have at least one plan participant with an outstanding loan. Pension-plan loan provisions, however, are controversial. Advocates argue that loan provisions are an incentive for lower-income workers to participate in pension plans where participation is voluntary. Furthermore, many 401(k) plan administrators think loan provisions also have a somewhat positive impact on participants’ contribution rates to pension accounts.
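The borrowing limits and default consequences described above reduce to simple arithmetic. The following is a minimal sketch of that arithmetic; the 28-percent marginal tax rate is a hypothetical figure for illustration only.

```python
# Sketch of the 401(k) loan rules described above: participants may borrow
# the lesser of $50,000 or one-half of the vested balance, and an unrepaid
# loan is taxed as a distribution plus a 10-percent penalty if the borrower
# is under 59-1/2. The marginal tax rate is a hypothetical assumption.

def max_loan(vested_balance: float) -> float:
    """Lesser of $50,000 or one-half of the vested account balance."""
    return min(50_000.0, vested_balance / 2)

def default_cost(outstanding_balance: float, age: float,
                 marginal_tax_rate: float = 0.28) -> float:
    """Tax plus penalty owed if the loan is not repaid."""
    penalty = 0.10 * outstanding_balance if age < 59.5 else 0.0
    return marginal_tax_rate * outstanding_balance + penalty

print(max_loan(60_000))          # 30000.0 -- half the balance binds
print(max_loan(140_000))         # 50000.0 -- the $50,000 cap binds
print(default_cost(20_000, 45))  # 7600.0  -- $5,600 tax + $2,000 penalty
```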
A survey conducted by the Employee Benefit Research Institute (EBRI) suggests that most workers think that participants should be able to withdraw retirement funds to pay for financial emergencies, to buy a house, or to pay for a child’s education. Workers may be more willing to save for retirement if they can have access to their savings for emergencies before retirement. Opponents of loan provisions argue that permitting participants to borrow from their retirement accounts works against the policy objective of enhancing retirement income. Almost half of the companies that do not permit 401(k) loans surveyed by William M. Mercer say that loan programs are contrary to plan philosophy. Almost 60 percent of employed respondents to a recent EBRI survey think about using their own retirement funds only at the time of retirement. Our analysis of the two databases on worker characteristics and pension-account activity shows that pension-plan borrowing increases participation in 401(k) plans (see app. II). However, a number of other factors, such as employer matching and size of firm, also influence participation and contribution amounts. Participation rates in plans with loan provisions are about 6 percentage points higher than in plans with no loan provisions (see fig. 1). Employer matching also increases participation rates, by about 20 percentage points depending on the match rate. These findings are consistent with the results of other research. Under the typical situation—where the employer contributes about half of what participants contribute—borrowing provisions plus employer matching increase participation by about 28 percentage points—from 55 percent to 83 percent. Our analysis also indicates that smaller firms tend to have slightly higher participation rates than larger firms. This may be because smaller firms more effectively target benefits to employee needs. In addition, a recent study found that the type and quality of information provided to employees on 401(k) plans may also be an important factor in encouraging employee participation in 401(k) pension plans. The impact of providing high-quality information appears to be greatest on workers with lower earnings. In our analysis of 401(k) plans, we also found that average annual employee contribution amounts are 35 percent higher in 401(k) plans with loan provisions than in 401(k) plans with no loan provisions. Employer matching also increases average contribution amounts in 401(k) plans, but not to the same extent as loan provisions. Depending on the employer match rate, we estimate that average annual employee contribution amounts are typically 10 to 24 percent higher than with no employer matching. The effect of both loan provisions and employer matching can be dramatic—an increase in average contribution amounts of over $600 per year (see fig. 2). Furthermore, one study suggests that providing high-quality pension-plan information to plan participants may also increase contribution levels to 401(k) plans. These results are further corroborated by our examination of individual participant contributions to 401(k) pension accounts. We estimate that a typical 401(k) participant covered by a pension plan with loan provisions and receiving an average employer match rate will contribute a higher proportion of earnings to his or her 401(k) account than an identical participant covered by a plan with no loan provision or employer matching—8.6 percent versus 4.9 percent (see fig. 3).
Plan participants with no outstanding plan loans are in a better financial position than borrowers. Plan borrowers, on average, have less family income, lower net worth, and more nonhousing debt than nonborrowers. Total family income of borrowers is 83 percent of that of nonborrowers (see table 1). The total net worth and nonhousing net worth of borrowers are also considerably lower than those of nonborrowers. In addition, retirement-account borrowers have about $1,500 more in nonhousing debt and have much higher nonhousing-debt-to-income ratios than nonborrowers. Nevertheless, our analysis indicates that 401(k) plan participants who also are covered by another pension plan are 50 percent more likely to have an outstanding loan than other participants (see app. II). Those with only a 401(k) pension plan—and, thus, with the most to lose by borrowing from their pension accounts—are less likely to do so. But participants who have recently been turned down for a loan from another source are almost 40 percent more likely to borrow against their pension account than other plan participants, holding all else equal. Black and Hispanic pension-plan participants are almost twice as likely as white participants to borrow against their pension accounts (see app. II), after controlling for income and assets. Minorities may have more difficulty obtaining commercial loans, including mortgages. Our results also indicate that other characteristics of an individual, such as age, gender, and marital status, do not significantly affect pension-plan borrowing. Pension-plan borrowers may use their pension-plan loans for living expenses, an automobile purchase, or housing (rather than borrowing from a commercial source to finance a home purchase), all of which could be considered necessities. A smaller proportion of pension-plan borrowers report having housing debt than nonborrowers, but a larger proportion report having education loans (see table 2). Attitudes toward borrowing money also differ between plan borrowers and nonborrowers. A larger proportion of plan borrowers think it is all right to borrow to finance an automobile, but a slightly smaller proportion think it is all right to borrow to finance education expenses. Almost half of the plan borrowers say it is all right to borrow money to cover living expenses, compared to about a third of nonborrowers. Less than 10 percent of each group think it is all right to borrow to finance luxury goods, such as jewelry, and less than 10 percent of plan borrowers think it is all right to borrow to cover the expenses of a vacation. This suggests that relatively few participants—whether borrowers or nonborrowers—would elect to borrow against their pension accounts to finance the purchase of nonnecessities. Pension-plan participants who borrow from their pension accounts risk having substantially lower pension balances at retirement. Under reasonable assumptions about pension-plan savings and borrowing behavior, a borrower could have 2 to 28 percent less pension income at retirement (see app. II). Many 401(k) participants have a substantial amount of their pension balances invested in the stock market and earn a relatively high rate of return. Pension-plan loans, however, generally carry a favorable interest rate, which may be much lower than the return on the pension-account investments.
Consequently, a borrower may earn less on the loan balance because he or she is making interest payments to the account at the relatively low loan rate rather than earning higher returns from investments, such as equities. How much pension income is lost depends on the size of the loan, the interest rate of the loan, the rate of return of pension-account investments, and whether or not the borrower continues to make pension contributions while repaying the loan. For example, if a borrower decides to forgo making pension-plan contributions during loan repayment, he or she could have over 20 percent less retirement income. Continuing pension-plan contributions while repaying the loan, on the other hand, could lead to a relatively small retirement income loss of less than 7 percent. People save for many reasons, including retirement, emergencies, home purchase, and a college education. Saving for retirement receives favorable tax treatment, but in the past, this came at the cost of the savings being virtually inaccessible until late in life. Since retirement savings could not be used for other purposes, people were reluctant to save in retirement accounts. Allowing participants to borrow against their 401(k) pension accounts for reasons unrelated to retirement can increase both participation in these plans and participant contributions. However, pension-plan borrowing is a two-edged sword: Individuals who were prompted to participate because of the borrowing provision increase their retirement savings, but individuals who opt to borrow lose some of the tax advantages of retirement savings and risk having less income at retirement. Our findings have implications for other sources of retirement income. Since participation in IRAs is voluntary, our results suggest that early access to IRA funds may increase both participation in and contributions to these accounts, but at the risk of lower retirement income. On the other hand, individual Social Security accounts—if created—would require participation, and contribution levels would be set by law. Consequently, individual Social Security accounts would not benefit from the positive aspects of borrowing provisions, but borrowing provisions would increase the risk of reduced retirement income. We asked pension-plan experts to comment on a draft of this report. They generally agreed with the study approach and results. They made a few technical suggestions, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties. We will make copies available to others on request. This report was prepared under my direction. Please contact Francis P. Mulvey, Assistant Director, at (202) 512-3592 or Thomas L. Hungerford, Senior Economist, at (202) 512-7028 if you or your staff have any questions concerning this report. To determine how pension-plan borrowing affects workers’ participation in and contributions to a pension plan and retirement income, we addressed the following questions: Does the ability to borrow from defined-contribution pension accounts increase participation in and contributions to 401(k) pension plans? What are the demographic and economic characteristics of workers who borrow from their pension accounts? What are the potential consequences for participants who borrow from their retirement accounts? To conduct our work, we analyzed two data sources.
The first, the 1992 Survey of Consumer Finances prepared by the Federal Reserve, provided a nationally representative individual-level sample. The second, the 1992 research database of Internal Revenue Service (IRS) Form 5500 reports, which are maintained by the Pension and Welfare Benefits Administration of the Department of Labor, provided a nationally representative plan-level sample. We also reviewed the relevant technical literature and talked to pension experts. The Survey of Consumer Finances randomly sampled 3,906 households regarding current and past employment by family members, assets and debts, and demographic information. Included in the current employment portion of the survey were detailed questions about pension participation. From the survey, we created a database containing information on respondents and their spouses who were working and between the ages of 18 and 64 at the time of the survey. We did not independently verify the accuracy of the Survey of Consumer Finances database because it is commonly used by researchers. We used the Survey of Consumer Finances to determine the effects of pension-plan borrowing on participation in and contributions to 401(k) pension plans and to describe the demographic and economic characteristics of workers who borrow from their pension accounts. For the analysis of the impact of borrowing on contributions to pension accounts, the subsample of the survey contained information on 477 workers who participate in a 401(k) pension plan. Since the dependent variable is a continuous variable that can be no less than zero, the multivariate regression estimation technique used is a tobit model. A tobit model takes into account the fact that the contribution rate can be no less than 0 percent, and the results from this model will not predict a contribution rate of less than 0 percent. Let C* be an individual’s desired contribution rate, which is affected by the individual’s characteristics. If the desired contribution rate is greater than zero, then the individual contributes to his or her pension account. If it is less than or equal to zero, then the individual does not contribute to his or her account. Formally, the model is written as C* = Xb + e, where the X vector contains the variables, the b parameters are to be estimated, and the last term, e, is the random error that captures the unobserved factors affecting the desired contribution rate. The dependent variable—that is, the observed contribution rate—is C = C* if C* > 0 and C = 0 if C* ≤ 0. To describe the demographic and economic characteristics of workers who borrow from their pension accounts, the subsample we used for our analysis contained information on 769 workers with defined-contribution pension plans that allow borrowing. We were interested in determining how participant characteristics affect the likelihood or probability that an individual has an outstanding loan against his or her pension account. The dependent variable for this analysis is a variable that is equal to one if the individual has an outstanding pension-account loan and equal to zero if he or she does not have an outstanding loan. The multivariate estimation technique used for the analysis is a logit model, which prevents predictions from falling outside the probability range of 0 to 1.
In the logit model, the probability that an individual will have an outstanding pension-plan loan is a function of the individual’s characteristics: P = f(Xb), where P is the probability, the X vector contains the variables or characteristics used in the estimation, the b parameters are to be estimated, and f is the cumulative logistic probability function. The parameter vector is estimated using maximum likelihood techniques. (In terms of a latent variable L*, the observed outcome is L = 1 if L* > 0 and L = 0 if L* ≤ 0; see Greene, Econometric Analysis, ch. 19, for the derivation of the likelihood function.) The primary variables of interest are whether or not a worker can withdraw funds from his or her pension account, the proportion of salary contributed to the defined-contribution pension-plan account, and whether or not the worker has an outstanding pension-plan loan. The Survey of Consumer Finances asks respondents who have defined-contribution pension plans, “Can you borrow against that account?” and “If you needed money in an emergency, could you withdraw some of the funds in that account?” If the answer to either of these questions was “yes,” we considered that plan as allowing participants to withdraw funds from their account before retirement. Respondents to the survey also were asked how much they contribute to their pension account. The contribution rate is the ratio of the respondent’s contribution to his or her salary. Other variables used in the analysis include sex, race, income, net worth, education, recent loan experiences, whether or not the individual is covered by another pension plan, and the natural logarithm of the number of years the worker has been covered by his or her pension plan (see table I.1). We used IRS’ Form 5500 research database for 1992 to determine the effects of pension-plan borrowing on participation in and contributions to 401(k) pension plans. Under the Employee Retirement Income Security Act of 1974, private employers must annually file a separate Form 5500 with the IRS for each employee pension plan. Each report contains financial, participant, and actuarial information. We did not independently verify the accuracy of the Form 5500 research database because this database is commonly used by researchers. The 1992 Form 5500 research database was obtained from the Pension and Welfare Benefits Administration of the Department of Labor. The plans selected for analysis had 100 or more participants and offered a defined-contribution plan with 401(k) features as the primary plan. Plans that were terminated during the year, or for which there was a resolution to terminate the plan, are not included in the sample. Furthermore, we selected only plans that had one or more active participants, that is, participants with pension accounts. The final sample used in the analysis contains 7,245 plans with an average of 337 active participants. The analysis consists of estimating two multivariate statistical models. The first model examined the impacts of firm and plan characteristics on participation in the plan. The dependent variable is the percent of employees eligible to participate who participate in the plan. The second model examines average employee contributions to the plan. Ordinary least squares regression techniques were used to estimate both models.
Formally, the models can be expressed as Y = Xb + e, where Y is the dependent variable, which is either the participation rate or the natural logarithm of average contribution amounts; the X vector contains the independent variables; the b parameters are to be estimated; and the last term, e, is the random error that captures the unobserved factors influencing the dependent variable. The first dependent variable is the ratio of active participants to the number of all employees eligible to participate in the plan. The second dependent variable is the natural logarithm of average employee contributions. The average employee contribution variable is the ratio of total contributions to the plan to the number of active participants. The independent variables used in the analysis are variables used by other researchers, such as employer matching and firm size, plus a variable denoting whether the plan participants had any outstanding loans (see table I.2). To determine the potential consequences of borrowing from a 401(k) account, we prepared a simulation model. We created a 35-year annual earnings series with a starting salary of $25,000. Annual earnings were allowed to grow with age and with inflation (assumed to be 3 percent). The contributions to the 401(k) account are 6.8 percent of annual earnings. We assumed that the 401(k) account balance earns an annual rate of return of 11 percent. The simulation involves a $40,000 loan against the pension account made in the 15th year and paid back over a 10-year period in equal installments. Pension-account balances were determined for several different loan interest rates. We created simulations under two extreme scenarios: (1) the borrower continues to make contributions to the 401(k) account while repaying the loan and (2) the borrower suspends making contributions to the 401(k) account while repaying the loan. This appendix contains supplementary tables of multivariate statistical results from the two databases that we used to conduct our work. The coefficient estimates from the regression model of the participation rate are shown in table II.1, along with estimates and standard errors for firm size, firm size squared, and other plan characteristics. The coefficient estimates indicate the effect of a change in an independent variable on a plan’s participation rate, holding the values of all other independent variables constant. For example, the coefficient estimate of 0.0591 for the borrowing variable indicates that plans that allow participant borrowing have participation rates that are about 6 percentage points higher than plans that do not allow borrowing. The regression results of the effects of pension-plan characteristics on average employee contribution levels are reported in table II.2. The coefficient estimates indicate the effect of a change in an independent variable on average contribution levels, holding the values of all other independent variables constant. For example, the coefficient estimate of 0.3682 for the borrowing variable indicates that borrowing provisions increase average employee contribution levels by 36.8 percent.
Table II.2 reports the regression results for average employee contribution levels (dependent variable: the natural logarithm of average employee contributions), with coefficient estimates and standard errors for firm size, borrowing provisions, employer matching, and other plan characteristics. The tobit-model results of individual pension-plan participants reported in table II.3 examine the influence of participant characteristics on contribution rates, holding all other characteristics constant. When a variable changes, it will have two effects on the overall contribution rate. First, for individuals already making a contribution, an increase in a variable with a positive coefficient estimate will directly increase the contribution rate. Second, for individuals who are not making contributions to their 401(k) accounts, an increase in this variable will increase the likelihood that they contribute to their plan account. The marginal impacts of a variable change reported in table II.3 include both these impacts on the expected value of the contribution rate. For example, the marginal impact of 3.0247 for the borrowing variable indicates that, on average, contribution rates of participants in plans with borrowing provisions are about 3 percentage points higher than for participants in other plans. Table II.3 reports the coefficient estimates and standard errors for participant characteristics, including age group, education, coverage by another pension plan, the natural logarithm of the employer match rate, the ability to withdraw funds from the pension account, family income, years covered by the defined-contribution plan, and family net worth. A logit model was estimated to determine the magnitude of the effects of participant characteristics on the likelihood of having an outstanding pension-plan loan (see table II.4). The coefficient estimates do not indicate the magnitude of the impacts on the likelihood of having an outstanding loan due to changes in the variables. Consequently, the marginal impacts of changes in the variables on the likelihood were calculated and are reported in the third column of table II.4. For example, the marginal impact of 0.0578 for black participants indicates that the likelihood of blacks having an outstanding loan is 5.8 percentage points higher than for whites. Given that about 7.6 percent of plan participants have outstanding loans, blacks are about 5.8/7.6 times 100—or 76 percent—more likely to have an outstanding pension-plan loan than whites.
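That conversion from a percentage-point marginal impact to a relative likelihood is a simple ratio against the baseline rate; the sketch below reproduces the arithmetic for the figures cited above.

```python
# Sketch of the relative-likelihood arithmetic described above: a marginal
# impact expressed in percentage points is converted to a percentage change
# relative to the baseline rate of outstanding loans.

def relative_likelihood(marginal_impact_pts: float, baseline_pct: float) -> float:
    """Percent change in likelihood implied by a percentage-point impact."""
    return marginal_impact_pts / baseline_pct * 100

# Marginal impact of 5.8 percentage points against a 7.6 percent baseline:
print(round(relative_likelihood(5.8, 7.6)))  # 76 -- i.e., 76 percent more likely
```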
Table II.4 reports the logit parameter estimates and standard errors for participant characteristics, including age group, education, coverage by another pension plan, having recently been turned down for a loan, family income, years covered by the defined-contribution plan, and family net worth. Our simulation results are presented in table II.5 and show the pension-account balance after 35 years for each scenario. The results show that as long as the interest rate of the loan is less than the rate of return of the pension-account balance (assumed to be 11 percent), borrowers will have a lower account balance at retirement. The actual reduction depends on the gap between the account rate of return and the loan interest rate, and on whether or not pension contributions continue during the loan repayment period. Furthermore, these results hold only if the loan is repaid.
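The appendix's simulation can be reproduced in a few lines. The following is a minimal sketch under its stated assumptions (35 years, a $25,000 starting salary, 6.8-percent contributions, an 11-percent return, and a $40,000 loan in year 15 repaid in equal installments over 10 years); the 4-percent combined earnings growth, the 8-percent loan rate, and the annual rather than monthly timing are illustrative assumptions of this sketch, not figures from the report.

```python
# Sketch of the retirement-income simulation described in the appendix:
# a $40,000 loan taken in year 15 and repaid to the account in equal annual
# installments over 10 years, under an 11-percent account return. The
# 4-percent earnings growth (age growth plus 3-percent inflation) and the
# 8-percent loan rate are simplifying assumptions for illustration.

def final_balance(loan: bool, contribute_during_repayment: bool,
                  loan_rate: float = 0.08, years: int = 35,
                  salary0: float = 25_000.0, growth: float = 0.04,
                  contrib_rate: float = 0.068, ret: float = 0.11,
                  loan_year: int = 15, term: int = 10,
                  principal: float = 40_000.0) -> float:
    # Equal annual installment on the loan (standard annuity formula).
    installment = principal * loan_rate / (1 - (1 + loan_rate) ** -term)
    balance = 0.0
    for year in range(1, years + 1):
        balance *= 1 + ret                      # investment return
        salary = salary0 * (1 + growth) ** (year - 1)
        repaying = loan and loan_year <= year < loan_year + term
        if loan and year == loan_year:
            balance -= principal                # loan leaves the account
        if repaying:
            balance += installment              # repayments flow back in
        if not repaying or contribute_during_repayment:
            balance += contrib_rate * salary    # regular contribution
    return balance

no_loan = final_balance(loan=False, contribute_during_repayment=True)
keep = final_balance(loan=True, contribute_during_repayment=True)
stop = final_balance(loan=True, contribute_during_repayment=False)
for label, bal in [("no loan", no_loan),
                   ("borrow, keep contributing", keep),
                   ("borrow, stop contributing", stop)]:
    print(f"{label}: {bal:,.0f} ({bal / no_loan - 1:+.1%} vs. no loan)")
```

Raising loan_rate toward the 11-percent account return shrinks the borrower's shortfall, consistent with the report's finding that the loss depends on the gap between the two rates and on whether contributions continue during repayment.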
Pursuant to a congressional request, GAO: (1) determined the effects of pension-plan borrowing on participation in and contributions to 401(k) pension plans; (2) described the demographic and economic characteristics of workers who borrow from their pension accounts; and (3) identified the potential consequences for participants who borrow from their pension accounts. GAO found that: (1) plans that allow borrowing have a somewhat higher proportion of employees participating than other plans, all other factors being equal; (2) in addition to employer matching, allowing borrowing increases participation among eligible employees, especially lower-income employees; (3) allowing pension-plan borrowing also significantly affects how much employees contribute; (4) participants in plans that allow borrowing contribute, on average, 35 percent more to their pension accounts than participants in plans that do not allow borrowing; (5) based on individual financial data GAO examined, relatively few plan participants--less than 8 percent--have one or more loans from their pension accounts; (6) this is true for a point in time and would not include those who had repaid a past loan or who might borrow in the future; (7) blacks and Hispanics, lower-income individuals, participants who have recently been turned down for a loan, and workers who also are covered by other pension plans are more likely to borrow from their pension accounts than other participants; (8) plan borrowers, on average, have fewer assets than nonborrowers and have more nonhousing debt relative to income than nonborrowers; (9) while borrowing provisions may reduce retirement income, they also can encourage workers to save for their retirement; (10) loan provisions of many pension plans provide for repaying the loan at favorable interest rates, which may be lower than the investment yield that could have been earned had the money been left in the pension account; (11) consequently, the borrower will have a smaller pension balance at retirement, since the interest paid to the account is less than what the account balance could have earned from investing in equities; however, other potential effects of borrowing could outweigh these disadvantages; (12) if loan provisions influenced the employee's decision to participate in the pension plan, the employee's retirement income would likely have been even less had there not been such provisions; (13) allowing participants to borrow from their defined-contribution pension plan, therefore, may be a double-edged sword; and (14) there are both advantages and disadvantages to borrowing from other voluntary retirement savings accounts, such as individual retirement accounts; however, few of the positive effects of pension-plan borrowing would be realized in mandatory retirement programs like Social Security.
Created in 1961, Peace Corps is mandated by statute to help meet developing countries’ needs for trained manpower while promoting mutual understanding between Americans and other peoples. Volunteers commit to 2-year assignments in host communities, where they work on projects such as teaching English, strengthening farmer cooperatives, or building sanitation systems. By developing relationships with members of the communities in which they live and work, volunteers contribute to greater intercultural understanding between Americans and host country nationals. Volunteers are expected to maintain a standard of living similar to that of their host community colleagues and co-workers. They are provided with stipends that are based on local living costs and housing similar to their hosts. Volunteers are not supplied with vehicles. Although the Peace Corps accepts older volunteers and has made a conscious effort to recruit minorities, the current volunteer population has a median age of 25 years and is 85 percent white. More than 60 percent of the volunteers are women. Volunteer health, safety, and security is Peace Corps’ highest priority, according to the agency. To address this commitment, the agency has adopted policies for monitoring and disseminating information on the security environments in which the agency operates, training volunteers, developing safe and secure volunteer housing and work sites, monitoring volunteers, and planning for emergencies such as evacuations. Headquarters is responsible for providing guidance, supervision, and oversight to ensure that agency policies are implemented effectively. Peace Corps relies heavily on country directors—the heads of agency posts in foreign capitals—to develop and implement practices that are appropriate for specific countries. Country directors, in turn, rely on program managers to develop and oversee volunteer programs. Volunteers are expected to follow agency policies and exercise some responsibility for their own safety and security. Peace Corps emphasizes community acceptance as the key to maintaining volunteer safety and security. The agency has found that volunteer safety is best ensured when volunteers are well integrated into their host communities and treated as extended family and contributors to development. Reported incidence rates of crime against volunteers have remained essentially unchanged since we completed our report in 2002. Reported incidence rates for most types of assaults have increased since Peace Corps began collecting data in 1990, but have stabilized in recent years. The reported incidence rate for major physical assaults has nearly doubled, averaging about 9 assaults per 1,000 volunteer years in 1991-1993 and averaging about 17 assaults in 1998-2000. Reported incidence rates for major assaults remained unchanged over the next 2 years. Reported incidence rates of major sexual assaults have decreased slightly, averaging about 10 per 1,000 female volunteer years in 1991-1993 and about 8 per 1,000 female volunteer years in 1998-2000. Reported incidence rates for major sexual assaults averaged about 9 per 1,000 female volunteer years in 2001-2002. Peace Corps’ system for gathering and analyzing data on crime against volunteers has produced useful insights, but we reported in 2002 that steps could be taken to enhance the system. 
Peace Corps officials agreed that reported increases are difficult to interpret; the data could reflect actual increases in assaults, better efforts to ensure that agency staff report all assaults, and/or an increased willingness among volunteers to report incidents. The full extent of crime against volunteers, however, is unknown because of significant underreporting. Through its volunteer satisfaction surveys, Peace Corps is aware that a significant number of volunteers do not report incidents, thus reducing the agency’s ability to state crime rates with certainty. For example, according to the agency’s 1998 survey, volunteers did not report 60 percent of rapes and 20 percent of nonrape sexual assaults. Reasons cited for not reporting include embarrassment, fear of repercussions, confidentiality concerns, and a belief that Peace Corps could not help. In 2002, we observed that opportunities for additional analyses existed that could help Peace Corps develop better-informed intervention and prevention strategies. For example, our analysis showed that about a third of reported assaults after 1993 occurred from the fourth to the eighth month of service—shortly after volunteers completed training, arrived at sites, and began their jobs. We observed that this finding could be explored further and used to develop additional training. Since we issued our report, Peace Corps has taken steps to strengthen its efforts for gathering and analyzing crime data. The agency has hired an analyst responsible for maintaining the agency’s crime data collection system, analyzing the information collected, and publishing the results for the purpose of influencing volunteer safety and security policies. Since joining the agency a year ago, the analyst has focused on redesigning the agency’s incident reporting form to provide better information on victims, assailants, and incidents and on preparing a new data management system that will ease access to and analysis of crime information. However, these new systems have not yet been put into operation. The analyst stated that the reporting protocol and data management system are to be introduced this summer, and that responsibility for crime data collection and analysis will be transferred from the medical office to the safety and security office. According to the analyst, she has not yet performed any new data analyses because her focus to date has been on upgrading the system. We reported that Peace Corps’ headquarters had developed a safety and security framework but that the field’s implementation of this framework was uneven. The agency has taken steps to improve the field’s compliance with the framework, but recent Inspector General reports indicate that this has not been uniformly achieved. We previously reported that volunteers were generally satisfied with the agency’s training programs. However, some volunteers had housing that did not meet the agency’s standards, there was great variation in the frequency of staff contact with volunteers, and posts had emergency action plans with shortcomings. To increase the field’s compliance with the framework, in 2002, the agency hired a compliance officer at headquarters, increased the number of field-based safety and security officer positions, and created a safety and security position at each post. However, recent Inspector General reports continued to find significant shortcomings at some posts, including difficulties in developing safe and secure sites and preparing adequate emergency action plans.
In 2002, we found that volunteers were generally satisfied with the safety training that the agency provided, but we identified a number of instances of uneven performance in developing safe and secure housing. Posts have considerable latitude in the design of their safety training programs, but all provide volunteers with 3 months of preservice training that includes information on safety and security. Posts also provide periodic in-service training sessions that cover technical issues. Many of the volunteers we interviewed said that the safety training they received before they began service was useful and cited testimonials by current volunteers as one of the more valuable instructional methods. In both the 1998 and 1999 volunteer satisfaction surveys, over 90 percent of volunteers rated safety and security training as adequate or better; only about 5 percent said that the training was not effective. Some regional safety and security officer reports have found that improvements were needed in post training practices. The Inspector General has reported that volunteers at some posts said cross-cultural training and presentations by the U.S. embassy’s security officer did not prepare them adequately for safety-related challenges they faced during service. Some volunteers stated that Peace Corps did not fully prepare them for the racial and sexual harassment they experienced during their service. Some female volunteers at posts we visited stated that they would like to receive self-protection training. Peace Corps’ policies call for posts to ensure that housing is inspected and meets post safety and security criteria before the volunteers arrive to take up residence. Nonetheless, at each of the five posts we visited, we found instances of volunteers who began their service in housing that had not been inspected and that had various shortcomings. For example, one volunteer spent her first 3 weeks at her site living in her counterpart’s office. She later found her own house; however, post staff had not inspected this house, even though she had lived in it for several months. Poorly defined work assignments and unsupportive counterparts may also increase volunteers’ risk by limiting their ability to build a support network in their host communities. At the posts we visited, we met volunteers whose counterparts had no plans for the volunteers when they arrived at their sites; only after several months and much frustration did the volunteers find productive activities. We found variations in the frequency of staff contact with volunteers, although many of the volunteers at the posts we visited said they were satisfied with the frequency of staff visits to their sites, and a 1998 volunteer satisfaction survey reported that about two-thirds of volunteers said the frequency of visits was adequate or better. However, volunteers had mixed views about Peace Corps’ responsiveness to safety and security concerns and criminal incidents. The few volunteers we spoke with who said they were victims of assault expressed satisfaction with staff response when they reported the incidents. However, at four of the five posts we visited, some volunteers described instances in which staff were unsupportive when the volunteers reported safety concerns. For example, one volunteer said she informed Peace Corps several times that she needed a new housing arrangement because her doorman repeatedly locked her in or out of her dormitory.
The volunteer said staff were unresponsive, and she had to find new housing without the Peace Corps’ assistance. In 2002, we reported that, while all posts had tested their emergency action plans, many of the plans had shortcomings, and tests of the plans varied in quality and comprehensiveness. Posts must be well prepared in case an evacuation becomes necessary. In fact, evacuating volunteers from posts is not an uncommon event; in the past 2 years, Peace Corps has conducted six country evacuations involving nearly 600 volunteers. We also reported that many plans did not include all expected elements, such as maps demarcating volunteer assembly points and alternate transportation plans. In fact, none of the plans contained all of the dimensions listed in the agency’s Emergency Action Plan checklist, and many lacked key information. In addition, we found that in 2002 Peace Corps had not defined the criteria for a successful test of a post’s plan. Peace Corps has initiated a number of efforts to improve the field’s implementation of its safety and security framework, but Inspector General reports continued to find significant shortcomings at some posts. However, there has been improvement in post communications with volunteers during emergency action plan tests. We reviewed 10 Inspector General reports conducted during 2002 and 2003. Some of these reports were generally positive—one congratulated a post for operating an “excellent” program and maintaining high volunteer morale. However, a variety of weaknesses were also identified. For example, the Inspector General found multiple safety and security weaknesses at one post, including incoherent project plans and a failure to regularly monitor volunteer housing. The Inspector General also reported that several posts employed inadequate site development procedures; some volunteers did not have meaningful work assignments, and their counterparts were not prepared for their arrival at their sites. In response to a recommendation from a prior Inspector General report, one post had prepared a plan to provide staff with rape response training and to identify a local lawyer to advise the post of legal procedures in case a volunteer was raped. However, the post had not implemented these plans and was unprepared when a rape actually occurred. Our review of recent Inspector General reports identified emergency action planning weaknesses at some posts. For example, the Inspector General found that at one post over half of first-year volunteers did not know the location of their emergency assembly points. However, we analyzed the results of the most recent tests of post emergency action plans and found improvement since our last report. About 40 percent of posts reported contacting almost all volunteers within 24 hours, compared with 33 percent in 2001. Also, our analysis showed improvement in the quality of information forwarded to headquarters. Less than 10 percent of the emergency action plans did not contain information on the time it took to contact volunteers, compared with 40 percent in 2001. In our 2002 report, we identified a number of factors that hampered Peace Corps’ efforts to ensure that this framework produced high-quality performance for the agency as a whole. These included high staff turnover, uneven application of supervision and oversight mechanisms, and unclear guidance. We also noted that Peace Corps had identified a number of initiatives that could, if effectively implemented, help to address these factors.
The agency has made some progress but has not completed implementation of these initiatives. High staff turnover has hindered high-quality performance at the agency. According to a June 2001 Peace Corps workforce analysis, turnover among U.S. direct hires was extremely high, ranging from 25 percent to 37 percent in recent years. This report found that the average tenure of these employees was 2 years, that the agency spent an inordinate amount of time selecting and orienting new employees, and that frequent turnover produced a situation in which agency staff are continually “reinventing the wheel.” Much of the problem was attributed to the 5-year employment rule, which statutorily restricts the tenure of U.S. direct hires, including regional directors, country desk officers, country directors and assistant country directors, and Inspector General and safety and security staff. Several Peace Corps officials stated that turnover affected the agency’s ability to maintain continuity in oversight of post operations. In 2002, we also found that informal supervisory mechanisms and a limited number of staff hampered Peace Corps’ efforts to ensure even application of supervision and oversight. The agency had some formal mechanisms for documenting and assessing post practices, including the annual evaluation and testing of post emergency action plans and regional safety and security officer reports on post practices. Nonetheless, regional directors and country directors relied primarily on informal supervisory mechanisms, such as staff meetings, conversations with volunteers, and e-mail, to ensure that staff were doing an adequate job of implementing the safety and security framework. One country director observed that it was difficult to oversee program managers’ site development or monitoring activities because the post did not have a formal system for performing these tasks. We also reported that Peace Corps’ capacity to monitor and provide feedback to posts on their safety and security performance was limited by the small number of staff available to perform relevant tasks. We noted that the agency had hired three field-based safety and security specialists to examine and help improve post practices, and that the Inspector General also played an important role in helping posts implement the agency’s safety and security framework. However, we reported that between October 2000 and May 2002 the safety and security specialists had been able to provide input to only about one-third of Peace Corps’ posts, while the Inspector General had issued findings on safety and security practices at only 12 posts over 2 years. In addition, we noted that Peace Corps had no system for tracking post compliance with Inspector General recommendations. We reported that the agency’s guidance was not always clear. The agency’s safety and security framework outlined requirements that posts were expected to comply with but often did not specify required activities, documentation, or criteria for judging actual practices—making it difficult for staff to understand what was expected of them. Many posts had not developed clear procedures for reporting and responding to incidents such as sexual harassment. The agency’s coordinator for volunteer safety and security stated that unclear procedures made it difficult for senior staff, including regional directors, to establish a basis for judging the quality of post practices.
The coordinator also observed that, at some posts, field-based safety and security officers had found that staff members did not understand what had to be done to ensure compliance with agency policies. The agency has taken steps to reduce staff turnover, improve supervision and oversight mechanisms, and clarify its guidance. In February 2003, Congress passed a law to allow U.S. direct hires whose assignments involve the safety of Peace Corps volunteers to serve for more than 5 years. The Peace Corps Director has employed his authority under this law to designate 23 positions as exempt from the 5-year rule. These positions include nine field-based safety and security officers and the three regional safety and security desk officers working at agency headquarters, as well as the crime data analyst and other staff in the headquarters office of safety and security. They do not include the associate director for safety and security, the compliance officer, or staff from the office of the Inspector General. Peace Corps officials stated that they are about to hire a consultant who will conduct a study to provide recommendations about adding positions to the current list. To strengthen supervision and oversight, Peace Corps has increased the number of staff tasked with safety and security responsibilities and created the office of safety and security, which centralizes all security-related activities under the direction of a newly created associate directorate for safety and security. The agency’s new crime data analyst is a part of this directorate. In addition, Peace Corps has appointed six additional field-based safety and security officers, bringing the number of such individuals on duty to nine (with three more positions to be added by the end of 2004); authorized each post to appoint a safety and security coordinator to provide a point of contact for the field-based safety and security officers and to assist country directors in ensuring their post’s compliance with agency policies, including policies pertaining to monitoring volunteers and responding to their safety and security concerns (all but one post have filled this position); appointed safety and security desk officers in each of Peace Corps’ three regional directorates in Washington, D.C., to monitor post compliance in conjunction with each region’s country desk officers; and appointed a compliance officer, reporting to the Peace Corps Director, to independently examine post practices and to follow up on Inspector General recommendations on safety and security. In response to our recommendation that the Peace Corps Director develop indicators to assess the effectiveness of the new initiatives and include these in the agency’s annual Government Performance and Results Act reports, Peace Corps has expanded its reports to include 10 quantifiable indicators of safety and security performance. To clarify agency guidance, Peace Corps has created a “compliance tool,” or checklist, that provides a fairly detailed and explicit framework for headquarters staff to employ in monitoring post efforts to put Peace Corps’ safety and security guidance into practice in their countries; strengthened guidance on volunteer site selection and development; developed standard operating procedures for post emergency action plans; and concluded a protocol clarifying that the Inspector General’s staff has responsibility for coordinating the agency’s response to crimes against volunteers.
These efforts have enhanced Peace Corps’ ability to improve safety and security practices in the field. The threefold expansion of the field-based safety and security officer staff has increased the agency’s capacity to support posts in developing and applying effective safety and security policies. Regional safety and security officers at headquarters and the agency’s compliance officer monitor the quality of post practices. All posts were required to certify that they were in compliance with agency expectations by the end of June 2003. Since that time, a quarterly reporting system has gone into effect wherein posts communicate with regional headquarters regarding the status of their safety and security systems and practices. The country desks and the regional safety and security officers, along with the compliance officer, have been reviewing the emergency action plans of the posts and providing them with feedback and suggestions for improvement. The compliance officer has created and is applying a matrix to track post performance in addressing issues deriving from a variety of sources, including application of the agency’s safety and security compliance tool and Inspector General reports. The compliance officer and staff from one regional office described their efforts, along with field-based safety and security staff and program experts from headquarters, to ensure an adequate response from one post where the Inspector General had found multiple safety and security weaknesses. However, efforts to put the new system in place are incomplete. As already noted, the agency has developed, but not yet introduced, an improved system for collecting and analyzing crime data. The new associate director of safety and security observes that the agency’s field-based safety and security officers come from diverse backgrounds and that some have been in their positions for only a few months. All have received training via the State Department’s Bureau of Diplomatic Security. However, they are still employing different approaches to their work. Peace Corps is preparing guidance for these officers that would provide them with a uniform approach to conducting their work and reporting the results of their analyses, but the guidance is still in draft form. The compliance officer has completed detailed guidance for crafting emergency action plans, but this guidance was distributed to the field only at the beginning of this month. Moreover, following up on our 2002 recommendation, the agency’s Deputy Director is heading an initiative to revise and strengthen the indicators that the agency uses to judge the quality of all aspects of its operations, including ensuring volunteer safety and security, under the Government Performance and Results Act. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information regarding this statement, please contact Phyllis Anderson, Assistant Director, International Affairs and Trade, at (202) 512-7364 or andersonp@gao.gov. Individuals making key contributions to this statement were Michael McAtee, Suzanne Dove, Christina Werth, Richard Riskie, Bruce Kutnick, Lynn Cothern, and Martin de Alteriis.
About 7,500 Peace Corps volunteers currently serve in 70 countries. The administration intends to increase this number to about 14,000. Volunteers often live in areas with limited access to reliable communications, police, or medical services. As Americans, they may be viewed as relatively wealthy and, hence, good targets for crime. In this testimony, GAO summarizes findings from its 2002 report Peace Corps: Initiatives for Addressing Safety and Security Challenges Hold Promise, but Progress Should Be Assessed, GAO-02-818, on (1) trends in crime against volunteers and Peace Corps' system for generating information, (2) the agency's field implementation of its safety and security framework, and (3) the underlying factors contributing to the quality of these practices. The full extent of crime against Peace Corps volunteers is unclear due to significant underreporting. However, Peace Corps' reported rates for most types of assaults have increased since the agency began collecting data in 1990. The agency's data analysis has produced useful insights, but additional analyses could help improve anti-crime strategies. Peace Corps has hired an analyst to enhance data collection and analysis to help the agency develop better-informed intervention and prevention strategies. In 2002, we reported that Peace Corps had developed safety and security policies but that efforts to implement these policies in the field had produced varying results. Some posts complied, but others fell short. Volunteers were generally satisfied with training. However, some housing did not meet standards and, while all posts had prepared and tested emergency action plans, many plans had shortcomings. Evidence suggests that agency initiatives have not yet eliminated this unevenness. The Inspector General continues to find shortcomings at some posts. However, recent emergency action plan tests show an improved ability to contact volunteers in a timely manner. In 2002, we found that uneven supervision and oversight, staff turnover, and unclear guidance hindered efforts to ensure quality practices. The agency has taken action to address these problems. To strengthen supervision and oversight, it established an office of safety and security, supported by three senior staff at headquarters, nine field-based safety and security officers, and a compliance officer. In response to our recommendations, Peace Corps was granted authority to exempt 23 safety and security positions from the "5-year rule"--a statutory restriction on tenure. It also adopted a framework for monitoring post compliance and quantifiable performance indicators. However, the agency is still clarifying guidance, revising indicators, and establishing a performance baseline.
Historically, new weapon systems have been developed by the military services to counter specific threats. Under DOD’s Requirements Generation System, the precursor to JCIDS, requirements frequently grew out of the military services’ unique strategic visions and often lacked clear linkages to the national military strategy and the needs of the joint force commanders, who are responsible for carrying out military operations. This service-centric, stovepiped approach often created weapon systems that lacked interoperability, were duplicative, or did not fill critical gaps. In a 2002 memo, the Secretary of Defense expressed dissatisfaction with the requirements system and commented that the system “continues to require things that ought not to be required, and does not require things that need to be required.” As part of its 2001 Quadrennial Defense Review, DOD determined that the department needed to shift from threat-based defense planning to a capabilities-based model that focuses more on how an adversary might fight than on who the adversary might be or where a war might be fought. JCIDS was established to provide the department with an integrated, collaborative process to identify and guide development of a broad set of new capabilities that address the current and emerging security environment. Through JCIDS, capabilities are to be developed from national military strategy and should relate to joint concepts that describe how the strategy will be implemented. JCIDS is also intended to ensure a strong voice for warfighters and identify needs from a joint perspective to ensure that current and future warfighters are provided the capabilities they need to accomplish assigned missions. Furthermore, JCIDS emphasizes that needs be derived in terms of capabilities instead of specific system solutions. The JCIDS process is overseen by the Joint Requirements Oversight Council (JROC) and supports the Chairman of the Joint Chiefs of Staff, who is responsible for advising the Secretary of Defense on the priorities of military requirements in supporting the national military strategy. Within JCIDS, functional capabilities boards (FCB)—each headed by a general or an admiral and made up of military and civilian representatives from the military services, the Joint Staff, the combatant commands (COCOM), and the Office of the Secretary of Defense—manage different capability area portfolios. The FCBs are intended to support the JROC by evaluating capability needs, recommending enhancements to capabilities integration, examining joint priorities, assessing program alternatives, and minimizing duplication of effort across the department. The JCIDS process requires that gaps in military capabilities be identified and that potential materiel and nonmateriel solutions for filling those gaps be developed based on formal capability assessments. The results of these capability assessments are formally submitted as initial capabilities documents (ICD)—capability proposals—by a military service, defense agency, COCOM, FCB, or other sponsor. ICDs are intended to document a specific capability gap or set of gaps that exist in joint warfighting functions and propose a prioritized list of various solutions to address the gap(s). When a capability proposal is submitted, a Joint Staff “gatekeeper” conducts an initial review to determine what level of joint interest and review there should be and which FCB should take the lead.
Capability proposals deemed to have a significant impact on joint warfighting, such as those involving potential major defense acquisition programs, are designated as “JROC interest” and must be validated, or approved, by the JROC. A JROC-validated ICD provides the basis for starting a major weapon system acquisition. Specifically, it should lead to an analysis of alternatives, a concept refinement phase, and a decision on a preferred system concept. Before a weapon system program is approved to begin system development, the sponsor is required to submit a capability development document (CDD)—which defines a specific solution as identified in the analysis of alternatives—through JCIDS for approval by the JROC. The CDD defines the system’s key performance parameters or attributes against which the delivered increment of capability will be measured. Finally, the sponsor prepares a capability production document (CPD) to address the production elements of an acquisition program prior to the program starting production. Figure 1 shows how the documentation relates to the major milestones for a weapon system program in the Defense Acquisition System. While JCIDS is intended to determine needs from a joint, departmentwide perspective, capability needs continue to be proposed and defined primarily by the military services, with little involvement from the joint community—including the COCOMs, which plan and implement military operations. This can lead to stovepiped and duplicative solutions that do not necessarily support a joint force on the battlefield. In addition, virtually all of the proposals for new capability needs and weapon system solutions completing the JCIDS process since 2003 have been validated. The JCIDS process has also proven to be lengthy, taking as long as 10 months, on average, to validate a need. Such a protracted process further undermines the department’s efforts to respond effectively to the needs of the warfighter, especially those that are near term. Our review of the documentation associated with 90 “JROC interest” ICDs submitted to JCIDS since 2003 showed that 60 proposals, or 67 percent, were sponsored by a military service, and 23, or 26 percent, were sponsored by a COCOM, an FCB, or the Joint Staff. (See fig. 2.) JCIDS is intended to encourage collaboration among the services, COCOMs, and other DOD organizations to identify joint solutions to capability gaps, and in some cases this has occurred. For example, the Navy submitted a capability proposal through JCIDS to get a precision approach and landing system in place to avoid delays in delivering its aircraft carriers in development. The lead FCB reviewed the Navy’s proposal, recognized that it was similar to a need identified by the Air Force, and determined that the Air Force’s needs could be met under the same proposal. However, according to JCIDS officials, FCB, COCOM, and other stakeholder reviews have had little influence in promoting joint solutions. Past studies have also raised concerns that the services and the COCOMs do not routinely collaborate to identify possible joint solutions. For example, in 2006 the Army Audit Agency recommended that the Army improve collaboration with the joint community early in the capabilities planning process to improve the quality of its capabilities documents and facilitate more timely reviews of proposals that are submitted into the JCIDS process.
In January 2006, the Defense Acquisition Performance Assessment Panel concluded that JCIDS resulted in capabilities that did not meet warfighter needs in a timely manner and recommended that JCIDS be replaced with a COCOM-led requirements process in which the services and defense agencies compete to provide solutions. The Defense Science Board similarly reported that JCIDS has not provided for increased warfighter influence, but instead actually suppresses joint needs in favor of military service interests, and recommended an increase in the formal participation role of the COCOMs in the JCIDS process. The Center for Strategic and International Studies has also pointed out that while the services are responsible for supplying operationally capable armed forces, the COCOMs are responsible for responding to threats and executing military operations. Therefore, it recommended that the Joint Forces Command take the lead in conducting capabilities development planning for the COCOMs and become a formal member of the JROC. By continuing to rely on stovepiped solutions to address capability needs, DOD may be losing opportunities to improve joint warfighting capabilities and reduce the duplication of capabilities in some areas. In January 2006, we reported that military operations continue to be hampered by the inability of communication and weapon systems to operate effectively together on the battlefield. In May 2007, we reported that while the military services have successfully planned and fielded a number of unmanned aerial vehicle systems over the past several years, DOD has struggled to coordinate the development of these systems across the services and ensure that they complement one another and avoid duplicating capabilities. Specifically, despite similarities in proposed capabilities between two key unmanned aerial vehicle systems—the Air Force’s Predator program and the Army’s Warrior program—the Army awarded a separate development contract to the same contractor producing the Predator. By taking separate tracks to developing these two systems, the Air Force and the Army missed an opportunity to identify potential similarities in their requirements and thereby avoid redundant or non-interoperable systems. Although the Army and Air Force agreed to consider cooperating on the acquisition of the two systems, the services are struggling to agree on requirements. JCIDS is intended to support senior decision makers in identifying and prioritizing warfighting capability needs. As such, it is meant to be an important tool in maintaining a balanced portfolio of acquisition programs that can be executed within available resources. However, the vast majority of proposals completing the JCIDS process are approved—or validated. Adding to a portfolio that already contains more programs than resources can support is likely to perpetuate instability and poor outcomes in weapon system programs. Of the 203 JROC-interest capability proposals (ICDs and CDDs) we reviewed, 140 completed the JCIDS process and were validated. Of the remaining proposals, 57 are still under review, and 6 are considered inactive (see fig. 3). According to a Joint Staff representative, some proposals are returned to sponsors for modifications because the supporting documentation lacked sufficient analysis to justify the capability gap and solutions being presented, or because reviewers raised other technical concerns that needed to be resolved. Returned proposals are usually modified and resubmitted to the JCIDS process. 
The 6 proposals that are considered inactive were not resubmitted by the sponsors. According to JCIDS officials, proposals are not prioritized across capability and mission areas. Instead, the extent to which any prioritization has occurred within JCIDS has been limited to the key performance parameters or requirements within individual capability proposals. For example, the Special Operations Command wanted to add capabilities to a Navy-sponsored JCIDS proposal—described in a CDD—for a high-speed intratheater surface lift capability to transport military units and supplies into shallow and remote areas. However, addressing a key capability requested by the Special Operations Command—to land a V-22 aircraft on the surface ship—would have necessitated a major redesign for the proposed Navy ship and delayed providing capabilities to the warfighter by several years. While the JROC agreed that the Special Operations Command’s requirement was valid, it decided to approve the Navy capability proposal without the Special Operations Command requirement and requested that a study be undertaken to identify how this requirement could be addressed in the future. The lack of early prioritization of capability needs through JCIDS makes it difficult for DOD to balance its portfolio of weapons programs. Validated proposals tend to gain momentum and win approval to become formal weapon system programs—in part because other reviews are not conducted prior to the start of system development and demonstration, or Milestone B. In prior work, we found that 80 percent of the programs we reviewed entered the acquisition system at Milestone B without a Milestone A or other prior major review. By this time, the military services have already established a budget and formed a constituency for their individual capability needs. Successful commercial companies we have reviewed value and use a disciplined approach to prioritize needs early and often—one that views potential product development programs as related parts of a companywide portfolio. These companies make tough decisions to defer or say no to proposed products and achieve a balanced portfolio—one that matches requirements with resources and weighs near- and long-term needs. Since JCIDS was implemented, the number of major defense acquisition programs in DOD’s portfolio has increased from 77 to 93, or by 21 percent. This increase is likely to exacerbate an already sizable disparity between what programs are expected to cost and available funding. The estimated acquisition costs remaining for major weapon system programs increased 130 percent from fiscal year 2000 through fiscal year 2007, while the annual funding for these programs increased by a more modest 67 percent (see fig. 4). During the same time frame, the remaining costs for the major weapon systems in DOD’s portfolio went from being about four times greater to almost six times greater than annual funding. Shortfalls as significant as this are likely to be fiscally unsustainable. As we recently reported, to compensate for funding shortfalls, DOD has made unplanned and inefficient program adjustments—including shifting funding between programs, deferring work and associated costs into the future, or cutting procurement quantities. Such reactive practices contribute to the instability of many programs and undesirable acquisition outcomes.
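The growth figures reported above are internally consistent with the cited ratios, as a quick check shows: if remaining costs grew 130 percent while annual funding grew 67 percent, the cost-to-funding ratio is multiplied by 2.30/1.67, taking a ratio of about 4 to roughly 5.5, in line with "almost six times greater." A minimal sketch using only the percentages cited above:

    # Consistency check using only the growth rates cited in this report.
    cost_growth = 2.30     # remaining acquisition costs grew 130 percent, FY2000-FY2007
    funding_growth = 1.67  # annual funding grew 67 percent over the same period
    start_ratio = 4.0      # remaining costs were about 4x annual funding in FY2000

    end_ratio = start_ratio * cost_growth / funding_growth
    print(f"Remaining costs vs. annual funding by FY2007: about {end_ratio:.1f}x")  # about 5.5x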
The JCIDS process may lack the efficiency and agility needed to respond to warfighter needs—especially those that are near term—because the review and validation of capability proposals can take a significant amount of time. A proposal submitted to JCIDS can go through several review and comment resolution phases before consensus is reached on the proposal, and through several levels of approval before the JROC validates the proposal. Our review of capability proposals submitted to JCIDS from fiscal years 2003 through 2008 found that review and validation takes on average 8 to 10 months (see fig. 5). JCIDS and service officials also indicated that prior to submitting a JCIDS proposal, the sponsor can take a year or more to complete a capabilities-based assessment and get a proposal approved. In other words, 2 years or more can elapse from the time a capability need is identified by a sponsor to the time the capability is validated by the JROC. Given the size, complexity, and level of funding that will be committed to many of these capability needs, the length of the process may be warranted. However, concerns have been raised by officials within the department about how responsive JCIDS can be—concerns that may prompt some sponsors to bypass the process. According to some department officials, too much time is spent reviewing individual capability proposals with little evidence of increased attention to prioritization or jointness. Senior COCOM officials we spoke with also stated that the JCIDS process is not conducive to addressing near-term requirements—the primary focus of the COCOMs—and that the lengthy nature of the JCIDS process makes it difficult to adjust to emerging needs. In one case, the Army used extraordinary measures, going outside DOD’s normal requirements, acquisition, and budgeting processes to acquire and field the Joint Network Node-Network (JNN-N)—a $2 billion, commercial-based system designed to improve satellite communication capabilities for deployed military units in Afghanistan and Iraq. While JNN-N provided enhanced capability for the warfighter, the work-around allowed the Army to bypass the management and oversight typically required of DOD programs of this magnitude. In 2005, DOD established the Joint Urgent Operational Need (JUON) process to respond to urgent needs associated with combat operations in Afghanistan and Iraq and the war on terror. The JUON process is intended to prevent mission failure or loss of life and is generally considered to be more efficient than JCIDS for meeting urgent needs. However, short-term needs that do not qualify as urgent operational needs—such as JNN-N—must still go through JCIDS. DOD lacks the necessary framework for more effective implementation of JCIDS. The department has not yet developed a structured, analytical approach to prioritize capability proposals submitted to the JCIDS process. Additionally, the FCBs, which were established to manage the JCIDS process, do not have the capacity to effectively take the lead in prioritizing capability needs. Without an analytic approach and an entity in charge of determining which capabilities are needed, all proposals tend to be treated as priorities within the JCIDS process. The Joint Staff has recently taken steps to improve the prioritization of capability needs across DOD. DOD’s failure to prioritize capability needs through the JCIDS process is due in part to the lack of an analytic framework to determine and manage capability needs from a departmentwide perspective.
To date, JCIDS largely responds to capability proposals that are submitted by component sponsors on a case-by-case basis. Lacking a more proactive approach, JCIDS has been ineffective at integrating and balancing needs from the military services, COCOMs, and other defense components. DOD has several different approaches for identifying capability needs, but these approaches do not appear to be well integrated with JCIDS. For example, each COCOM annually submits to the Chairman of the Joint Chiefs of Staff an integrated priority list, which defines the COCOM’s highest-priority capability gaps for the near term, including shortfalls that may adversely affect COCOM missions. However, it is unclear to what extent integrated priority lists or other approaches, such as JUONs and lessons learned from recent and ongoing military operations, inform the JCIDS process. According to officials from several COCOMs, needs identified through integrated priority lists are not typically developed into JCIDS capability proposals. These officials indicated that to be successful in getting a need addressed, they have to build a coalition with one or more services that may have similar needs. At the same time, the military services continue to drive the determination of capability needs, in part because they retain most of DOD’s analytical capacity and resources for requirements development. According to Air Force and Army officials, they have several hundred staff involved in capabilities planning and development. In contrast, the FCBs are relatively small, with the majority having 12 or fewer staff members. FCB officials noted that the assessments that must be conducted to support a capability proposal can cost several million dollars and require several staff years of effort. Consequently, the FCBs sponsored only five capability development proposals over the past 5 years and generally devote most of their time and effort to reviewing documents submitted by sponsors and providing recommendations on them to the JROC. In March 2008, we reported that the FCB responsible for intelligence, surveillance, and reconnaissance capabilities lacked sufficient resources to engage in early coordination with sponsors and review the sponsors’ capability assessments. Representatives from several of the FCBs also indicated that they lack the expertise to effectively weigh in on the technical feasibility and costs of sponsors’ capability proposals and identify trade-offs that may be needed to modify proposals. A study performed under contract for the Joint Staff in July 2007 also found that some FCBs were underresourced for performing their duties. COCOMs, particularly the regional commands, also lack the analytic capacity and resources to become more fully engaged in JCIDS—either by developing their own capability assessments or by participating in reviews and commenting on proposals submitted to JCIDS. Some COCOM officials pointed out that because of their limited resources, they must pick and choose which capability proposals to get involved in. Several studies have recommended that DOD increase joint analytic resources to foster a less stovepiped understanding of warfighting needs. In 2006, the JROC developed a “most pressing military issues” list in an effort to identify the most important high-level issues facing the department and thereby provide better guidance to sponsors and FCBs on what capability assessments to focus on.
In addition, the JROC directed the FCBs to develop and implement an approach to synthesize the COCOMs’ annual integrated priority lists and bring greater focus to prioritizing joint capability needs. This resulted, in 2007, in a consolidated list of capability needs. The JROC has also increased its involvement with the COCOMs through regular trips and meetings to discuss capability needs and resourcing issues. According to Joint Staff officials, these efforts have helped the JROC gain an increased understanding of the COCOMs’ needs as well as provided the COCOMs with a forum for communicating their needs. Officials from several COCOMs noted that many of the near-term needs reflected in their integrated priority lists are now being addressed more effectively through annual budget adjustments and force structure changes. At the direction of the Deputy Secretary of Defense, the Joint Staff has also recently begun a project to provide a more systematic approach to prioritizing capability areas and gaps that need to be addressed across the department. This effort is intended to identify the near-, mid-, and long-term needs of the military services and other defense components and synthesize them with the needs of the COCOMs. The project’s first step, which is expected to be completed by the Joint Staff by the end of 2008, focuses on establishing what capabilities are most important to carrying out military operations either now or in the future. Capability areas will then be assessed to identify and prioritize where deficiencies or gaps in capabilities exist, and where additional capabilities may or may not be needed. The framework being used in the project is similar to one that the Institute for Defense Analyses developed with the U.S. Pacific Command a few years ago to strengthen the analytical basis for the integrated priority lists. The framework used by U.S. Pacific Command links capability needs to elements of the operational plans that the command is responsible for executing. Capability needs are determined by consolidating the views of operational planners, capability developers, and other subject matter experts from within the command. If the project achieves expected results, the FCBs—and ultimately, the JROC—would be able to screen new capability proposals during the JCIDS review process with knowledge of the capacity and sufficiency of existing requirements. According to Joint Staff officials, however, there are key challenges to implementing the project and coming up with a credible prioritization of capability needs. A major challenge will be to determine how best to integrate service and COCOM capability perspectives that are typically based on different roles, missions, and time frames. The military services tend to address capabilities in terms of defense planning scenarios that identify the mid- and long-term challenges the department must be prepared to handle. This has led to the development of capability proposals that advocate the need for the “next generation” of weapon system capability. In contrast, the COCOMs tend to address capabilities in terms of being able to execute operational plans they have developed for assigned missions in their geographic areas of responsibility. As such, the COCOMs’ focus has been on current and near-term needs.
The Center for Strategic and International Studies and others have advocated that mid- and long-term capability planning capacity is needed for the COCOMs and that the functional COCOMs should perhaps play a stronger role in representing the regional COCOMs. Another challenge will be developing appropriate criteria and measures for identifying capability gaps and determining the relative importance of these needed capabilities. Such criteria and measures have generally been lacking in the JCIDS process. Adjustments have also been made to try to streamline the JCIDS process and reduce the time it typically takes to validate capability proposals. Under one recent change to the process, a sponsor does not have to submit a CPD if the program is on track and nothing has changed since the CDD was validated. In addition, the Joint Staff has been tracking the amount of time it takes to get through the various review and comment phases of JCIDS and has implemented measures to speed up the adjudication of reviewers’ comments on capability proposals. As a result, there has been some improvement in reducing the time it takes to validate capability proposals. For example, we found that capability proposals (ICDs and CDDs) took about 9.5 months to be validated during 2003 to 2005, compared with about 8 months during 2006 to 2008. The Joint Staff has also recognized that the definitions used to determine which capability proposals must be brought to the JROC for approval are too broad and that some proposals could be delegated to other authorities for validation. The definitions are being modified in part to focus JROC oversight on proposals that truly warrant JROC involvement. Furthermore, the JROC is considering delegating authority for some JROC-interest capability proposals to lower levels, such as the Joint Capabilities Board and the FCBs. By establishing JCIDS, DOD has, to some extent, recognized the need to better ensure that joint warfighting needs can be addressed within fiscal resource constraints. However, the process has not proven to be an effective approach to increasing the level of joint participation or prioritizing the capability needs of the services, COCOMs, and other DOD components. While DOD has begun initiatives to improve JCIDS, the department continues to lack an analytic approach and an appropriate alignment of resources to balance competing capability needs. Consequently, DOD continues to start more weapons programs than current and likely future financial resources can support and to miss opportunities to improve joint warfighting capabilities. Until JCIDS evolves from a service-centric process to a process that balances service and joint near-, mid-, and long-term capability needs, DOD will continue to contend with managing a portfolio that does not match available resources and will risk failing to provide joint capabilities needed by the warfighter. We recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to develop an analytic approach within JCIDS to better prioritize and balance the capability needs of the military services, COCOMs, and other defense components. The Joint Staff should consider whether current efforts—particularly the capabilities prioritization project—should be adopted as a framework for this approach. The approach should also establish appropriate criteria and measures for identifying capability gaps and determining the relative importance of near-, mid-, and long-term capability needs.
Ultimately, the approach should provide a means to review and validate proposals more efficiently and ensure that the most important capability needs of the department are being addressed. We also recommend that the Secretary of Defense determine and allocate appropriate resources for joint capabilities development planning. In so doing, the Secretary should consider whether the responsibility and capacity of the COCOMs and FCBs to conduct joint capabilities development planning should be increased, whether one or more of the functional COCOMs should be given the responsibility and capacity to conduct joint capabilities development planning, and whether resources currently residing within the military services for capabilities development planning should be shifted to the COCOMs and FCBs. In written comments on a draft of this report, DOD partially concurred with our first recommendation and concurred with the second recommendation. DOD’s partial concurrence with our first recommendation—that an analytic approach be developed within JCIDS to better prioritize and balance the capability needs of the military services, COCOMs, and defense components—is based on the premise that prioritization occurs through several existing processes in the department and that JCIDS is not intended to be the primary means of prioritizing. DOD’s concurrence with our second recommendation—to determine and allocate appropriate resources for joint capabilities development planning—is based on its position that resources are adequate and have been allocated appropriately. The department’s response to both of our recommendations leads us to conclude that it does not see a need to improve its ability to prioritize and balance joint capability needs. In commenting on our first recommendation, DOD pointed out that identifying, prioritizing, and balancing joint capability needs occurs through multiple processes both within and outside of JCIDS, such as COCOM integrated priority lists and JUONs, as well as through the department’s budgeting and acquisition systems. We acknowledge that these DOD processes play a role in delivering capabilities to the warfighter; however, as we note in our report, these processes do not appear to be well integrated with JCIDS. Regardless, DOD established JCIDS as the principal process to support senior decision makers in identifying, assessing, and prioritizing joint warfighting needs. The process was intended to move the department away from a service-centric, stovepiped approach to a joint approach that helps ensure that COCOMs are provided the capabilities needed to carry out military operations. However, many of the COCOMs do not believe that their needs are sufficiently addressed through JCIDS, and there is no evidence that the process has achieved its intended goals. In fact, capability proposals submitted through JCIDS are not prioritized and largely continue to reflect insular interests. Unless an analytic approach to prioritizing and balancing the capability needs of the services, COCOMs, and other defense components is established, DOD will continue to lose opportunities to strengthen joint warfighting capabilities and to constrain its portfolio of weapon system programs. Given that JCIDS was established for this purpose, it seems logical to build such an approach within JCIDS. In concurring with our second recommendation, DOD asserts that the resources currently allocated for joint capabilities development planning are appropriate.
However, while the FCBs may be sufficiently resourced to review capability proposals submitted by sponsors into JCIDS, they lack the resources and capacity to play a leading role in defining and prioritizing joint capability needs for their functional capability areas. In addition, while the JCIDS process provides opportunities for their participation, the COCOMs lack the resources and analytic capacity to conduct their own capability assessments or review proposals submitted by other sponsors. Several other recent studies similarly indicated that the COCOMs are underrepresented in the department’s efforts to determine joint capabilities. We continue to believe that a better alignment of resources for conducting joint capabilities planning—among the services, FCBs, and COCOMs—would help the department to more effectively prioritize and balance competing capability needs. DOD also provided information about recent initiatives that are being implemented to improve the JCIDS, budgeting, and acquisition processes, and to strengthen the involvement of the joint community in determining capability needs. For example, since completing our draft report, the JROC moved to give the COCOMs a greater voice in the JCIDS process by delegating responsibility for validating requirements in the command and control functional area to the Joint Forces Command. While this initiative and others appear promising, as DOD notes, it is too early to determine whether the full benefits of these initiatives will be realized. In addition, DOD commented that our report did not sufficiently recognize the extent of joint participation that occurs through the JCIDS process. DOD stated that many of the services’ proposals are in direct response to capability gaps identified by the COCOMs and that the JCIDS process is structured to provide the joint community multiple opportunities and time to review proposals and ensure that they correctly state the needs of the joint warfighter. While we agree that some proposals submitted to JCIDS do address joint needs, the services still largely drive the vast majority of capability needs that are pursued in the department. Furthermore, once proposals are submitted to JCIDS, there is little evidence of increased attention to prioritization or jointness that results from the review of these proposals. DOD’s letter, with its written comments and description of new initiatives, is reprinted in appendix IV. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were John Oppenheim, Assistant Director; John Krump; Sean Seales; Karen Sloan; and Don Springman. To determine whether the Joint Capabilities Integration and Development System process has achieved its objective to prioritize joint warfighting needs, we analyzed information and capability documents contained in the Joint Staff’s Knowledge Management/Decision Support tool compiled since the inception of JCIDS. 
First, we determined how many capability documents—initial capabilities documents (ICD) and capability development documents (CDD)—were designated “JROC-interest,” a designation that covers all Acquisition Category (ACAT) I programs and other programs whose capabilities have a significant impact on joint warfighting. We identified a total of 203 capability documents—90 ICDs and 113 CDDs. We then determined whether the capability documents were sponsored by the joint community, the military services, or other Department of Defense (DOD) agencies. In addition, we determined which documents had completed the JCIDS process and been validated, which had completed the process and are inactive, and which are still under review. We also determined the amount of time required for capability documents to complete the JCIDS process and the amount of time other documents have remained in the process. We also reviewed Joint Requirements Oversight Council (JROC) memorandums validating requirements documents to determine if requirements were assigned a priority upon validation. Further, we reviewed budgeted and projected program costs for major defense acquisitions reported in DOD’s Selected Acquisition Report summary tables for the years 2000 to 2007, covering periods before and after the inception of JCIDS.

To identify factors affecting DOD’s ability to effectively implement JCIDS, we analyzed the existing structure of the JCIDS process and evaluated the sufficiency of the joint military community’s workforce for preparing and reviewing JCIDS requirements documents. We provided written questionnaires to functional capability boards (FCB) to determine staffing and resource levels. We also evaluated recent DOD initiatives designed to improve the JCIDS process.

In researching both of our primary objectives, we interviewed officials from the Joint Staff; DOD’s FCBs; U.S. Special Operations Command; U.S. Joint Forces Command; U.S. Pacific Command; U.S. Central Command; the Department of the Air Force; the Department of the Navy; and the Department of the Army. We reviewed statements made by DOD officials in prior congressional testimony. We reviewed prior GAO and other audit reports as well as DOD-sponsored studies related to JCIDS that were conducted by the Center for Strategic and International Studies, the Institute for Defense Analyses, the Defense Acquisition Performance Assessment Project, the Defense Science Board, and Booz Allen Hamilton. We reviewed guidance and regulations issued by the Joint Staff, the military services, and DOD, as well as other DOD-produced documentation related to JCIDS.

We conducted this performance audit from May 2007 to August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Nine FCBs have been established by the JROC to evaluate issues affecting their respective functional areas and to provide subject matter expertise to the JROC. The assigned functional areas and sponsoring organizations of the FCBs are shown in table 1. FCBs assist the JROC in overseeing capabilities development within JCIDS, including assessment of ICDs, CDDs, and capability production documents (CPD). FCBs can only make recommendations; they are not empowered to approve or disapprove proposals.
There are currently 10 unified combatant commands (COCOM) serving as DOD’s operational commanders—six with geographic responsibilities and four with functional responsibilities. The six COCOMs with geographic responsibilities are U.S. Africa Command, U.S. Central Command, U.S. European Command, U.S. Northern Command, U.S. Pacific Command, and U.S. Southern Command. Their geographic areas of responsibility are shown in figure 6. The four functional COCOMs are U.S. Joint Forces Command, which engages in joint training and force provision; U.S. Special Operations Command, which trains, equips, and deploys special operations forces to other COCOMs and leads counterterrorist missions worldwide; U.S. Strategic Command, whose missions include space and information operations, missile defense, global command and control, intelligence, surveillance, and reconnaissance, strategic deterrence, and the integration and synchronization of DOD’s departmentwide efforts in combating weapons of mass destruction; and U.S. Transportation Command, which provides air, land, and sea transportation for DOD.
Increasing combat demands and fiscal constraints make it critical for the Department of Defense (DOD) to ensure that its weapon system investments not only meet the needs of the warfighter but also make the most efficient use of available resources. GAO's past work has shown that achieving this balance has been a challenge, and weapon programs have often experienced cost growth and delayed delivery to the warfighter. In 2003, DOD implemented the Joint Capabilities Integration and Development System (JCIDS) to prioritize warfighting needs and ensure that the warfighter's most essential needs are met. In response to Senate Report 109-69, GAO reported in March 2007 that DOD lacks an effective approach to balance its weapon system investments with available resources. This follow-on report focuses on (1) whether the JCIDS process has achieved its objective to prioritize joint warfighting needs and (2) factors that have affected DOD's ability to effectively implement JCIDS. To conduct its work, GAO reviewed JCIDS guidance and capability documents and budgetary and programming data on major weapon systems, and interviewed DOD officials.

The JCIDS process has not yet been effective in identifying and prioritizing warfighting needs from a joint, departmentwide perspective. GAO reviewed JCIDS documentation related to proposals for new capabilities and found that most (almost 70 percent) were sponsored by the military services, with little involvement from the joint community, including the combatant commands (COCOMs), which are largely responsible for planning and carrying out military operations. By continuing to rely on capability proposals that lack a joint perspective, DOD may be losing opportunities to improve joint warfighting capabilities and reduce the duplication of capabilities in some areas. In addition, virtually all capability proposals that have gone through the JCIDS process since 2003 have been validated, or approved. DOD continues to have a portfolio with more programs than available resources can support. For example, the remaining costs for major weapon system programs in DOD's portfolio went from being about four times greater to almost six times greater than annual funding available during fiscal years 2000 through 2007. The JCIDS process has also proven to be lengthy, taking up to 10 months on average to validate a need, which further undermines efforts to respond effectively to the needs of the warfighter, especially those that are near-term.

DOD lacks an analytical approach to prioritize joint capability needs and determine the relative importance of capability proposals submitted to the JCIDS process. Further, the functional capabilities boards, which were established to manage the JCIDS process and facilitate the prioritization of needs, have not been staffed or resourced to effectively carry out these duties. Instead, the military services retain most of DOD's analytical capacity and resources for requirements development. The Joint Staff recently initiated a project to capture the near-, mid-, and long-term needs of the services and other defense components, and to synthesize them with the needs of the COCOMs. However, DOD officials told us that determining how best to integrate COCOM and service capability perspectives will be challenging because of differences in roles, missions, and time frames. Efforts have also begun to streamline the process and reduce the time it takes to validate proposals.
The government has been providing housing assistance in rural areas since the 1930s. At that time, most rural residents worked on farms, and rural areas were generally poorer than urban areas. For example, in the 1930s very few rural homes had electricity or indoor plumbing. Accordingly, the Congress authorized housing assistance specifically for rural areas and made USDA responsible for administering it. However, rural demographic and economic characteristics have greatly changed over time. By the 1970s, virtually all rural homes had electricity and indoor plumbing. Today, less than 2 percent of the nation’s population lives on farms, and advances in transportation, technology, and communications have put—or have the potential to put—rural residents in touch with the rest of the nation. The federal role has also evolved, with HUD, the Department of Veterans Affairs (VA), and state housing finance agencies becoming significant players in administering housing programs.

Homeownership in the United States is at an all-time high, with 68 percent of the nation’s households owning their own home. In rural areas, the homeownership rate is even higher—76 percent. However, according to the Housing Assistance Council, affordability is the biggest problem facing low-income rural households. Rural housing costs have increased and income has not kept pace, especially for rural renters, who generally have lower incomes than owners. As a result, rural renters are more likely to have affordability problems and are twice as likely as rural owners to live in substandard housing.

Although the physical condition of rural housing has greatly improved over time, it still lags somewhat behind that of urban housing. The most severe rural housing quality problems are found farthest from the nation’s major cities and are concentrated in four areas in particular: the Mississippi Delta, Appalachia, the Colonias on the Mexican border, and Indian trust land. Minorities in these areas are among the poorest and worst housed groups in the nation, with disproportionately high levels of inadequate housing conditions. Migrant farm workers in particular have difficulty finding affordable, livable housing. The higher incidence of housing quality problems, particularly in these four areas, offsets many of the advantages of homeownership, including the ability to use homes as investments or as collateral for credit.

USDA’s Farmers Home Administration managed rural housing programs and farm credit programs until reorganization legislation split these functions in 1994. Farm credit programs were then shifted to the new Farm Service Agency. Housing programs were moved to the newly created RHS in the new Rural Development mission area, which was tasked with helping improve the economies of rural communities. RHS currently employs about 5,500 staff to administer its single-family, multifamily, and community facilities programs. RHS’s homeownership programs provide heavily subsidized direct loans to households with very low and low incomes, guaranteed loans to households with low and moderate incomes, and grants and direct loans to low-income rural residents for housing repairs.
Multifamily programs provide direct and guaranteed loans to developers and nonprofit organizations for new rental housing that is affordable to low- and moderate-income tenants; grants and loans to public and nonprofit agencies and to individual farmers to build affordable rental housing for farm workers; housing preservation grants to local governments, nonprofit organizations, and Native American tribes; and rental assistance subsidies that are attached to about half the rental units that RHS has financed. In addition, RHS administers community facilities programs that provide direct and guaranteed loans and grants to help finance rural community centers, health care centers, child care facilities, and other public structures and services.

For fiscal year 2003, RHS received an appropriation of $1.6 billion. Of this amount, the largest share, $721 million, is for its rental assistance program. Congress also authorized about $4.2 billion for making or guaranteeing loans, primarily for guaranteeing single-family loans. RHS oversees an outstanding single-family and multifamily direct loan portfolio of about $28 billion. Table 1 lists RHS’s programs, briefly describes them, and compares spending on them in fiscal year 1999 with spending in fiscal years 1979 and 1994. The table also shows that, although RHS’s single- and multifamily guaranteed programs are relatively new, by 1999 RHS had guaranteed more single- and multifamily loans than it had made directly.

While RHS administers its programs in rural areas, HUD, VA, and state housing finance agencies provide similar programs nationwide, including assistance to households that may be eligible for RHS programs in rural areas. For example, RHS’s single-family loan guarantee program serves moderate-income homebuyers, as does the Federal Housing Administration’s (FHA) much larger single-family insurance program. VA and most state housing finance agencies also offer single-family loan programs. In the multifamily area, HUD’s multifamily portfolio is similar to RHS’s, and HUD’s project-based section 8 program operations parallel RHS’s rental assistance program. Further, in contrast to RHS, HUD has more established systems for assessing the quality of its multifamily portfolio, through its Real Estate Assessment Center (REAC), and for restructuring financing and rental assistance for individual properties, through its Office of Multifamily Housing Assistance Restructuring (OMHAR).

Given the diminished distinctions between rural and urban areas today, improvements in rural housing quality and access to credit, and RHS’s increasing reliance on guaranteed lending and public/private partnerships, our September 2000 report found that the federal role in rural housing is at a crossroads. We listed arguments for and against fundamentally changing the programs’ targeting, subsidy levels, and delivery systems, as well as merging RHS’s programs with HUD’s or other agencies’ comparable programs.
A number of arguments have been presented for continuing RHS’s housing programs separately from those of HUD and other agencies, or for maintaining a separate system for delivering these programs, including the following: some rural residents need the close supervision offered by RHS local offices because they do not have access to modern telecommunications or other means of obtaining information on affordable housing opportunities; rural borrowers often need a local service office familiar with their situation in the first year of a loan; rural areas could lose their federal voice in housing matters; rural areas could lose the benefits of the lower rates and terms that RHS’s direct and guaranteed loan programs currently offer; and HUD and other potential partners have not focused on rural areas.

Proponents of merging RHS’s housing programs with other housing programs, or of not maintaining a separate delivery system for rural areas, make a different set of arguments: RHS’s field role has changed from primarily originating and servicing direct loans to leveraging deals with partner organizations; in some states, local banks, nonprofit organizations, social workers, and other local organizations are doing much of the front-line work with rural households that was previously done by RHS staff; the thousands of RHS staff with local contacts could provide a field presence for HUD and other public partners, applying their leveraging and partnering skills to all communities; and RHS and HUD could combine management functions for their multifamily portfolios that are now provided under separate systems.

We also noted that without some prodding, the agencies are not likely to examine the benefits and costs of merging as an option. As a first step toward achieving greater efficiency, we suggested that the Congress consider requiring RHS and HUD to explore the potential benefits of merging similar programs, such as the single-family insured lending programs and the multifamily portfolio management programs, taking advantage of the best practices of each and ensuring that targeted populations are not adversely affected.

Since we issued our report in September 2000, it appears that RHS and FHA have shared some mutually beneficial practices. First, RHS’s single-family guaranteed program plans to introduce automated underwriting capabilities through technology that FHA has already developed and agreed to share with RHS. Second, RHS, FHA, and VA have collaborated in developing common reporting standards for tracking minority and first-time homeownership statistics. Third, we understand that there have been discussions between RHS and HUD staff on developing a model to restructure RHS section 515 mortgages using techniques that HUD has learned through restructuring similar HUD section 236 mortgages.

Our September 2000 report also identified a number of actions, suggested by RHS officials and others, that could increase the efficiency of existing rural housing programs, whether or not they are merged. I will limit my discussion today to two issues that deal with RHS’s field structure. The first issue involves state delivery systems. When state Rural Development offices were given the authority to develop their own program delivery systems as part of the 1994 reorganization, some states did not change, believing that they needed to maintain a county-based structure with a fixed local presence to deliver one-on-one services to potential homeowners.
Other states tried innovative, less costly approaches to delivering services, such as consolidating local offices to form district offices and using traveling loan originators for single-family programs. However, RHS has undergone a major shift in mission during the past few years. RHS is still a lending agency like its predecessor, the Farmers Home Administration, but it now emphasizes community development and uses its federal funding for rural communities to leverage more resources to develop housing, community centers, schools, fire stations, health care centers, child care facilities, and other community service buildings. Some state Rural Development officials we spoke with questioned the efficiency and cost-effectiveness of maintaining a county-based field structure in a streamlined environment where leveraging, rather than one-on-one lending, has become the focus of the work. For example, the shift in emphasis from direct to guaranteed single-family lending moved RHS from a labor-intensive loan generation process to one that relies on private lenders to underwrite loans.

When we performed our audit work in 2000, we found that Mississippi, which maintains a county-based Rural Development field structure, had more staff and field offices than any other state but had the next-to-lowest productivity, as measured by dollar program activity per staff member. Ohio, however, which ranked fifth in overall productivity, operated at less than one-fifth of Mississippi’s cost per staff member. We recognize that it is more difficult to underwrite single-family loans in the Mississippi Delta and other economically depressed areas than in rural areas generally, and Mississippi does have a substantial multifamily portfolio. Nevertheless, the number of field staff in Mississippi far exceeded that in most other states. Ohio, whose loan originators were based in four offices and traveled across the state with laptop computers, ranked seventh in the dollar value of single-family guaranteed loans made and fifth in the dollar amount per staff member of direct loans made. Ohio had also done a good job of serving all of its counties, while Mississippi had experienced a drop in business in the counties where it had closed local offices. Ohio’s travel and equipment costs, however, had increased with the use of traveling loan originators.

The Maine Rural Development office had also fundamentally changed its operational structure, moving from 28 offices before the reorganization to 15 afterwards, and in 2000 it operated out of three district offices. The state director at the time, who had also headed the Farmers Home Administration state office in the 1970s, said that he had headed the agency under both models and believed the centralized system to be much more effective. He added that under the new structure, staff could no longer sit in the office waiting for clients to come to them but had to go to the clients. He also maintained that a centralized structure was better suited to building the partnerships with real estate agents, banks, and other financial institutions that had become the core element of RHS’s work.

The second issue involves the location of field offices. Consistent with its 1994 reorganization legislation, USDA closed or consolidated hundreds of county offices and established “USDA service centers” with staff representing farm services, conservation, and rural development programs.
However, the primary goal of the task team that designed the service centers was to place all the county-based agencies together, particularly those that dealt directly with farmers and ranchers, to reduce personnel and overhead expenses by sharing resources. While the farm finance functions from the old Farmers Home Administration fit well into the new county-based Farm Service Agency, the housing finance functions that moved to the new state Rural Development offices were never a natural fit in the centers. The decision to collocate Rural Development and Farm Service offices was based on the fact that Rural Development had a similar county-based field structure and the Department needed to fill space in the new service centers. Collocating Rural Development offices with Farm Service offices designed to serve farmers and ranchers makes less sense today, especially in states where Rural Development operations have been centralized.

How to deal with the long-term needs of an aging portfolio is the overriding issue for section 515 properties. In the program’s early years, it was expected that the original loans would be refinanced before major rehabilitation was needed. However, with prepayment and funding restricted, this original expectation has not been realized, and RHS does not know the full cost of the long-term rehabilitation needs of the properties it has financed. RHS field staff perform annual and triennial property inspections that identify only current deficiencies, rather than the long-term rehabilitation needs of the individual properties. As a result, RHS does not know whether reserve accounts will cover long-term rehabilitation needs. Without a mechanism to prioritize the portfolio’s rehabilitation needs, including a process for ensuring the adequacy of individual property reserve accounts, RHS cannot be sure it is spending its limited rehabilitation funds as effectively as possible and cannot tell Congress how much funding it will need to cover the portfolio’s long-term rehabilitation costs.

RHS’s state personnel annually inspect the exterior condition of each property financed under the section 515 program and conduct more detailed inspections every 3 years. However, according to RHS guidelines, the inspections are intended to identify current deficiencies, such as cracks in exterior walls or plumbing problems. Our review of selected inspection documents in state offices we visited confirmed that the inspections are limited to current deficiencies. RHS headquarters and state officials confirmed that the inspection process is not designed to determine and quantify the long-term rehabilitation needs of the individual properties.

RHS has not determined to what extent properties’ reserve accounts will be adequate to meet long-term needs. According to RHS representatives, privately owned multifamily rental properties often turn over after just 7 to 12 years, and such a change in ownership usually results in rehabilitation by the new owner. However, given the limited turnover and funding constraints, RHS properties rely primarily on reserve accounts for their capital and rehabilitation needs. RHS officials are concerned that the section 515 reserve accounts often are not adequate to fund needed rehabilitation, and industry representatives share RHS’s view that the aging portfolio’s long-term needs are the program’s overriding issue. About 70 percent of the portfolio is more than 15 years old and in need of repair.
Since 1999, RHS has allocated about $55 million in rehabilitation funds annually, but owners’ requests for funds to meet safety and sanitary standards alone have totaled $130 million or more in each of the past few years. RHS headquarters has encouraged its state offices to support individual property owners interested in undertaking capital needs assessments and has amended loan agreements to increase their rental assistance payments as necessary to cover the future capital and rehabilitation needs identified in the assessments. However, with varying emphasis by RHS state offices and limited rental assistance funding targeted for rehabilitation, the assessments have proceeded on an ad hoc basis. As a result, RHS cannot be sure that it is spending these funds as cost-effectively as possible.

To better ensure that limited funds are being spent as cost-effectively as possible, we recommended that USDA undertake a comprehensive assessment of the section 515 portfolio’s long-term capital and rehabilitation needs, use the results of the assessment to set priorities for the portfolio’s immediate rehabilitation needs, and develop an estimate for Congress of the amount and types of funding required to deal with the portfolio’s long-term rehabilitation needs. USDA agreed with the recommendation and requested $2 million in the President’s 2003 budget to conduct a comprehensive study. RHS staff drafted a request for proposal that would have contracted out the study, but the Undersecretary for Rural Development chose to lead the study himself. Plans are to develop an inspection and rehabilitation protocol by February 2004 on the basis of an evaluation of a sample of properties.

Finally, I would like to mention some work we have begun on the section 521 rental assistance program. With an annual budget of over $700 million, the rental assistance program is the largest line-item appropriation to the Rural Housing Service. This property-based subsidy provides additional support to units created through the section 515 multifamily and farm labor housing programs. RHS provides the subsidy to property owners through 5-year contracts. The objectives of our current work are to review (1) how RHS estimates the current and future funding needs of its section 521 rental assistance program; (2) how RHS allocates the rental assistance; and (3) what internal controls RHS has established to monitor the administration of the rental assistance program. We anticipate releasing a report on our findings in February 2004.

Mr. Chairman, this concludes my prepared remarks. I would be pleased to answer any questions you or any other members of the Committee may have. For questions regarding this testimony, please contact William B. Shear on (202) 512-4325 or at shearw@gao.gov, or Andy Finkel on (202) 512-6765 or at finkela@gao.gov. Individuals making key contributions to this testimony included Emily Chalmers, Rafe Ellison, and Katherine Trimble.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Federal housing assistance in rural America dates back to the 1930s, when most rural residents worked on farms. Without electricity, telephone service, or good roads connecting residents to population centers, residents were comparatively isolated and their access to credit was generally poor. These conditions led Congress to authorize separate housing assistance for rural residents, to be administered by USDA. Over time, the quality of the housing stock has improved and credit has become more readily available in rural areas. Also, advances in transportation, computer technology, and telecommunications have diminished many of the distinctions between rural and urban areas. These changes call into question whether rural housing programs still need to be maintained separately from urban housing programs, and whether RHS is adapting to change and managing its resources as efficiently as possible. Our testimony is based on two reports: the September 2000 report on rural housing options and the May 2002 report on multifamily project prepayment and rehabilitation issues.

GAO found that while RHS has helped many rural Americans achieve homeownership and has improved the rural rental housing stock, it has been slow to adapt to changes in the rural housing environment. Also, RHS has failed to adopt the tools that could help it manage its housing portfolio more efficiently. Specifically, dramatic changes in the rural housing environment since rural housing programs were first created raise questions as to whether separately operated rural housing programs are still the best way to ensure the availability of decent, affordable rural housing. Overlap in the products and services offered by RHS, HUD, and other agencies has created opportunities for merging the best features of each. Even without merging RHS's programs with HUD's or those of other agencies, RHS could increase its productivity and lower its overall costs by centralizing its rural delivery structure. RHS does not have a mechanism to prioritize the long-term rehabilitation needs of its multifamily housing portfolio. As a result, RHS cannot be sure it is spending limited rehabilitation funds as effectively as possible and cannot tell Congress how much funding it will need in the future.
In August 2015, we reported that USCIS had identified fraud and national security risks in the EB-5 Program in various assessments it conducted over time and in collaboration with its interagency partners. For example, in 2012, USCIS met with interagency partners and National Security Staff to assess fraud and national security risks in the EB-5 Program. An internal memo discussing this effort also highlighted steps to enhance the program’s ability to mitigate fraud, such as through improved collaboration with the SEC and the FBI. Later in 2012, USCIS worked with the FBI and the Department of the Treasury’s Financial Crimes Enforcement Network, among others, to assess the benefits of incorporating enhanced security screenings to improve its vetting of EB-5 Program petitioners, including the need to provide dedicated fraud personnel to the EB-5 Program, according to FDNS personnel. Most recently, in early 2015, DHS’s Office of Intelligence and Analysis prepared a classified report, which updated the 2012 assessment of fraud risks to the EB-5 Program. USCIS officials said that they also identify potential fraud risks in the EB-5 Program through their day-to-day oversight work, and that law enforcement agencies such as HSI, the SEC, and the FBI may also uncover fraud through their own investigative efforts and may share the information with USCIS, as appropriate.

Although the risk assessments conducted by USCIS and other agencies have helped USCIS better understand and manage risks to the EB-5 Program, these assessments were onetime exercises, and, as we reported in August 2015, USCIS did not have documented plans to conduct regular future risk assessments of the program because, according to USCIS officials, the agency would perform them on an “as needed” basis. However, FDNS officials noted that fraud risks and schemes in the EB-5 Program were constantly evolving, and stated that the office regularly identifies new fraud schemes and must work to stay on top of emerging issues. We also reported that the EB-5 Program has grown substantially over time—the total number of EB-5 visas issued increased from almost 3,000 in fiscal year 2011 to over 9,000 in fiscal year 2014, according to State data—and this growth creates additional opportunities for fraud.

According to the risk assessments and FDNS officials, the EB-5 Program poses several risks that are generally not present in other types of immigration programs. Specifically, a senior FDNS official noted that, as is the case with other immigration benefits, EB-5 adjudications center on the eligibility of the petitioner or applicant; however, the EB-5 Program also has an investment component that creates increased program complexity and the potential for fraud risks. The fraud risks that USCIS and other agencies have identified for the EB-5 Program include those related to both investors and regional centers, such as the following.

Uncertain source of immigrant investor funds. USCIS’s 2012 risk assessment identified the source of EB-5 petitioner funds as an area at risk for fraud. As previously discussed, to be eligible for the EB-5 Program, immigrant investors must invest a minimum of $1 million—or $500,000 in a targeted employment area—in a job-creating enterprise, and investors must provide documentation showing that these funds come from a lawful source.
USCIS officials said that some petitioners may have strong incentives to report inaccurate information about the sources of their funds on their petitions, or to use fraudulent documents, in instances when the funds come from illicit—and thus ineligible—sources, such as funds obtained through the drug trade, human trafficking, or other criminal activities. USCIS and State officials noted that verifying a lawful source of funds was difficult because they did not have the authority to access and verify banking information in many foreign countries; USCIS officials said that IPO and FDNS therefore had no means of verifying self-reported financial information stated to come from these foreign banks.

Legitimacy of investment entity. The amount of investment required to participate in the EB-5 Program, coupled with the fact that investors are making an investment in order to obtain an immigration benefit (i.e., a green card), can create fraud risks tied to regional center operators and intermediaries. For example, SEC officials noted that immigrant investors may be vulnerable to fraud schemes because they may be primarily focused on obtaining their visas. As of May 2015, FDNS documentation tracking investigations by program stakeholders such as the SEC and HSI showed that over half (35 of 59) of the open investigations were primarily focused on securities fraud. Moreover, in January 2016, the SEC’s Office of Compliance Inspections and Examinations identified the EB-5 Program in its examination priorities for 2016.

Given these identified fraud risks and the constantly evolving nature of risks to the program, we recommended in our August 2015 report that USCIS plan and conduct regular fraud risk assessments of the EB-5 Program to better position it to identify, address, and mitigate emerging fraud risks. DHS concurred, stating that the EB-5 Branch of USCIS’s FDNS would continue to conduct a minimum of one fraud, national security, or intelligence assessment on an aspect of the program annually. In February 2016, USCIS officials stated that they had completed the data collection for their first review, which they estimated would be completed by September 2016. This review will focus on all identified national security concern cases initiated in the Fraud Detection and National Security Data System from fiscal years 2011 through 2015. The officials also provided draft policy documents demonstrating their intention to require a minimum of one fraud assessment annually; however, these documents had not yet been finalized. To fully address the intent of our recommendation, USCIS needs to conduct at least one review, as planned, and document plans for future assessments.

In August 2015, we reported that USCIS had taken some steps to enhance its fraud risk management efforts. These included establishing a dedicated entity to design and oversee its fraud risk management activities, creating an organizational structure conducive to fraud risk management, conducting fraud-awareness training, and establishing collaborative relationships with external stakeholders, including law enforcement agencies. In November 2013, USCIS established a fraud specialist unit for the EB-5 Program within FDNS. As of May 2015, FDNS was in the process of hiring an additional eight dedicated staff with specialized fraud expertise to enhance its EB-5 Program fraud detection capabilities and oversight, bringing the total FDNS EB-5 Program staff to 21.
According to FDNS officials, as of January 2016, the FDNS EB-5 Division included 22 full-time equivalent staff, of which 18 positions were occupied. We further reported in August 2015 that in 2013 USCIS also colocated staff who screen and adjudicate EB-5 petitions within IPO and began having FDNS officers and intelligence professionals work alongside EB-5 Program adjudicators to facilitate fraud-related information sharing. FDNS established training opportunities, including specialized fraud training at the Federal Law Enforcement Training Center related to money laundering and an internal “EB-5 University” that provides staff with monthly presentations on fraud-related topics believed to be immediately relevant to EB-5 Program adjudication. According to SEC, ICE, FBI, and USCIS officials, USCIS also increased its level of coordination with law enforcement agencies to cross-train staff with additional expertise and to increase communication and collaboration on investigations and enforcement actions that can be taken when potential fraud, criminal activity, or national security threats are detected in the EB-5 Program.

However, in our August 2015 report we also found that USCIS faced significant challenges in its efforts to detect and mitigate fraud risks. Specifically, we found that USCIS’s information systems and processes limit its ability to collect and use data on the EB-5 Program to identify fraud related to individual investors or investments, or to determine fraud risk trends across the program. USCIS relies heavily on paper-based documentation. While USCIS personnel are to enter certain information from these paper documents into various electronic databases, these databases have limitations that reduce their usefulness for conducting fraud-mitigating activities. For example, information that could be useful in identifying program participants linked to potential fraud, such as the applicant’s name, address, and date of birth on the Form I-924 used to apply for regional center participation in the EB-5 Program, is not required to be entered into USCIS’s database. USCIS officials stated that the agency will be able to collect and maintain more complete data on EB-5 Program petitioners and applicants through the deployment of electronic forms in its new system, the Electronic Immigration System (ELIS). However, USCIS has faced long-standing challenges in implementing ELIS, which, as we reported in May 2015, was nearly 4 years delayed and $1 billion over budget.

As we reported in August 2015, USCIS has taken alternative steps to gather information to mitigate fraud risk while improvements to its information systems are delayed, such as expanding its site visits program to include random checks of the operation of EB-5 Program projects. However, opportunities remain to expand information collection through interviews with immigrant investors and expanded EB-5 Program petition and application forms. USCIS is statutorily required to conduct interviews of immigrant investors within 90 days after they submit the Form I-829 petition to remove conditions on their permanent residency; however, USCIS also has the statutory authority to waive such interviews. As of April 2015, USCIS officials stated that IPO had not conducted an interview at the I-829 stage.
We reported that conducting interviews at this stage to gather additional corroborating or contextual information could help establish whether an immigrant investor is a victim of or complicit in fraud—a concern shared by both ICE HSI and SEC officials, who noted that gathering additional information and context about individual investors could help to inform investigative work. USCIS officials said they anticipated conducting these interviews in the near future but had not developed plans or a strategy for doing so, primarily because IPO was relatively new and began adjudicating I-829 petitions in September 2014.

In August 2015, we also reported that USCIS does not collect certain applicant information that could help mitigate fraud. Specifically, USCIS does not require information on the Form I-924 about the businesses supported by the regional center and the program investments the regional center coordinates, such as the names of principals or key officers associated with the underlying businesses, or information on advisers to investors, such as foreign brokers, marketers, attorneys, and other advisers receiving fees from investors. According to USCIS officials, at the time of our August 2015 report, USCIS was drafting revised Forms I-924 and I-924A that would seek to address many of these concerns. However, as these revisions have not been completed, it is too early to tell the extent to which they will position USCIS to collect additional applicant information.

Given that information system improvements with the potential to expand USCIS’s fraud mitigation efforts will not take effect until 2017 at the earliest and that gaps exist in USCIS’s other information collection efforts, we concluded that collecting additional information would better position USCIS to identify and mitigate potential fraud. We recommended that USCIS develop a strategy to expand information collection, including considering the increased use of interviews at the I-829 phase as well as requiring additional reporting of information on applicant and petitioner forms. DHS concurred and, as of February 2016, officials reported that USCIS continues to take steps to develop and implement a strategy to expand information collection, including revisions to the Forms I-924, I-924A, I-526, and I-829 to capture more information. In addition, these officials stated that USCIS had not yet conducted an interview at the I-829 stage but was finalizing an interview process and planned to begin conducting interviews in the third quarter of fiscal year 2016.

We reported in August 2015 that USCIS had taken action to increase its capacity to verify job creation, in response to past GAO and DHS OIG reports that found that USCIS did not have staff with the expertise to verify job creation estimates and that the agency’s methodologies for verifying such estimates were not rigorous. In particular, in December 2013, the DHS OIG reported that USCIS lacked meaningful economic expertise to conduct independent and thorough reviews of the economic models used by investors to estimate indirect job creation for regional center projects, and it recommended that USCIS coordinate with other federal agencies to provide expertise in the adjudication process. USCIS took action over time to increase the size and expertise of its workforce, provide clarifying guidance and training, and revise its process for assigning applications for adjudication.
For example, in fiscal year 2013, USCIS increased its staffing from 9 adjudicators to 58, including 22 economists, and issued a policy memorandum clarifying existing guidance to help ensure consistency in the adjudication of petitions and to provide greater transparency for the EB-5 Program stakeholder community, according to IPO officials. In addition, USCIS improved its training curriculum to better ensure consistency and compliance with applicable statutes, regulations, and agency policy, including a 2014 update of the new-employee EB-5 training program and the establishment of ongoing training focused on recurring issues and novel petition cases. Further, as we reported in August 2015, USCIS provided its economists with access to data from the Regional Input-Output Modeling System (RIMS II) economic model in fiscal year 2013, which increased their capacity to verify job creation estimates reported by investors for investments in regional center projects. IPO program managers estimated that as of fiscal year 2015, about 95 percent of EB-5 Program participants used economic models to estimate job creation, with about 90 percent of those investors using RIMS II. The RIMS II model is widely used across the public and private sectors and is considered to be among the models valid for verifying estimates of indirect and induced jobs reported for investments in regional center projects, according to USCIS and Department of Commerce (Commerce) economists, as well as industry and academic experts. Indirect jobs are jobs that are not directly created by a regional center business but that result from increased employment in other businesses that supply goods and services to the regional center business; induced jobs are those created by workers’ spending of their increased earnings on consumer goods and services.

However, we also reported in August 2015 that the use of RIMS II data alone does not give USCIS the capacity to determine the location of the jobs created, such as the number of jobs created in the targeted employment areas that most immigrant investors use to qualify for a lower investment amount. USCIS’s May 2013 policy memorandum notes that Congress expressly provided for a reduced investment amount in rural areas and areas of high unemployment in order to spur immigrants to invest in new commercial enterprises that are principally doing business in—and creating jobs in—areas of greatest need. IPO program managers stated that approximately 90 percent of immigrant investors qualify for a reduced investment amount—$500,000 instead of $1 million—because they claim investment in a commercial enterprise that will create employment in a targeted employment area. The remaining 10 percent of immigrant investors pay twice that amount to participate in projects that are not limited to these locations. The IPO Economics Division Chief said that USCIS has not identified a need to verify the creation of jobs in a targeted employment area because the law permits regional center investors to use input-output models that do not have this capacity, and because program regulations and policy require that the investment capital be made available to the job-creating entity, which must be principally doing business in the targeted employment area.
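To make the mechanics of these input-output estimates concrete, the following is a minimal sketch of a RIMS II-style job calculation, not USCIS's actual review procedure. RIMS II final-demand employment multipliers express total jobs (direct, indirect, and induced) supported per $1 million of final demand delivered to a regional industry; the multiplier value, project spending, and investor count below are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a RIMS II-style job-creation estimate of the kind
# submitted with EB-5 regional center petitions. The multiplier and the
# project figures are hypothetical, not actual BEA or USCIS data.

def estimate_total_jobs(final_demand_dollars: float, multiplier: float) -> float:
    """A RIMS II final-demand employment multiplier gives total jobs
    (direct + indirect + induced) per $1 million of final demand."""
    return (final_demand_dollars / 1_000_000) * multiplier

# Hypothetical project: $875 million in total qualifying spending
# (EB-5 capital plus other project funds) in a regional industry with
# an assumed employment multiplier of 12.0 jobs per $1 million.
total_spending = 875_000_000
jobs = estimate_total_jobs(total_spending, multiplier=12.0)  # 10,500.0

# Each of a hypothetical 450 immigrant investors must be credited with
# at least 10 of these jobs to meet the program requirement.
investors = 450
print(f"Estimated total jobs: {jobs:,.0f}")
print(f"Jobs per investor: {jobs / investors:.1f}")  # 23.3, above the 10-job minimum
```

Note that a model of this kind estimates how many jobs an investment supports, not where those jobs are located, which is why, as discussed above, it cannot by itself verify job creation inside a targeted employment area.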
IPO economists we interviewed also said that given the relative ease of proving job creation through economic modeling, compared with the documentation requirements for proving the creation of direct jobs, immigrant investors generally claim indirect jobs, rather than direct jobs, to qualify for the program.

In August 2015, we reported that USCIS’s methodology for reporting EB-5 Program outcomes and economic benefits was not valid and reliable because it is based on the minimum program requirements for job creation and investment. To estimate job creation, USCIS multiplies the number of immigrant investors who have successfully completed the program (those with an approved Form I-829) by 10, the minimum job creation requirement per investor. To estimate overall investment in the economy, the agency multiplies the number of immigrant investors approved to participate in the program (those with an approved Form I-526) by $500,000, the minimum investment amount, assuming all investments were made for projects in a targeted employment area. On this basis, USCIS reported that from the program’s inception in fiscal year 1990 through fiscal year 2014, the EB-5 Program created a minimum of 73,730 jobs and more than $11.2 billion in investments.

Our review and past GAO and DHS OIG audits of the program have pointed out the limitations of this methodology for reporting reliable program outcomes: the data can be understated or overstated in certain circumstances. For example, USCIS officials stated that 90 percent of immigrant investors reported creating more than the 10-job minimum, and 10 percent of immigrant investors pay $1 million instead of $500,000 because they invest in projects outside of a targeted employment area. Estimating economic outcomes using the minimum program requirements in these circumstances leads to an underestimate of the program’s benefits. For example, we reviewed one project with about 450 immigrant investors that created over 10,500 jobs, or about 23 jobs per investor, while USCIS counted only the 10-job minimum per investor, a total difference of 6,000 jobs. Additionally, according to DHS’s 2013 Immigration Statistics Yearbook, about 32 investors paid $1 million instead of $500,000 into the program in fiscal year 2013, a total difference of $16 million not counted by USCIS.

Conversely, USCIS’s methodology may overstate some economic benefits derived from the EB-5 Program. For example, the methodology assumes that all investors approved for the program will invest the required amount of funds and that these funds will be fully spent on the project. According to IPO officials and our analysis of EB-5 Program data, far fewer investors successfully complete the program than are approved for participation, and the actual amount invested and spent in these circumstances is unknown. For example, our analysis showed that approximately 26 percent of all EB-5 investors who entered the program from its inception through fiscal year 2011 may not have completed the process to show funds spent and jobs created with an approved I-829 as of the end of fiscal year 2014.

As we reported in August 2015, USCIS collects more complete information on EB-5 Program forms but does not track or analyze this information to report program outcomes more accurately. Specifically, immigrant investors are required to report (and USCIS staff are to verify) the amount of their initial investment on the Form I-526 and the number of new jobs created by their investment on the Form I-829.
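As a concrete illustration of the arithmetic above, the following sketch contrasts the minimum-based estimates with totals built from what investors actually report. The figures are those cited in this discussion; the function names are illustrative.

```python
# Sketch of USCIS's minimum-based EB-5 outcome estimates, contrasted
# with the figures cited above. Function names are illustrative.

JOB_MINIMUM = 10              # minimum jobs required per investor
INVESTMENT_MINIMUM = 500_000  # assumes a targeted employment area

def minimum_job_estimate(approved_i829_investors: int) -> int:
    # USCIS multiplies investors who completed the program by 10.
    return approved_i829_investors * JOB_MINIMUM

def minimum_investment_estimate(approved_i526_investors: int) -> int:
    # USCIS multiplies approved investors by the $500,000 minimum.
    return approved_i526_investors * INVESTMENT_MINIMUM

# Understatement: one project with about 450 investors reported more
# than 10,500 jobs, but the methodology credits only the minimum.
reported_jobs = 10_500
counted_jobs = minimum_job_estimate(450)                      # 4,500
print(f"Jobs not counted: {reported_jobs - counted_jobs:,}")  # 6,000

# Understatement: about 32 investors in fiscal year 2013 paid
# $1 million rather than the $500,000 the methodology assumes.
uncounted_investment = 32 * (1_000_000 - INVESTMENT_MINIMUM)
print(f"Investment not counted: ${uncounted_investment:,}")   # $16,000,000

# Overstatement: the investment estimate assumes every approved
# investor completes the program, yet roughly 26 percent of early
# investors may not have done so.
```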
However, USCIS officials said that they reported EB-5 Program outcomes using minimum program requirements because these are the required economic benefits stated in law, and that they are not statutorily required to develop a more comprehensive assessment of program benefits. We concluded that tracking and using the more comprehensive information USCIS collects and verifies on project investments and job creation, on the Forms I-526 and I-829 submitted by immigrant investors, would enable the agency to report more reliably on EB-5 Program outcomes and economic benefits. We therefore recommended in our August 2015 report that USCIS track and report the data that immigrant investors report, and the agency verifies, on its program forms for total investments and jobs created through the EB-5 Program. DHS concurred and, as of February 2016, officials anticipated developing a data system in fiscal year 2017 that will enable USCIS to track and report the data immigrant investors report.

We reported in August 2015 that USCIS had commissioned the Economics and Statistics Administration (ESA) of the Department of Commerce to conduct a study of the economic impact of the EB-5 Program. USCIS undertook this action in response to a December 2013 DHS OIG recommendation that USCIS conduct a comprehensive review of the EB-5 Program to demonstrate how investor funds have stimulated the U.S. economy. As of June 2015, USCIS and ESA had not yet finalized the methodology for the new study; however, ESA and USCIS approved a statement of work in November 2014 that outlined a preliminary methodology and study steps that would address some, but not all, shortcomings of prior studies of the overall EB-5 Program benefits. We reported that ESA officials planned to finalize the study methodology once they completed a review of the program data submitted by IPO, and to issue a final report in November 2015.

However, the study was not intended to address the program’s costs, which are important for assessing a program’s net economic impact. Both USCIS and ESA officials confirmed that the study would be an economic valuation, which, unlike an evaluation, considers only the benefits of economic activity and does not assess program costs. USCIS officials said the decision was made not to assess program costs because of the associated challenges and because the information might not justify the investment. Our review of the draft methodology, however, showed some potential for including cost information. Specifically, ESA officials said that after consulting with USCIS officials, they planned to collect information related to the permanent residence of the immigrant investors and their dependents to estimate the value of household spending. IPO officials said that ESA may also collect information that could help to estimate or disclose some of the costs associated with the program.

To help provide Congress and other stakeholders with more comprehensive information on the overall economic benefits of the program, we recommended in our August 2015 report that USCIS include a discussion of the types of relevant program costs excluded from the Commerce study and the reasons for their exclusion. DHS concurred and said that USCIS IPO would recommend to Commerce that a description of potential costs not assessed as part of the study be included when the study is published. In February 2016, USCIS officials stated that the study had not yet been published and estimated it would be completed by May 2016.
Chairman Goodlatte, Ranking Member Conyers, and members of the committee, this completes my prepared statement. I would be happy to respond to any questions you or members of the committee may have. For questions about this statement, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Seto Bagdoyan, Director; Cindy Ayers; Krista Mantsch; Taylor Matheson; Jan Montgomery; Jon Najmi; Edith Sohna; and Nick Weeks. Other contributors to the report on which this statement is based are listed in the report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress created the EB-5 visa category to promote job creation by immigrant investors in exchange for visas providing lawful permanent residency. Participants are required to invest $1 million in a business that is to create at least 10 jobs—or $500,000 if the business is located in an area that is rural or has experienced unemployment of at least 150 percent of the national average rate. Upon meeting program requirements, immigrant investors are eligible for conditional status to live and work in the United States and can apply to remove the conditions for lawful permanent residency after 2 years.

This statement discusses USCIS efforts under the EB-5 Program to (1) work with interagency partners to assess fraud and other related risks and address identified fraud risks, and (2) increase its capacity to verify job creation and use a valid and reliable methodology to report economic benefits. This statement is based on a report GAO issued in August 2015 (GAO-15-696), with selected updates conducted in February 2016 to obtain information from DHS on actions it has taken to address the report's recommendations.

In August 2015, GAO reported that the Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS), which administers the Employment-Based Fifth Preference Immigrant Investor Program (EB-5 Program), had collaborated with its interagency partners to assess fraud and national security risks in the program in fiscal years 2012 and 2015. These assessments were onetime efforts; however, USCIS officials noted that fraud risks in the EB-5 Program are constantly evolving, and they continually identify new fraud schemes. USCIS did not have documented plans to conduct regular future risk assessments, which could help inform efforts to identify and address evolving program risks. GAO recommended that USCIS plan and conduct regular future fraud risk assessments. DHS agreed, and as of February 2016, USCIS officials stated that they planned to complete an additional risk assessment by September 2016 and a minimum of one annually thereafter.

GAO also reported in August 2015 that USCIS had taken steps to address the fraud risks it identified by enhancing its fraud risk management efforts; however, USCIS's information systems and processes limited its ability to collect and use data on EB-5 Program participants to address fraud risks in the program. For example, USCIS did not consistently enter into its information systems some information it collected on participants, such as name and date of birth, and this presented barriers to conducting the basic electronic searches whose results could be analyzed for potential fraud. USCIS planned to collect and maintain more complete data in its new information system; however, the information system improvements with the potential to expand USCIS's fraud mitigation efforts were not to take effect until 2017 at the earliest. Given this time frame and gaps in USCIS's other information collection efforts, GAO recommended that USCIS develop a strategy to expand information collection in order to better position the agency to identify and mitigate potential fraud. DHS concurred, and in February 2016 USCIS officials stated that USCIS plans to develop such a strategy by the end of fiscal year 2016.

In August 2015, GAO reported that USCIS had increased its capacity to verify job creation by increasing the size and expertise of its workforce, among other actions.
However, USCIS's methodology for reporting program outcomes and overall economic benefits was not valid and reliable because it can understate or overstate program benefits in certain instances: it is based on the minimum program requirements of 10 jobs and a $500,000 investment per investor rather than on the number of jobs and investment amounts collected by USCIS on individual EB-5 Program forms. For example, total investment amounts are not adjusted downward to account for investors who do not complete the program or upward for investments of $1 million instead of $500,000. USCIS officials said they are not statutorily required to develop a more comprehensive assessment. However, tracking and analyzing data on jobs and investments reported on program forms would better position USCIS to more reliably assess and report on the EB-5 Program's economic benefits. Accordingly, GAO recommended that USCIS track and report the total investments and jobs created through the EB-5 Program, using the data that investors report and the agency verifies on its program forms. DHS agreed and plans to implement this recommendation by the end of fiscal year 2017. In its August 2015 report, GAO recommended that USCIS, among other things, conduct regular future risk assessments, develop a strategy to expand information collection, and analyze data collected on program forms to reliably report on economic benefits. DHS concurred with the recommendations and reported actions underway to address them.
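To make the methodological concern concrete, the following sketch (in Python, with invented investor figures; GAO's report does not publish the underlying data) contrasts a minimums-based estimate with one built from the amounts actually reported and verified on program forms.

    # Hypothetical illustration of how a minimums-based methodology can
    # misstate EB-5 benefits. All figures below are invented.
    investors = [
        # (verified_investment, verified_jobs, completed_program)
        (500_000, 12, True),
        (1_000_000, 10, True),   # counted as only $500,000 under minimums
        (500_000, 10, False),    # non-completer still counted under minimums
    ]

    # Minimums-based estimate: every investor assumed to invest $500,000
    # and create exactly 10 jobs.
    min_investment = 500_000 * len(investors)
    min_jobs = 10 * len(investors)

    # Form-based estimate: use verified amounts for completed cases only.
    form_investment = sum(amt for amt, _, done in investors if done)
    form_jobs = sum(jobs for _, jobs, done in investors if done)

    print(min_investment, min_jobs)      # 1500000 30
    print(form_investment, form_jobs)    # 1500000 22

In this toy case the two investment errors happen to offset while the job count is overstated, which is exactly the kind of unpredictable bias the report describes.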
The BEA program’s goals are to encourage banks to increase their investments in CDFIs and lending and other financial services in distressed communities. Unlike grant programs, which are usually prospective—meaning they award applicants based on their plans for the future—the BEA program is retrospective, awarding applicants for activities they have already completed. Under the program’s authorizing statute, BEA award recipients are not limited in how they may use their award and, therefore, may use their award proceeds in any manner they deem fit. To encourage increased investment and lending, the BEA program awards applicants on the basis of their increased activities from one year (known as the baseline year) to the next (the assessment year). For example, for the fiscal year 2005 round of awards, calendar year 2003 was the baseline year and calendar year 2004 was the assessment year. When applying for awards, applicants may submit an application for any of the following three award categories: (1) CDFI-related activities, (2) distressed community financing activities, and (3) service activities. CDFI-related activities are primarily investments in CDFIs, such as equity investments (including grants and equitylike loans), loans, and insured deposits. Distressed community financing activities are primarily loans, such as affordable housing loans, small-business loans, commercial real estate loans, and education loans. Service activities include the provision of financial services such as check-cashing or money order services, electronic transfer accounts, and individual development accounts. Pursuant to statutory and regulatory requirements, BEA awards are percentage matches of an applicant’s reported increase in activities; that is, a bank qualifies for a BEA award equal to the sum of the percentage matches it earns across the three program areas. For equity investments in CDFIs, the percentage match for both community development banks and traditional banks is the same—15 percent (see table 1). However, community development banks are eligible to receive awards three times higher than traditional banks for increasing CDFI support activities (e.g., increasing insured deposits in other CDFIs) or increasing their lending and service delivery in distressed communities. For distressed community financing activities, a priority factor of 3.0 or 2.0 is assigned to each type of eligible loan a BEA applicant originates—for example, a small-business loan is assigned 3.0 and an affordable housing development loan is assigned 2.0. The change in award-eligible activity (i.e., the increase in lending from the baseline to the assessment year) is multiplied by the applicable priority factor, and the result (or weighted value) is then multiplied by the applicable award percentage, yielding the award amount for that particular activity. To illustrate how the BEA program works, suppose a community development bank had no investments in other CDFIs and no loans in eligible distressed communities during the baseline year. During the assessment year, the bank makes $300,000 in insured deposits in three community development credit unions (three insured certificates of deposit of $100,000 each) and originates $500,000 in small-business loans and $1 million in affordable housing development loans in distressed communities (total increased investments and loans of $1.8 million).
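The award arithmetic in this example can be reproduced with a short calculation. The Python sketch below uses the 3.0 and 2.0 priority factors given above; the 18/6 percent match rates for CDFI support activities and the 9/3 percent rates for distressed community financing are inferred from the award totals reported in the next paragraph rather than quoted from table 1, so treat them as assumptions.

    def bea_award(deposits, small_business, housing, cdfi_rate, lending_rate):
        # Insured deposits in other CDFIs are matched at the CDFI support rate.
        cdfi_component = deposits * cdfi_rate
        # Each loan type is weighted by its priority factor, then matched.
        weighted_loans = small_business * 3.0 + housing * 2.0
        return cdfi_component + weighted_loans * lending_rate

    increase = 300_000 + 500_000 + 1_000_000   # $1.8 million in new activity
    cd_bank = bea_award(300_000, 500_000, 1_000_000, 0.18, 0.09)
    traditional = bea_award(300_000, 500_000, 1_000_000, 0.06, 0.03)
    print(cd_bank, round(cd_bank / increase, 3))           # 369000.0 0.205
    print(traditional, round(traditional / increase, 3))   # 123000.0 0.068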
Under this example, the bank would be eligible for a BEA award totaling $369,000 (a 20.5 percent return on investment). Under the same scenario, a traditional bank would be eligible for a BEA award of $123,000 (or a return on investment of 6.8 percent). According to Treasury officials, the BEA program is seasonal and employs the equivalent of about six staff annually, who work on the program on an as-needed basis. A program manager oversees the BEA program on a day-to-day basis. During the program’s peak application season, Treasury reassigns roughly 10 staff members from other job responsibilities to review BEA applications over a period of approximately 10 business days. During fiscal year 2005, it cost approximately $1.2 million to administer the BEA program. These costs are composed of personnel compensation, information technology, and administrative contracting services, among other costs. The Community Reinvestment Act (CRA) requires federal bank regulators to assess how well the banks they regulate meet the credit needs of all areas of the community they serve, including low- and moderate-income areas (insofar as is consistent with safe and sound operations) and to take this performance into account when considering a bank’s request for regulatory approval of a regulated action, such as opening a new branch or acquiring or merging with another bank. Federal regulators conduct examinations for compliance with CRA requirements on a frequency that varies depending on an institution’s size and prior rating. When conducting examinations, regulators check to see whether a bank’s CRA compliance activities are an ongoing part of the bank’s business and generally apply three tests to make this determination: A lending test evaluates the number, amount, and income and geographic distribution of a bank’s mortgage, small business, small farm, and consumer loans. An investment test evaluates a bank’s community development investments, including its investments in CDFIs. A service test evaluates a bank’s retail service delivery operations, such as branches and low-cost checking services. Upon completing examinations, regulators assign one of four ratings to a bank: outstanding, satisfactory, needs improvement, or substantial noncompliance. Treasury officials and some BEA award recipients we interviewed said that the BEA program provides banks with incentives to increase their investments in CDFIs and lending in distressed communities. However, determining the program’s impact is difficult because other economic and regulatory incentives also encourage banks to undertake award-eligible activities. Although it is difficult to determine the BEA program’s impact, the available evidence we reviewed suggests that the program’s impact has likely not been significant. For example, for large banks, a BEA award (when compared with total bank assets) is small and likely not large enough to have much influence on such banks’ overall investment and lending decisions. Other evidence also indicates that the BEA program’s impact has likely not been significant. In particular, until 2003, BEA awards may have provided certain community development banks with incentives to benefit financially from activities that were inconsistent with BEA program goals, and available studies indicate that certain CDFIs have been able to raise an increased amount of capital from banks, while BEA program funding and participation have declined.
According to Treasury officials and some award recipients, the BEA program allows award recipients to increase their lending and investment levels beyond those that would occur without the program. Award recipients we interviewed stated that one of the program’s main benefits is reduced transaction costs. Transaction costs are primarily the time and expense associated with researching markets or borrower qualifications and underwriting loans within distressed communities. Award recipients stated that transaction costs are higher in distressed communities than in other communities because, for example, loans are typically smaller (thus generating less interest income) and have a higher risk of default. Because BEA awards are in cash, award recipients said that award proceeds can be used to provide more loans, on more favorable terms, than are otherwise possible. Award recipients said that such an arrangement benefits both BEA award recipients and loan borrowers. Another benefit that award recipients cited is the formation of partnerships between banks and other financial institutions, including CDFIs. When investing in a CDFI—the activity awarded with the highest payout—applicants identify and select a CDFI in which to invest, such as a community development bank, credit union, loan fund, or venture capital fund. According to officials from banks and CDFIs, the resulting investment in the CDFI produces two benefits. First, the investment increases the CDFI’s capacity by providing it with capital, often at below-market rates, which in turn allows the CDFI to provide more loans in distressed communities. Second, according to one CDFI official we interviewed, the partnership allows traditional banks to learn about and understand the work of CDFIs. For example, the CDFI official we interviewed noted that the partnership formed through the BEA program allowed officials from a traditional bank to sit on the CDFI’s board of directors, which exposed the traditional bank officials to the products and services of the CDFI. When the BEA program was initially established, Treasury intended it to encourage traditional banks to become involved in community development banking activities by, for example, investing in a CDFI or lending in a distressed community. A third benefit of the BEA program, according to some award recipients we interviewed, is the provision of capital needed to help the community development banking industry grow and develop during its early years and sustain its level of operations today. An official representing the community development banking industry noted that there were only three Treasury-certified community development banks in the mid-1990s when the BEA program began, but today there are over 50 such banks, growth the official attributes to the BEA program. Some award recipients we interviewed also stated that award proceeds have allowed them to sustain their current level of operations within distressed communities, where, as previously noted, transaction costs are higher than in other areas. Accordingly, the BEA program is said to help community development banks remain true to their core missions of serving the financing and developmental needs of their community. Independently evaluating and isolating the BEA program’s impact on bank investment and lending decisions is difficult because other economic and regulatory incentives also affect bank behavior.
In 1998, we reported that the prospect of receiving a BEA award, while one factor, was not always the primary reason banks undertook award-eligible activities. In 2000, the Federal Reserve Board completed a survey providing additional evidence that loan profitability can be an important factor in banks’ community development lending decisions. This survey, which focused on the performance and profitability of CRA-related lending, found that a majority of respondents’ community development loans were profitable. The survey also found that a majority of respondents’ CRA special lending programs, which target low-income borrowers and areas, were profitable. Because community development loans can be profitable, as noted in the Federal Reserve Board’s survey, banks have economic incentives to make these loans even without the incentive of potentially receiving a BEA award. In addition to economic incentives, regulatory incentives can also encourage banks to undertake award-eligible activities. In our 1998 report, we found that compliance with CRA was a major reason banks made investments in CDFIs and loans in distressed communities. CRA incentives may be particularly strong for banks that plan to open a new branch or merge with other banks because federal regulators may consider inadequate compliance when reviewing banks’ requests to merge with other banks or expand their operations. However, Treasury officials said that the BEA program provides banks with more targeted incentives than CRA requirements do. For example, the officials said that the BEA program provides banks with incentives to provide financial services in the most distressed communities—communities that banks are not required to service in their efforts to comply with CRA. To obtain feedback on the BEA program’s design and implementation, Treasury has conducted surveys of BEA program applicants. Treasury’s most recent survey, conducted in 2002, suggests that both the BEA program and CRA requirements are responsible for banks’ increased investments in CDFIs and lending in distressed communities. For example, the 2002 survey of 115 program applicants found that both the prospect of a BEA award and credit for CRA compliance motivated banks to undertake many CDFI-related activities, including providing CDFIs with loans, grants, and technical assistance, and found that the BEA program contributed to the development of new financial products. The survey also found that, in many cases, neither the BEA program nor credit for CRA compliance motivated banks to lend in distressed communities. Rather, the banks reported making loans in distressed communities because such lending is part of their community development mission or part of their everyday business activities. Although it is difficult to determine the BEA program’s impact, the available evidence we reviewed suggests that the program’s impact has likely not been significant for large traditional banks, although it may allow for incremental increases in award-eligible activities. The available evidence also suggests that the BEA program may have provided some community development banks with incentives to benefit financially without furthering program goals. Further, available studies we reviewed indicate that some CDFIs have raised an increased amount of capital from banks while BEA program funding and participation have declined.
Specifically, we found the following: For large traditional banks, as noted in our 1998 report, BEA awards are likely not large enough to provide a meaningful financial incentive. As shown in table 2, the size of a BEA award when compared with the assets of large traditional banks (those with over $1 billion in assets) was .0004 percent of assets in 2005. For these banks, the prospect of receiving a BEA award, independent of any economic and regulatory incentives the banks may have, is unlikely to serve as a significant financial incentive for increased CDFI investment or distressed community lending. However, BEA awards may provide large traditional banks with the capacity to incrementally increase their award-eligible activities, offset some of the cost associated with doing so, and increase the profits of related lines of business. Large traditional banks may also derive public and community relations value from receiving a BEA award that outweighs its financial benefit. Until 2003, many BEA program participants engaged in a now-prohibited practice called deposit swapping that improved their financial condition without necessarily furthering program goals. According to a Treasury official, beginning around 1998, a group of about 30 community development banks began to purchase insured certificates of deposit in one another—that is, swap deposits—to increase their CDFI investments and thereby receive BEA awards. At the time, Treasury provided a 33 percent award match for community development banks that increased their deposits in other community development banks. Following the 2003 prohibition, the percentage of total BEA dollars awarded for CDFI investments fell substantially—from 87 percent of all BEA dollars awarded in 2002 to only 18 percent in 2003 (by contrast, total BEA dollars awarded for increased lending and services in distressed communities increased from 13 percent in 2002 to 82 percent in 2003). According to a Treasury official, the prohibition on deposit swapping was, in fact, the primary reason for the substantial decline in CDFI investments. This decline suggests that, until 2003, banks may have been responding to financial incentives that were inconsistent with the BEA program’s goals, which include increasing lending within distressed communities. Community development loan funds have raised an increased amount of capital from banks, thrifts, and credit unions, while BEA program funding and bank participation in the program have declined. According to data from a consortium of CDFIs, community development loan funds—the most numerous type of CDFI and thus the largest group of potential BEA program beneficiaries—have continued raising capital from banks, thrifts, and credit unions concurrent with a decline in funding and bank participation in the BEA program. According to the consortium’s data, the percentage of capital loan funds raised from banks, thrifts, and credit unions increased from 47 percent in fiscal year 2003 to 56 percent in fiscal year 2004. As discussed previously, BEA program funding also declined substantially in recent years from over $46 million in fiscal year 2000 to about $10 million in fiscal year 2005. We note that one limitation of the consortium’s data for purposes of this analysis is that it includes credit unions, which are ineligible for BEA awards. However, an official involved with completing the studies said that loan funds raised most of the capital from banks and thrifts, which are eligible for BEA awards.
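The appeal of the deposit-swapping practice described above is easy to see in miniature. In the sketch below (Python; the $100,000 deposit size is hypothetical, and the 33 percent rate is the pre-2003 match rate cited above), two community development banks each buy an insured certificate of deposit in the other, and each collects an award even though no net new capital reaches a distressed community.

    swap = 100_000     # hypothetical insured CD each bank places in the other
    match = 0.33       # pre-2003 match rate for deposits in other CD banks
    award_per_bank = swap * match
    net_new_capital = swap - swap    # the deposits merely cross each other
    print(award_per_bank, net_new_capital)   # 33000.0 0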
According to the CDFI consortium, financial institutions are a growing source of capital for loan funds because loan funds provide a safe investment, allow banks to earn CRA credit, and are flexible partners. Treasury’s performance measures for the BEA program likely overstate its impact on bank investments in CDFIs and lending in distressed communities. In addition, we identified weaknesses in Treasury’s system of internal control for ensuring proper award payments. Specifically, we found that Treasury has limited controls in place to help ensure that bank applicants finance properties located in eligible distressed communities. We found that Treasury also provides limited guidance to its application review staff to identify potential errors in the reporting of a financed property’s location and does not require the reviewers to completely document their work. To assess the BEA program’s performance, Treasury publicly reports bank applicants’ total reported increase in CDFI investments and distressed community lending. To establish targets for this measure, Treasury assumes a complete, causal linkage between the BEA program and applicants’ increases in award-eligible activities. For example, in 2005, Treasury attributed a reported $100 million increase in award-eligible activities to BEA awards of approximately $10 million distributed that year. In reporting results for this measure, Treasury does not account for other factors that also affect bank lending and investment decisions, such as loan profitability and CRA compliance. By not accounting for such factors, Treasury’s performance measure likely overstates the BEA program’s impact. As a result, Treasury lacks accurate information needed to assess program accomplishments and make changes to ensure that the BEA program is meeting its goals. GAO’s standards for effective performance measures state that measures should be objective—that is, they should be reasonably free of any significant bias or manipulation that would distort an accurate assessment of performance. Treasury internally tracks other BEA program data, but these data also likely overstate the program’s impact. For example, as part of a BEA application, Treasury requests that applicants provide such data as the number of full-time equivalent jobs created or maintained and the number of housing units developed or rehabilitated in distressed communities. Treasury uses this information to monitor and measure the BEA program’s impact. Similar to its externally reported measure, Treasury assumes a direct one-to-one correlation between these outcomes (new jobs and housing units) and the BEA program. Treasury does not account for external factors, such as economic and regulatory incentives that could also contribute to an increase in jobs created or housing units developed. Further, these data are self-reported and, according to Treasury, not verified. Therefore, they could be subject to the type of bias and manipulation that would distort an accurate assessment of performance. We acknowledge that developing performance measures for the BEA program is challenging. As stated in our 1998 report, to an extent that neither we nor Treasury can quantify, banks are receiving awards for investments and loans they would have made without the prospect of receiving a BEA award. The available evidence discussed in this report (e.g., the relatively small size of BEA awards for large banks) further supports this analysis. 
While it may have been advisable for Treasury to attribute less influence to the BEA program when developing its performance measures, it is not clear that a reliable and appropriate methodology exists to accurately measure the BEA program’s impact on bank behavior. According to a Treasury official, one of the most significant risks the BEA program faces is that applicants may provide inaccurate information regarding the location of properties financed by their activities. That is, the potential exists for banks to receive BEA awards based on loans that finance properties, such as commercial or affordable housing development loans, that were not located in eligible distressed communities. While Treasury has established controls to mitigate this risk, these controls are not fully consistent with federal internal control standards, which state that policies and procedures, including appropriate documentation, should be designed to help ensure that management’s directives, such as verification procedures, are carried out and that appropriate supervisory oversight of established processes is exercised. Without sufficient controls to help ensure that properties are located in eligible distressed communities, the BEA program is vulnerable to making improper payments. According to a Treasury official, application review staff are to perform the following procedures to ensure that properties are located in eligible distressed communities: Use an online Treasury system, for all loans of $500,000 or more, to verify that borrower addresses or, in some cases, properties secured by the loans (collateral) are located in eligible census tracts (generally referred to as loan geocoding). Geocode a sample of loans valued at $250,000 to $500,000 to verify that borrower or collateral addresses are located in eligible census tracts. Treasury officials said that BEA program application review staff have identified properties that were not located in eligible distressed communities. For example, a Treasury official said that, in one case, the address of the borrower (a developer), which was located in an eligible distressed community, was given as a basis for the bank to receive a BEA award. However, the official said that the address of the property under development was not in an eligible distressed community. The official said that she was familiar with the area where the property was located and knew that it did not meet eligibility requirements, which prompted her to do follow-up analysis. According to the official, Treasury staff disallowed this particular loan as a basis for the bank to receive a BEA award. While a Treasury official said that the department has established controls to mitigate errors in the reporting of property locations, we identified limitations with the guidance that Treasury provides to its application review staff. For example, Treasury’s guidance states that for loans of $500,000 or above and for a sample of loans from $250,000 to $500,000, staff should geocode the borrower’s address. However, for development loans where the address of the borrower (such as a developer) may differ from the address of the property under development, the guidance does not specifically require staff to geocode the property address. A Treasury official confirmed that the department has not provided specific guidance to reviewers on geocoding property addresses in such instances.
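A minimal version of the verification control described above might look like the sketch below (Python; the record layout, tract identifiers, eligible-tract set, and geocode helper are hypothetical stand-ins for Treasury's online geocoding system, not its actual interface). The point it illustrates is the one Treasury's guidance omits: for development loans, the financed property's address, where one exists, should be geocoded, not just the borrower's.

    import random

    ELIGIBLE_TRACTS = {"48201-3101", "48201-3102"}   # hypothetical eligible tracts

    def geocode(address):
        # Stand-in for Treasury's online system, which maps an address
        # to its census tract.
        return address["tract"]

    def loans_to_review(loans, sample_rate=0.25):
        # Review all loans of $500,000 or more, plus a sample of loans
        # from $250,000 to $500,000, per the stated procedures.
        large = [ln for ln in loans if ln["amount"] >= 500_000]
        mid = [ln for ln in loans if 250_000 <= ln["amount"] < 500_000]
        k = min(len(mid), max(1, round(len(mid) * sample_rate)))
        return large + random.sample(mid, k)

    def flag_ineligible(loans):
        flagged = []
        for ln in loans_to_review(loans):
            # Geocode the financed property where one exists; fall back
            # to the borrower address otherwise.
            address = ln.get("property") or ln["borrower"]
            if geocode(address) not in ELIGIBLE_TRACTS:
                flagged.append(ln)
        return flagged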
As noted previously, Treasury staff have identified at least one example in which the location of the borrower was in a distressed community but the location of the property was not, although this identification was largely because of the reviewer’s familiarity with the area where the property was located. By not specifying in the guidance that reviewers should geocode property addresses where appropriate, the potential exists that banks will receive BEA awards based on erroneous information. We reviewed two banks’ BEA applications for the fiscal year 2004 and 2005 rounds of BEA awards (a total of four applications) to conduct a limited test of Treasury’s implementation of procedures for verifying certain application data. Each bank in our review received the maximum $500,000 award in the 2005 funding round. The files we reviewed did not contain any documentation of the staff’s geocoding of property location data (for loans subject to the $250,000 and $500,000 geocoding thresholds). A Treasury official we interviewed agreed that the files did not contain any documentation of the staff’s geocoding effort. Further, our review of Treasury’s BEA application guidance found that the guidance does not establish specific documentation requirements for the program staff’s geocoding efforts. Without such guidance and documentation requirements, Treasury management and supervisors, as well as outside reviewers, cannot be assured that the geocoding is being conducted or that errors in the reporting of property location are detected. To assess the potential for improper BEA award payments, we used Treasury’s online geocoding system to determine the locations of properties contained in the 2004 and 2005 applications for the two banks. We identified 1 commercial and 5 affordable housing development loans among these applications, out of a total of 18 such loans with a value of $250,000 or more, where we had questions as to whether properties financed by the loans were located in eligible distressed communities. For example, we identified an affordable housing development loan of approximately $423,500 that was made to purchase an apartment building. Our geocoding analysis determined that the address of the property was not in an eligible distressed community, whereas the address of the borrower was in a distressed community that could qualify under certain circumstances. In this case, according to a Treasury official, the reviewer probably geocoded the address of the borrower rather than the address of the property. The Treasury official also suggested that the address of the property may have been in an eligible distressed community at the time the application was made in 2004. However, our analysis of census data indicates that the relevant census tract was not an eligible distressed community in 2004. Consequently, Treasury’s decision to provide a BEA award to this bank may have been based in part on erroneous information. Because of other economic and regulatory incentives that also affect bank behavior, it remains difficult to isolate and determine the BEA program’s impact on banks’ decisions to invest in CDFIs and lend in distressed communities. Treasury’s BEA program performance measures do not provide additional insights into the program’s impact because they assume that all reported increases in eligible investment and lending occur solely because of the program’s financial incentives.
However, based on available evidence we reviewed, it is reasonable to conclude that the program likely does not provide significant financial incentives for large banks, due to the typical award’s relatively small size for such institutions. To an extent that is unquantifiable, a significant percentage of reported large bank increases in CDFI investments and distressed community loans each year would likely have occurred without the BEA program. Further, the program also appears to have provided certain community development banks with financial incentives and opportunities to benefit financially without furthering program goals. On the other hand, the BEA program may provide some banks, including large banks, with additional incentives and capacity to incrementally increase their award-eligible activities, offer public and community relations benefits to some award recipients, contribute to the development of new financial products, and help establish partnerships between banks and CDFIs. Treasury’s internal controls to ensure proper award payments are insufficient. Treasury’s guidance to its BEA application review staff does not require them to geocode property addresses, even though evidence exists that applications may contain errors in reported information. The guidance also does not establish standards for documenting verification efforts. Consequently, the BEA program is vulnerable to making improper payments. To help ensure the integrity of the BEA award payment process, we recommend that the Secretary of the Treasury revise the guidance for reviewing program applications so that program staff are required to (1) geocode property addresses where appropriate and (2) document their efforts to verify property addresses. We provided a draft of this report to the Department of the Treasury for its review and comment. Treasury provided written comments that are reprinted (with annotations) in appendix II. In its comments, Treasury agreed with our conclusion that determining the extent to which the BEA program provides banks with incentives to increase their investments in CDFIs and lending in distressed communities remains difficult given the number of external factors that drive such decisions. However, Treasury stated that our report bases many of its conclusions on information that is overly general, outdated, or developed for other purposes and, as a result, does not reflect an accurate portrayal of the BEA program or its importance within the banking industry. Treasury also said that we did not adequately consider evidence the department provided regarding the BEA program’s impact. Treasury did agree to implement our recommendation that application review staff (1) geocode property addresses, where appropriate; and (2) document their efforts to verify property addresses. Further, Treasury stated that it will adopt a policy requiring applicants to report addresses for transactions; provide program staff with updated instructions to geocode all transactions over $250,000 (not just transactions over $500,000, as is the current practice); and initiate and implement steps to analyze a statistically significant sample of transactions less than $250,000. In its comments, Treasury stated that the focus of our report was inherently flawed.
Treasury said our report did not assess, as it expected, whether the BEA program, as currently structured, is effective at motivating banks to undertake community development financing activities they would not normally undertake or, if the program were found to be ineffective, recommend changes to its structure. In fact, we did seek to assess whether the BEA program, as currently structured, is effective at motivating banks to undertake activities they would not normally undertake. However, as was the case when we initially evaluated the BEA program in 1998 and as we state in this report, because of other economic and regulatory incentives that affect bank behavior, it is difficult to isolate the BEA program’s impact from these other incentives. We note that the banking industry has not changed since 1998 in ways that would make it easier to isolate the BEA program’s impact for this review. On the contrary, isolating the BEA program’s impact may be more difficult today than in 1998 because the average BEA award amount and number of banks participating in the program have declined significantly in recent years. Although isolating the impact of the BEA program is difficult, we believe available evidence suggests that its impact has likely not been significant. Treasury also stated that our report relied on inappropriate information and data to form conclusions and that we did not consider other evidence. For example, Treasury stated that none of the studies cited in the report—including our 1998 report, a 2000 Federal Reserve survey on CRA-related lending, and two studies by a consortium of CDFIs—is an explicit evaluation of the BEA program. Treasury also stated that we undertook only a limited review of current program participants. Contrary to Treasury’s assertions, our 1998 report includes an assessment of the BEA program. Moreover, the Federal Reserve survey and reports by a consortium of CDFIs address issues that we believe are critical to independently evaluating the BEA program’s effectiveness. In particular, the Federal Reserve survey indicates that community development lending can be profitable, which suggests that a variety of factors—including economic and regulatory factors—influence bank lending decisions. That variety of factors increases the difficulty of isolating and determining the BEA program’s impact. As discussed in this report, the data from the consortium of CDFIs also provide evidence that community development loan funds have been able to raise an increased amount of capital from banks despite recent declines in BEA program funding and participation. Regarding our interviews with program participants, as we note in appendix I, we chose program participants for interviews based on a variety of characteristics—including differing bank asset sizes, frequency of program participation, status as a traditional bank or community development bank, and CDFI type—to elicit a wide range of views and perspectives on the BEA program. Further, Treasury stated that we did not adequately refer to its 2002 survey of BEA program participants in our draft report. Treasury stated that evidence from the survey clearly demonstrates that the BEA program plays a role in program applicant investment decisions. While we recognize that surveys of program beneficiaries can play an important role in program evaluations, we believe that their results must be interpreted with caution.
For example, survey respondents who are program beneficiaries have a financial incentive to overstate a program’s impact. To compensate for this limitation, we sought to obtain and analyze independent evidence, including available studies, to assess the BEA program’s impact. Even so, the findings of Treasury’s 2002 survey are consistent with the findings of our report. For example, our report states that prior to 2003, when deposit swapping was prohibited, the BEA program may have provided certain community development banks with incentives to make investments that benefited them financially but were inconsistent with program goals. In Treasury’s 2002 survey, CDFI deposits was the only category in which a majority of bank respondents (52 percent) said that the BEA program was the primary reason they made an award-eligible investment. Overall, Treasury’s 2002 survey indicates that various factors, which include, but are not limited to, the prospect of receiving a BEA award, motivate banks’ decisions to invest in CDFIs and lend in distressed communities. In fact, Treasury’s 2002 survey found that in many cases, neither the BEA program nor credit for CRA compliance motivated banks’ decisions to lend in distressed communities. Rather, as we state in our report, the survey found that respondents undertook lending activities because they were part of their community development mission or part of their everyday business activities. Additionally, Treasury said that some conclusions in the report appear to reflect a lack of understanding of the BEA program and the banking industry. Specifically, Treasury stated the following: GAO’s analysis of the size of a BEA award relative to large banks’ total assets was overly general and did not consider that many banks (in particular large banks) carry out CDFI financing within specific lines of business, such as community development business lines. Rather than comparing a large bank’s BEA award amount with its total assets, as we did, Treasury said a more appropriate and meaningful analysis would have been to compare the bank’s BEA award with the assets of a particular business line or to consider its relative importance in lowering the bank’s transaction costs. In response to this comment, we added language to the report stating that, for large traditional banks, BEA awards may provide additional capacity to incrementally increase award-eligible investments and lending, offset some of the costs associated with doing so, and increase the profits of related lines of business. In interviews for this report, officials from one large bank said BEA awards have allowed their bank to provide more loans than they would have in the program’s absence, and officials from another large bank said BEA awards have allowed their bank to provide loans on more favorable terms. However, the officials said that other factors, such as CRA compliance and loan profitability, also influence their community development lending decisions. Further, officials from both banks said their banks would continue community development lending in the BEA program’s absence, although officials from one bank said their bank would continue such lending to a lesser extent. Therefore, we continue to believe that the BEA program likely does not have a significant impact on large banks’ overall investment and lending decisions, although there may be an incremental impact.
GAO’s discussion of the now-prohibited practice of deposit swapping was based on outdated information, as Treasury moved to prohibit this practice four years ago. Treasury said it did not understand why we chose to include a discussion of deposit swapping in a report on the BEA program’s current status. In response to this comment, we note that our report sought to assess the BEA program’s impact on bank behavior over time, rather than at a single point in time. Thus, we believe that our discussion of deposit swapping, which focuses on bank behavior in response to incentives that the BEA program provided until 2003, is appropriate. We note that deposit swapping provides evidence that, until 2003, the BEA program’s impact in encouraging some banks to make productive investments and loans in distressed communities likely was not significant. We also note that funding for the BEA program, and bank participation in it, were highest prior to 2003 when Treasury prohibited deposit swapping, adding significance to the issue of deposit swapping and its connection to bank behavior. GAO’s report failed to mention other important program benefits. In support of this statement, Treasury cites its 2002 survey in which 19 percent of respondents indicated that the prospect of receiving a BEA award prompted them to launch innovative financial products, services, or educational programs to meet the needs of underserved households or communities. In response to this comment, we revised our report to reflect this survey finding. Treasury also stated that it would have been useful if our report had studied the underlying data from the consortium of CDFIs to, among other things, determine the BEA program’s impact in initiating productive relationships between banks and CDFIs. Our draft report stated that a benefit of the BEA program is that it encourages partnerships between banks and CDFIs. However, it was not possible to determine from the CDFI consortium data we reviewed whether the loan funds cited in the reports formed partnerships with banks participating in the BEA program. For example, the consortium reports did not specifically identify the loan funds and banks that were surveyed for inclusion in the reports. Therefore, based on information in the reports, we were unable to conduct the types of analyses Treasury proposes in its comments. We are sending copies of this report to the Secretary of the Treasury and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7215 or scottg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report were to (1) examine the extent to which the Bank Enterprise Award (BEA) program may have provided banks with financial incentives to increase their investments in community development financial institutions (CDFIs) and lending in distressed communities and (2) assess the BEA program’s performance measures and certain internal controls designed to ensure proper award payments.
To address our first objective, we reviewed relevant documents and data, including BEA program statutes, regulations, memorandums, guidelines, and reports; GAO’s 1998 report on the CDFI Fund and BEA program; a 2000 Federal Reserve Board study on the performance and profitability of Community Reinvestment Act-related lending; and two studies by the CDFI Data Project, which is an industry consortium that gathers and reports financial data on the CDFI industry. We also interviewed three trade associations representing various segments of the CDFI industry to obtain their views on the BEA program. Further, we interviewed a nonprobability sample of nine BEA award recipients and five CDFI beneficiaries from the fiscal year 2005 round of BEA awards. We selected these award recipients and CDFI beneficiaries for interviews based on a range of characteristics, including differing bank asset sizes, frequency of program participation, status as a traditional bank or certified community development bank, and CDFI type. Our sample selection criteria were intended to obtain a diverse pool of respondents possessing a range of views and perspectives on the BEA program. To address our second objective, we interviewed Treasury officials to obtain information on the BEA program’s measures and internal controls. We compared the program’s performance measures to GAO’s standards for effective measures, as outlined in publications we have issued in connection with the Government Performance and Results Act. We also compared the BEA program’s internal controls to GAO’s Standards for Internal Control in the Federal Government. To further assess the program’s internal controls, we reviewed application documents for two banks that each received multiple BEA awards from 2000 through 2005 and used Treasury’s online geocoding system to determine the locations of properties contained in the 2004 and 2005 applications for the two banks. We also reviewed BEA program application review guidance. We conducted our work from October 2005 through July 2006 in Washington, D.C., in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of the Treasury’s letter dated July 21, 2006. 1. Our report includes a statement by Treasury officials that the BEA program provides banks with incentives to provide financial services in the most distressed communities—communities that banks are not required to service in their efforts to comply with CRA. However, as discussed in our report, measuring the purported impact of the BEA program is difficult. 2. Census tracts that qualify for the BEA program can extend beyond those specified in Treasury’s letter. For example, census tracts with poverty rates as low as 20 percent may qualify under certain circumstances. Therefore, the BEA program may not be as targeted as Treasury claims. 3. Our report does not address this issue. However, we note that requiring BEA award recipients to use their award proceeds for additional community development activities would pose complexities. For example, it would require Treasury to develop information about current award recipients’ overall community development activities and a mechanism for monitoring recipients’ use of award dollars. 4. Our report does not comment on the BEA program’s funding relative to other related programs within Treasury. We provide information on the program’s funding for descriptive purposes only and make no assertions concerning its priority within Treasury.
In addition to the contact named above, Wesley Phillips (Assistant Director), Emilie Cassou, David Dornisch, Ronald Ito, Austin Kelly, Elizabeth Olivarez, David Pittman, Linda Rego, and James Vitarello made key contributions to this report.
Established in 1994, the Department of the Treasury's Bank Enterprise Award (BEA) program provides cash awards to banks that increase their investments in community development financial institutions (CDFI) and lending in economically distressed communities. CDFIs are specialized institutions that provide financial services to areas and populations underserved by conventional lenders and investors. In 2005, Treasury provided nearly $10 million in BEA awards. The BEA program has faced longstanding questions about its effectiveness and experienced significant declines in funding in recent years. This report (1) examines the extent to which the BEA program may have provided banks with financial incentives and (2) assesses the BEA program's performance measures and internal controls. To complete this study, GAO reviewed relevant award data; interviewed Treasury, bank, and CDFI officials; and assessed the BEA program's performance measures and internal controls against GAO's standards for effective measures and controls. The extent to which the BEA program may provide banks with incentives to increase their investments in CDFIs and lending in distressed communities is difficult to determine, but available evidence GAO reviewed suggests that the program's impact has likely not been significant. Award recipients GAO interviewed said that the BEA program lowers bank costs associated with investing in a CDFI or lending in a distressed community, allowing for increases in both types of activities. However, other economic and regulatory incentives also encourage banks to undertake award-eligible activities, and it is difficult to isolate and distinguish these incentives from those of a BEA award. For example, banks may have economic incentives to lend in distressed communities because of the potential profitability of such lending. Although it is difficult to determine the BEA program's impact, available evidence suggests that the impact likely has not been significant. For example, the size of a BEA award for large banks (which was .0004 percent of assets in 2005) suggests that a BEA award does not have much influence on such banks' overall investment and lending decisions. However, BEA awards may allow large banks to incrementally increase their award-eligible investments and lending. The BEA program's performance measures likely overstate its impact, and GAO identified weaknesses in certain program internal controls. To assess the BEA program's performance, Treasury, among other measures, annually aggregates the total reported increase in CDFI investments and distressed community loans by all applicants but does not account for other factors, such as economic and regulatory incentives that also affect bank decisions. GAO also found that Treasury has limited controls in place to help ensure that BEA program applications contain accurate information. In particular, Treasury provides limited guidance to application review staff to identify potential errors and does not require the reviewers to completely document their work. As a result, GAO found that the BEA program is vulnerable to making improper payments.
The U.S. passenger airline industry is principally composed of legacy, regional, and low-cost airlines. Legacy (sometimes called network) airlines support large, complex hub-and-spoke operations with thousands of employees and hundreds of aircraft (of various types), with flights to domestic communities of all sizes as well as to international destinations. Generally, regional airlines operate smaller aircraft than legacy airlines— turboprops or regional jets with up to 100 seats—and often operate flights marketed by a legacy airline. Low-cost airlines generally entered the marketplace after the U.S. airline industry was deregulated in 1978 and typically have a less extensive network and lower operating costs. Passengers access flights offered by these various airlines in the United States through hundreds of commercial-service airports. Primary airports are classified on the basis of passenger traffic as large, medium, small, and nonhub. Passenger traffic at these airports is highly concentrated: about 70 percent of passengers enplaned at the 29 largest airports and another 19 percent enplaned at the 36 next largest airports in 2009, the most recent year for which these data are available. Some of these largest airports also face significant congestion and delay issues. As we recently reported, seven large airports were the source of about 80 percent of departure delays captured in FAA’s Operations Network in 2009. The national airspace system in which these airlines and airports operate is a complex, interconnected, and interdependent network of systems, procedures, facilities, aircraft, airports, and people that must work together to ensure safe and efficient operations. FAA, DOT, airlines, and airports all affect the efficiency of national airspace system operations. In particular, DOT and FAA set policy and operating standards for aircraft and airports. As we previously reported, the capacity of the aviation system to meet the demand of aviation system users is both variable and subject to a number of interrelated factors. The capacity of the aviation system is affected not only by airports’ infrastructure, including runways and terminal gates, but also by weather conditions and air traffic control that can, at any given time, result in disruptions and variation in available airport and system capacity. For example, some airports have parallel runways that can be used simultaneously in good weather but are too close together for simultaneous operations in bad weather. In severe weather, airports can close, resulting in aircraft being grounded both at the closed airport and at other airports where aircraft cannot depart for the closed airport. The number of aircraft that can be safely accommodated in a given portion of airspace further affects capacity. If too many aircraft are trying to use the same airspace, some may be delayed on the ground or en route. For example, delays often occur in the New York City area because air traffic is so heavy, with three major airports located within 100 miles of each other. Airlines’ scheduling and business practices can also exacerbate airport congestion and delays. For instance, some airline business models rely on tight turnaround times between flights, which can increase the likelihood of delays for flights scheduled later in the day. Additionally, airlines sometimes schedule flights during certain periods to accommodate passenger demand without considering an airport’s available capacity. 
When flights are disrupted—whether caused by reductions in system capacity (such as during bad weather) or by internal factors (such as aircraft mechanical problems or crew shortages)—airlines make trade-offs between long delays and cancellations, though they generally try to avoid canceling flights. In doing so, they attempt to minimize disruptions to their network and passengers. For example, when bad weather reduces airport capacity and fewer flights can take off or land, airlines must decide how to ration their traffic. They can hold to their schedule, recognizing that some flights may experience long delays, or they can cancel some flights to avoid long delays for the remaining flights. How airlines manage such trade-offs depends on their business models and the circumstances of each situation. As we recently reported, flight delays and cancellations have declined since 2007, largely because airlines have scheduled fewer flights during the economic downturn. From 2007 through 2010, the portion of flights that were delayed—that is, arrived at least 15 minutes later than scheduled—or were canceled or diverted decreased by 6 percentage points, according to DOT data (see fig. 1). Indeed, cancellation rates also peaked in 2007 at 2.16 percent of all flights, before declining to 1.39 percent in 2009 and 1.76 percent in 2010. Nevertheless, as we previously reported, airports still experience and contribute substantial delays to the system. In recent decades the airline industry’s earnings have been extremely volatile. Despite some periods of strong growth and increased earnings, airlines have at times suffered such substantial financial distress that some have filed for bankruptcy. According to a recent FAA-sponsored research study, U.S. passenger airlines lost more than $60 billion from 2000 through 2008 on revenues of just more than $1 trillion. An inefficient air transportation system that contributes to flight delays and cancellations increases airline costs and reduces demand for air travel, compounding these financial challenges. Airline industry financial pressures have led airlines to change certain business practices in order to cut costs and enhance revenue. For example, airlines have adjusted their capacity to increase passenger load factors (i.e., the proportion of available seats filled with passengers). As a result, a large number of cancellations by an airline cannot be absorbed easily into later flights and, increasingly, airlines will not rebook passengers on other airlines’ flights because of the costs involved. Passengers on canceled flights can then face long overall trip delays. In addition, for decades airlines have sought to reduce the revenue losses associated with passengers who do not show up for flights by accepting reservations for more passengers than they have seats. Because the number of no-shows is not entirely predictable, there is an element of risk in overbooking flights. If too many reservations are accepted and more passengers show up at departure time than the aircraft can carry, the airline must deal with the costs and customer service issues that arise when some customers are denied boarding. On the other hand, if the airline does not accept enough reservations for the flight and the number of no-shows is greater than expected, the airline loses revenue from empty seats that could otherwise have been occupied and some passengers are denied the opportunity to book their first-choice flight even though that flight could have accommodated them.
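The overbooking trade-off sketched above lends itself to a simple expected-cost calculation. The Python sketch below uses a binomial model of no-shows with entirely hypothetical seat counts, show-up rates, and unit costs; it is a stylized illustration, not any airline's actual revenue-management method.

    from math import comb

    SEATS = 150
    P_SHOW = 0.90        # hypothetical probability a booked passenger shows up
    COST_BUMP = 800.0    # hypothetical cost of denying boarding to one passenger
    COST_EMPTY = 300.0   # hypothetical revenue lost per empty seat

    def expected_cost(bookings):
        # Sum over every possible number of passengers who show up,
        # weighting bump and empty-seat costs by binomial probability.
        total = 0.0
        for shows in range(bookings + 1):
            p = comb(bookings, shows) * P_SHOW**shows * (1 - P_SHOW)**(bookings - shows)
            bumped = max(shows - SEATS, 0)
            empty = max(SEATS - shows, 0)
            total += p * (bumped * COST_BUMP + empty * COST_EMPTY)
        return total

    best = min(range(SEATS, SEATS + 31), key=expected_cost)
    print(best)   # the cost-minimizing booking level exceeds the seat count

Under these invented numbers, the cost-minimizing strategy accepts reservations well beyond the 150 seats, which is the behavior the paragraph describes.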
DOT has long required airlines to solicit and compensate volunteers on oversold flights before anyone is bumped involuntarily and has also mandated financial compensation for passengers who are involuntarily denied boarding because their flights were oversold. Passenger complaints about delays, cancellations, and denied boardings, including complaints about being held in an aircraft for many hours while awaiting takeoff, have led Congress to consider stronger passenger protections. For instance, after hundreds of passengers were stuck in planes on snowbound Detroit runways for more than 8 hours in January 1999, both the House of Representatives and Senate conducted hearings on airlines’ treatment of air travelers and considered whether to enact a “passenger bill of rights.” The Air Transport Association and its member airlines maintained that they should have an opportunity to improve their customer service without legislation and executed an Airline Customer Service Commitment on June 17, 1999, in which each of the member airlines agreed to prepare a customer service plan. In 2000, AIR-21 mandated a review by the DOT Office of Inspector General (IG) of the extent to which each airline met all provisions of its customer service plan. In its 2001 report, the IG found that, overall, airlines were making progress toward meeting their plan provisions and that their efforts had been a plus for air travelers. However, the IG also reported “significant shortfalls in reliable and timely communication with passengers by the airlines about flight delays and cancellations.” Furthermore, the IG found that the airlines had not directly addressed the root cause of customer dissatisfaction—flight delays and cancellations—and had not indicated how they planned to remedy these problems in areas under their control. Other passenger rights bills were introduced in Congress in 2001, 2007, 2009, and 2011. These bills were also designed to establish and enhance airline passenger protections, and the 2007, 2009, and 2011 bills explicitly limited tarmac delays to 3 hours. However, the 2001, 2007, and 2009 bills were not enacted, and the 2011 bill has not yet been enacted during this Congress. In recent years, DOT has adopted rules to enhance passenger protections. First, in 2008, it amended its overbooking rule to increase the required compensation for involuntarily denied boarding, among other things. Second, in late 2009, after a lengthy rulemaking and a task force report on long tarmac delays, DOT issued its first “Enhancing Airline Passenger Protections” rule. The final rule, in effect since April 29, 2010, requires certain U.S. airlines to develop and implement a contingency plan for lengthy tarmac delays, including an assurance that, for domestic flights, the airline will not allow a tarmac delay to exceed 3 hours unless the pilot-in-command determines that there is a safety-related or security-related impediment to deplaning passengers, or that air traffic control has advised the pilot-in-command that deplaning would significantly disrupt airport operations. The airlines’ contingency plans must also include an assurance that adequate food and potable water will be provided no later than 2 hours after the aircraft leaves the gate (or touches down, in the case of an arrival), unless the pilot-in-command determines that safety or security considerations preclude such service.
Failure to comply with these rules could be considered an unfair or deceptive practice and may subject the airline to enforcement action and a fine of up to $27,500 per violation. Furthermore, under the rule, the holding out—advertising or operating—of any chronically delayed flight is considered an unfair and deceptive practice and an unfair method of competition. The rule also requires a variety of other actions on the part of airlines to protect and better inform passengers. In April 2011, DOT issued its second “Enhancing Airline Passenger Protections” rule. This rule—which partially went into effect in August 2011 and will be fully implemented in January 2012—requires airlines, among other things, to reimburse passengers for baggage fees if their bags are lost, provide consumers with greater compensation for involuntarily denied boarding, and disclose all fees for optional services. The new rule also expands the existing tarmac delay rule to cover all U.S. large-, medium-, small-, and nonhub airports as well as foreign airlines’ operations at those U.S. airports, and establishes a 4-hour time limit on tarmac delays for international flights of U.S. and foreign airlines, subject to safety, security, and air traffic control exceptions. Like the United States, Canada and the EU have laws, regulations, and guidance governing consumer protection for air travelers, including airline responsibilities to passengers when flight plans are disrupted. U.S., Canadian, and EU airlines generally must adhere to the passenger protection requirements of the region from which they are departing. Airlines in all three regions also have contracts of carriage in which they may provide for passenger care, compensation, or both in the event of a flight disruption. Thus, when provided for in law or in a contract of carriage, passengers may be entitled to assistance, compensation, or both from their airline when a flight delay, cancellation, or denied boarding occurs. For example, under certain circumstances, some airlines offer food and beverage vouchers during flight disruptions. Finally, international standards and agreements also govern the rights of airline passengers, but only on international flights. Notably, the Montreal Convention, adopted in 1999 and ratified by the United States in 2003, provides that passengers can bring legal action against an airline for damages associated with flight delays. The percentages of flights that are canceled or diverted have in recent years been higher to and from airports in rural communities than to and from airports in large metropolitan communities, according to FlightStats data. We categorized airports based on the population size of their surrounding communities to assess the extent to which flight delays, cancellations, and diversions differ by community size. Our analysis of departure cancellation and diversion trends, using FlightStats data for all reported flights, shows that, since 2005, flights from airports in rural communities (communities with fewer than 50,000 people) have on average been about 3.5 times as likely to be canceled or diverted as flights from airports in large metropolitan communities. For example, in 2010, cancellations and diversions accounted for roughly 2 percent of flights from large, midsized, and small metropolitan communities, compared with nearly 8 percent of flights from airports in rural communities (see fig. 2).
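As a rough illustration of the comparison just described, the sketch below groups flight records by the community-size category of the origin airport and computes the share of flights canceled or diverted in each group. The column names and sample records are hypothetical stand-ins for the FlightStats fields rather than the actual data.

```python
import pandas as pd

# Hypothetical flight records; the actual analysis used FlightStats data
# covering nearly all reported flights from 2005 through 2010.
flights = pd.DataFrame({
    "origin_category": ["rural", "rural", "large_metro",
                        "large_metro", "small_metro", "midsized_metro"],
    "status": ["canceled", "flown", "flown", "diverted", "flown", "flown"],
})

# Flag departures that never reached their destination as scheduled.
flights["disrupted"] = flights["status"].isin(["canceled", "diverted"])

# Percentage of departures canceled or diverted, by community-size category.
rates = flights.groupby("origin_category")["disrupted"].mean().mul(100).round(1)
print(rates)
```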
Greater cancellation rates for flights departing rural airports were matched by higher rates of cancellation for flights arriving at rural airports (see app. II for more information). Such cancellations and diversions can lead to long overall delay times for passengers. According to one academic study, the overall average delay time for passengers on canceled flights is about 5 hours. The percentage of delayed arrivals—that is, flights that arrived at least 15 minutes late at their destination—has in recent years been higher at airports in rural communities than at airports in small, midsized, and large metropolitan communities, according to FlightStats data, though the difference is not as substantial for delays as it is for canceled or diverted flights. As shown in figure 3, while delays occurred in 19.6 percent of all reported flights systemwide in 2010, delays occurred in 21.7 percent of flights to airports in rural communities, a difference of 2.1 percentage points, or about 11 percent, in the occurrence of delay. See appendix II for arrival and departure delay trends since 2005. Such delays can lead to longer overall trip times for passengers. According to academic research, the overall average delay time for passengers on a delayed flight is 37 minutes. DOT’s data on flight performance do not show the same disparities between rural and other airports that the FlightStats data show, because many flights captured by FlightStats are not required to be reported to DOT. As a result, DOT’s data provide an incomplete picture of delay, cancellation, and diversion trends. DOT requires airlines with at least 1 percent of total domestic scheduled passenger service revenue to report flight performance data for flights they operate to and from reportable airports. In 2010, 18 airlines reported data, accounting for 22 percent of all commercial airlines, 69 percent of all scheduled flights, and 85 percent of all passengers. The approximately 31 percent of flights not in DOT’s data are scheduled flights operated by airlines that are not required to report. Some of these flights are operated by regional airlines for legacy airlines, and in general the airlines not required to report to DOT are small and tend to provide much of the service to airports in small metropolitan and rural communities. Therefore, DOT’s data do not provide a complete picture of flight performance, especially at airports in smaller communities. For example, according to DOT’s data for 2010, delays, cancellations, and diversions occurred in 19.6 percent of flights to airports in rural communities and 20.2 percent of flights to airports in large metropolitan communities. However, FlightStats’ more extensive data show a bigger difference by community size, with 27.3 percent of flights to airports in rural communities delayed, canceled, or diverted in 2010, compared with 21.6 percent of flights to airports in large metropolitan communities (see fig. 4). Our analysis of FlightStats’ and DOT’s delay and cancellation data suggests that airlines not required to report flight performance information to DOT have higher delay, cancellation, and diversion rates than airlines that are required to report. As figure 4 shows, delay, cancellation, and diversion rates are higher, regardless of community size, when using FlightStats data rather than DOT data. FlightStats data include a greater percentage of all flights than DOT’s data (98 versus 77 percent), and data trends are similar for similar flights within each data set.
Therefore, airlines not required to report to DOT likely account for greater rates of delays, cancellations, and diversions. According to FlightStats data, in 2010, airlines that were required to report to DOT had lower delay, cancellation, and diversion rates on average than the 20 largest airlines not required to report to DOT. This information corroborates what we were told by various stakeholders, including airline officials and aviation researchers. According to stakeholders we spoke with, these differences may exist for multiple reasons (see fig. 5). For example, airlines operating from smaller airports may have limitations that affect their on-time performance, such as their use of smaller aircraft, which can face greater restrictions during certain weather events. As the DOT Office of Inspector General has reported, airports in rural communities may have higher delay and cancellation rates because the airlines serving them may have more limited resources, such as spare aircraft and crew, at those airports than at metropolitan airports. Furthermore, when FAA institutes traffic management initiatives to meter air traffic to and from airports, airlines must choose which of their flights to delay or cancel. According to previous academic research and aviation stakeholders we spoke with, airlines usually prioritize flights by revenue, number of passengers, aircraft size, route distance, competition, and flight frequency. In cases where marketing airlines control operational decisions for their regional partners, the marketing airlines may disproportionately delay or cancel flights operated by their smaller, regional partners because those flights tend to be operated with smaller aircraft, carry fewer passengers, and serve shorter routes with less competition from other airlines. Our analysis of two legacy airlines shows that their regional partners generally have worse on-time performance. According to FlightStats data, in 2010, two large legacy airlines canceled 1.96 percent and 1.51 percent of their own flights, compared with 2.46 percent and 2.43 percent of the flights regional airlines operated for them. While cancellations of flights to smaller communities may inconvenience a relatively small number of passengers, they may result in long trip delays if those communities have infrequent service. See appendix II for more information on sources of delay and cancellations. DOT has historically not collected flight performance information from smaller airlines because of the burden it perceived reporting would place on these airlines. Without this information, though, DOT cannot provide consumers with a complete picture of flight performance, particularly at airports in smaller communities or for smaller airlines. More comprehensive data would provide consumers with better information on airlines’ performance. Two ways to enhance DOT’s data would be to require reporting by airlines that account for a smaller percentage of total domestic scheduled passenger service revenue or by airlines that operate flights for other airlines. According to DOT officials, they have considered reducing the reporting threshold from 1 percent of domestic scheduled revenue to 0.5 percent to increase the percentage of flights captured. This change, they estimate, would require an additional 12 airlines to report to DOT and increase coverage from about 85 percent to more than 96 percent of all passengers.
In its December 2009 passenger protections rule, DOT required airlines that report on-time performance data to DOT to include on-time performance information on their Web sites for all flights for which their sites have schedule information. In doing so, it rejected the concern that airline publication of data from smaller code-sharing airlines on their Web sites would be overly burdensome, and it noted that flight performance information was necessary for consumers to make informed decisions when selecting flights. Tarmac delays of more than 3 hours peaked in 2007, three years before the tarmac delay rule was implemented, and have declined since. The decline prior to the imposition of the rule is likely the result of a combination of factors, including fewer flights since 2007, runway and other improvements at some airports, and voluntary limits adopted by some airlines on how long their flights can wait on the tarmac. Tarmac delays of more than 3 hours, which occur while a plane is taxiing out from or in to an airport gate (“taxi-out” or “taxi-in”), have historically been relatively uncommon, accounting for less than 0.1 percent of all reported flights, according to our analysis of DOT data (see fig. 6). The vast majority, about 97 percent, of tarmac delays of more than 3 hours occur during taxi-out (departure), rather than during taxi-in (arrival). The majority of all tarmac delays of more than 3 hours (180 minutes) since 2004 have been 4 hours (240 minutes) or less (see fig. 7). Specifically, of the 6,740 tarmac delays of more than 3 hours reported from January 2004 through September 2010, almost 83 percent (or 5,579) were for 4 hours or less. However, given the length of some of these delays and the inconvenience or even hardship they sometimes create for passengers, tarmac delays have received widespread media attention (see app. IV for examples of tarmac delays of more than 3 hours since October 2008, when DOT began collecting more data on such delays). Tarmac delays of more than 3 hours are generally clustered around certain weather events, during specific times of the year or day, and at specific airports. For example, tarmac delays of more than 3 hours most often occur during summer thunderstorms or winter storms, when airport departures are halted. According to our analysis of DOT data from January 2004 through September 2010, almost two-thirds of all tarmac delays of more than 3 hours occurred from May through September. These tarmac delays also tend to be clustered on a small number of days. According to our analysis of DOT data, almost 74 percent of tarmac delays of more than 3 hours from January 2004 through September 2010 occurred on about 7 percent of the days during this time period. For example, on July 23, 2008, 113 flights across the national airspace system were delayed more than 3 hours on the tarmac during taxi-out. Tarmac delays also tend to occur in the late afternoon, when summer thunderstorms are most likely and after delays from the morning and early afternoon have compounded. For example, since 2004, about half of all tarmac delays of more than 3 hours occurred between 3:00 p.m. and 6:00 p.m. local time. Tarmac delays are also most prevalent at airports that have high rates of delays. For example, about 55 percent of tarmac delays of more than 3 hours since 2004 occurred at just seven particularly congested airports. See appendix III for more details on these trends.
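The clustering statistic above (roughly 74 percent of long tarmac delays falling on about 7 percent of days) can be reproduced by ranking days by their count of 3-hour-plus tarmac delays and accumulating from the worst day down. A minimal sketch with hypothetical records follows; the actual computation used DOT’s ASQP data.

```python
import pandas as pd

# Hypothetical records of flights with tarmac delays over 3 hours.
delays = pd.DataFrame({"flight_date": pd.to_datetime(
    ["2008-07-23"] * 5 + ["2008-07-24"] * 2 + ["2009-01-10", "2010-06-01"])})

# Count long delays per day, worst days first.
per_day = delays.groupby("flight_date").size().sort_values(ascending=False)
cum_share = per_day.cumsum() / per_day.sum()

# Number of worst days needed to account for 74 percent of all long delays.
days_needed = int((cum_share < 0.74).sum()) + 1
total_days = 2465  # approximate days from January 2004 through September 2010
print(f"{days_needed} worst days ({days_needed / total_days:.1%} of all days) "
      f"cover 74% of tarmac delays over 3 hours")
```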
DOT instituted new rules in 2010 in response to instances of passengers being subjected to lengthy tarmac delays, among other consumer-related problems. Since these rules took effect in April 2010, tarmac delays greater than 4 hours have been eliminated, and tarmac delays of more than 3 hours nearly eliminated, reducing the hardship of long on-board delays for some passengers. As mentioned earlier, these new rules require, among other things, that covered airlines’ contingency plans provide for adequate food and water on all flights once a flight has been on the tarmac for 2 hours, except when safety or security preclude such services. Additionally, for domestic flights, the rule requires that covered airlines not allow flights to remain on the tarmac for more than 3 hours, with exceptions for safety, security, and disruption of airport operations. Violation of these rules can result in a $27,500 per-violation fine. Since the rule went into effect in late April 2010, tarmac delays of more than 3 hours (180 minutes) have been nearly eliminated (see fig. 8). In the first 12 months after the rule went into effect, airlines reported tarmac delays of more than 3 hours for 20 flights, compared with 693 over the same period prior to the rule. Airline consumer groups we spoke with strongly support the tarmac delay rule instituted by DOT. A small number of flights have sat on the tarmac for more than 3 hours since the rule went into effect, including four that resulted in violations for which airlines were warned. In the first 12 months after the implementation of the rule, DOT identified 20 incidents in which flights were delayed on the tarmac more than 3 hours; it determined that 11 of these did not violate the tarmac rule and that 4 were violations that resulted in a warning to the airline, while 5 are still under investigation (see app. IV for a list of these flights). Twelve of these 20 flights were canceled, and none sat on the tarmac for more than 4 hours, according to DOT data. DOT has not defined, in the regulation or elsewhere, what constitutes a violation of the rule that warrants a fine, though DOT enforcement officials told us that when determining whether to assess a fine, as well as how much to assess, they consider, among other things, the nature of the violation, the harm caused to passengers, whether the delay was preventable, and the size and financial condition of the airline. According to these officials, airlines are operating under the assumption that a fine could be assessed at $27,500 per passenger because DOT’s current authority allows for penalties of up to $27,500 “per violation,” a phrase that is not defined in statute or regulation. Overall, the number of flight cancellations has increased since the tarmac delay rule was implemented, according to DOT data, though these cancellations cannot be directly attributed to the rule. Our analysis of cancellation trends examined flights during the last two summers (May through September of 2009 and 2010) because they represent equivalent periods of time before (2009) and after (2010) the implementation of the rule. Furthermore, as noted previously, the summer historically accounts for the majority of tarmac delays. While the number of scheduled flights was similar in these time periods, total cancellations increased by 5,068 (see table 1). Total cancellations as a percentage of all flights increased from 1 percent in 2009 to 1.2 percent in 2010, a 20 percent increase in the rate of cancellations.
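Because the number of scheduled flights was similar across the two summers, the comparison above can be stated either as an absolute change in percentage points or as a relative change in the rate itself. A brief worked computation using the figures reported above:

```python
# Cancellation rates for May-September before (2009) and after (2010) the rule.
rate_2009 = 0.010   # 1.0 percent of scheduled flights canceled
rate_2010 = 0.012   # 1.2 percent of scheduled flights canceled

absolute_change = rate_2010 - rate_2009                 # percentage-point change
relative_change = (rate_2010 - rate_2009) / rate_2009   # change in the rate itself

print(f"absolute change: {absolute_change:.1%} points")  # 0.2% points
print(f"relative change: {relative_change:.0%}")         # 20% higher rate
```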
Cancellation rates also increased for the subset of flights that left the gate and then sat on the tarmac. For example, the percentage of flights that were canceled after sitting on the tarmac for between 2 and 3 hours (121 to 180 minutes) increased from 6.19 percent in 2009 to 17.34 percent in 2010. Because of the challenges of rebooking passengers, such cancellations can lead to long overall delay times. According to DOT, although cancellations have increased since the tarmac delay rule was implemented, few, if any, additional cancellations can be attributed to the introduction of the tarmac delay rule. DOT’s analysis is limited, though, because it includes only a portion of all flights, considers the total number of cancellations rather than the rate of cancellation, and does not control for other factors that can affect cancellations. In a March 2011 analysis of flight cancellations from 2009 and 2010, DOT found that, for the period from May through October, the number of flights canceled after sitting on the tarmac for 2 hours or more increased by six flights from 2009 to 2010. However, as indicated in table 1, the number of flights that remained on the tarmac for more than 2 hours (121 or more minutes) declined by more than half—2,804 to 1,266—from 2009 to 2010. As a result, the rate of cancellation increased from 2009 to 2010. DOT also did not control for other factors, such as weather, that can affect an airline’s decision to cancel a flight. When such factors are not controlled for, the portion of any observed change in cancellations attributable to the rule, and any associated costs, cannot be estimated. A complete consideration of the costs and benefits of the tarmac delay rule cannot be conducted without, at a minimum, controlling for these factors. Such a consideration is important because, according to the Office of Management and Budget, a fundamental indicator of a publicly acceptable rule is that its public benefits exceed its public costs. Airline and other aviation industry stakeholders we spoke with maintained that the tarmac delay rule has changed how airlines balance the trade-off between the extent to which flights are delayed and canceled, and that this change has made flight cancellations more likely. In particular, these officials told us airlines are more often taking actions to avoid potential DOT fines, including returning flights to the gate after taxi-out; because of crew hour limits, limited gate availability, or the severity of the underlying cause of delay, some of these flights may then be canceled. Furthermore, when flights are delayed on the tarmac, airline officials told us they now decide sooner than they did in the past whether to taxi back in to the gate. A majority of the U.S. airline officials we spoke with said that, once a flight is delayed on the tarmac, communications between airline officials and air traffic control officials on how to handle the delay, such as whether to wait or return to the gate, now start after about an hour (see fig. 9). According to airline officials we spoke with, uncertain taxi times for takeoff and the potential for million-dollar fines have made early decision making necessary because it may take a significant amount of time for a flight to return to the gate, if necessary. Additionally, within 2 hours, airlines must provide food and water. Airline officials also told us that when flights have been on the tarmac for 2 hours, the pilots begin executing a plan for either takeoff or a return to the gate within the hour.
According to one airline official, this plan must then be carried out unless the crew is told by air traffic control that takeoff is imminent. Officials from one airline told us that their decision to return to the gate is sometimes put into action before the flight has been on the tarmac for 2 hours. As a result, airlines are returning more flights to the gate prior to takeoff. Our analysis of DOT data found that the number of flights returning to the gate after waiting on the tarmac for at least an hour increased by almost 9 percent between May through September 2009 and the same period in 2010, although it is not possible to definitively attribute these changes solely to the tarmac delay rule. In addition to stating that the tarmac delay rule is altering their decision making during a tarmac delay, airlines maintain that the rule has increased the likelihood that they will cancel a flight before it ever leaves the gate. For instance, airline officials told us that they are precanceling more flights prior to the scheduled departure time when long tarmac delays are possible, such as during severe weather, than they did in the past. According to an official from one airline, its precancellations have increased by 10 percent since late April 2010, when the rule went into effect. When canceling a flight before passengers have boarded the plane, airlines have more control over where they position crew and aircraft to resume normal operations the following day. According to one major airline, precanceling also benefits flight crews and airport employees because it gives airlines, airports, and passengers greater flexibility in rescheduling flights, work, and personal activities. Because a variety of factors in addition to the tarmac delay rule may be correlated with airlines’ cancellation decisions, we developed logistic regression models that control for several factors likely to be associated with these decisions in order to estimate the likely effect of the tarmac delay rule. We used two models to analyze cancellations. In the first (the tarmac-cancellation model), we assessed the likelihood of cancellation for all flights that taxi out from the gate. In the second (the gate-cancellation model), we assessed the likelihood of cancellation for flights before they leave the gate. Our analysis examined flights during the last two summers (May through September 2009 and 2010) because DOT began collecting the more extensive data on tarmac delays necessary for this analysis in October 2008 and, historically, the majority of tarmac delays occur in the summer. Both models control for several factors that are likely to influence airlines’ decisions about whether to cancel flights, including weather at the origin and destination airports, airline characteristics, and specific details of individual flights. Nevertheless, other factors related to cancellations may not have been fully controlled for. Additionally, because we used a variable indicating the year as a proxy for the implementation of the rule in late April 2010, other general changes across these two years in how airlines decide whether to cancel a flight may not be fully reflected in our model. See appendix V for a detailed discussion of the model structure, a full list of independent variables, and our full results.
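A minimal sketch of the kind of logistic regression used for these models appears below, fit with statsmodels on simulated data. The variable names, the simulated records, and the simplified covariate set are our own illustrations; the actual model structure, covariates, and data construction are described in appendix V.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated flight-level records standing in for the May-September 2009 and
# 2010 DOT data; the covariates here are a simplified, hypothetical subset.
rng = np.random.default_rng(0)
n = 5000
flights = pd.DataFrame({
    "post_rule": rng.integers(0, 2, n),         # 1 = summer 2010 (after the rule)
    "tarmac_minutes": rng.integers(0, 240, n),  # minutes on tarmac after leaving gate
    "bad_weather": rng.integers(0, 2, n),       # 1 = adverse weather at origin
})
# Simulate cancellations with higher odds after the rule and for longer waits.
logit_p = (-4 + 0.3 * flights["post_rule"]
           + 0.01 * flights["tarmac_minutes"] + 0.8 * flights["bad_weather"])
flights["canceled"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("canceled ~ post_rule + tarmac_minutes + bad_weather",
                  data=flights).fit(disp=False)

# Exponentiated coefficients are odds ratios: a post_rule odds ratio above 1
# indicates higher cancellation odds after the rule, other factors held fixed.
print(np.exp(model.params))
```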
Results from the tarmac-cancellation model suggest that the implementation of the tarmac delay rule is associated with a greater likelihood of cancellation for flights that taxi out onto the tarmac. Furthermore, our results suggest that this greater likelihood of cancellation increases with the time a plane stays on the tarmac. As shown in table 2, we grouped flights into hour-long intervals, and for each group the likelihood of cancellation has increased since the rule went into effect. This correlation of the rule’s implementation with increased cancellations appears consistent with what airlines have told us has happened. Results from the gate-cancellation model also indicate that the tarmac delay rule is associated with a higher rate of flight cancellation. In particular, when the model controlled for other factors that may be associated with an airline’s decision to cancel a flight, the likelihood of a gate cancellation was 24 percent higher during May through September 2010 than it was for the same months in 2009 (see table 2). The gate-cancellation model controlled for the same factors as the tarmac-cancellation model, except for minutes on the tarmac. In both models, the tarmac delay rule and the other factors we included generally had the expected, and statistically significant, association with cancellations. Passenger protection requirements for flight delays, cancellations, and denied boardings are overall more extensive in the EU than in the United States or Canada. While all three regions have enhanced passenger protections in recent years, EU care and compensation guarantees are generally more extensive than those in the United States or Canada. (Table 3 summarizes what airlines are required to provide passengers for flight delays, cancellations, and denied boardings in the three regions.) In April 2011, U.S. DOT further enhanced its airline passenger protections by, among other things, increasing financial compensation in the event of an involuntary denied boarding. The Canadian Minister of Transport, Infrastructure and Communities launched “Flight Rights Canada” in September 2008 to increase air passengers’ awareness of their rights; the initiative includes a voluntary “Code of Conduct of Canada’s Airlines” that, among other things, recommends that Canadian airlines adopt specific provisions related to flight disruptions in their contracts of carriage. In the EU, a regulation enacted in 2004 entitles passengers to care and compensation, under specific circumstances, for all three types of disruptions. Officials from the European Commission (Commission) told us that these rules harmonized levels of customer service across all EU member states and airlines, ensuring that passengers can expect to be cared for and compensated if their flight is canceled or seriously delayed or if they are denied boarding. Before the current regulation was put into place, according to these European officials, some airlines were increasingly overbooking flights while providing little care or compensation to the inconvenienced passengers who were denied boarding. The officials said that the goal of the regulation was not to punish airlines for delays or cancellations, or even necessarily to reduce the number of disruptions, but rather to make passengers “whole” when flights are disrupted.
In the event of a flight delay, the EU regulation requires that airlines offer passengers care and, under certain circumstances, the option of reimbursement or a return flight to the first point of departure; there are no U.S. or Canadian requirements with similar levels of care or compensation. Under the EU regulation, when a flight is delayed at least 2 but less than 5 hours (depending on the distance of the flight), airlines are required to provide passengers with certain types of care, including meals and communication services, and if the delay requires an overnight stay, passengers must be offered hotel accommodations and transportation between the airport and hotel. Furthermore, if the delay is at least 5 hours, passengers must also be offered reimbursement for the unused portion of their ticket (and for the part of the journey already made, if the flight no longer serves its original purpose) and, if necessary, a return flight to the point of departure. Passengers must also be given written notice of the rules for care and compensation. By comparison, passengers on delayed flights in the United States and Canada are not entitled to care or compensation by law. When a flight is canceled, the EU regulation requires that passengers receive care in certain circumstances, compensation, and the option of being rerouted or reimbursed (with a return flight to the point of departure); passengers in the United States and Canada do not have such extensive rights. Passengers on canceled flights covered by the EU regulation are entitled to the same rights as passengers on delayed flights (as described previously) and, additionally, must be offered the choice between being rerouted or being reimbursed for part or all of their ticket, depending on the circumstances, along with a return flight to the first point of departure at the earliest opportunity. In addition, passengers on such flights are entitled to financial compensation, the amount of which depends on the length of the canceled flight and may be reduced by 50 percent if the passenger is rerouted, under certain circumstances. An airline may be exempt from the obligation to pay compensation if it can prove the cancellation was caused by an extraordinary circumstance that could not have been avoided even if all reasonable measures had been taken. At the time a flight is canceled, the airline must provide passengers written notice of the rules for compensation and assistance. By contrast, U.S. rules do not require care or compensation in the event of a cancellation, but they do require airlines to offer passengers a refund if the passengers do not wish to accept alternative transportation to their destinations. In Canada, passengers are not entitled to care or compensation in the event of a cancellation, nor is there a specific requirement that an airline refund a passenger’s ticket price, in whole or in part. If a passenger is involuntarily denied boarding—generally because an airline has oversold seats on a flight and cannot find enough volunteers willing to take another flight—the passenger may be entitled to benefits, depending on the region. Passengers on flights covered by the EU regulation have a right to care and financial compensation. Comparatively, passengers covered under U.S. regulations are entitled to financial compensation, and passengers in Canada are entitled to neither care nor compensation except as provided under their contracts of carriage.
To limit the number of passengers who are involuntarily denied boarding when a flight is oversold, airlines in the United States and the EU are required to first request volunteers to relinquish their confirmed space in exchange for benefits, such as credit for future travel, before selecting passengers for denied boarding. When selecting passengers for denied boarding, U.S. airlines are required to use boarding priority rules that comply with DOT regulations. Under both U.S. and EU requirements, passengers selected for denied boarding must be offered financial compensation. Passengers in the EU are also guaranteed the same care offered to passengers whose flights are delayed or canceled and must also be offered the option of reimbursement, in whole or in part, depending on the circumstances, and either a return flight to the first point of departure or rerouting to the final destination. In both regions, airlines must notify passengers in writing of their rights. Some airlines in the United States and Canada, as described earlier, voluntarily include provisions in their contracts of carriage for care and compensation beyond what is legally required for delays, cancellations, and denied boardings. These provisions are enforceable as a legal contract between the airline and the ticket holder. The airlines we spoke with in the EU do not provide any additional care or compensation beyond the EU requirements discussed previously, and the EU regulation does not require airlines to include those requirements in their contracts of carriage. We examined the contracts of the seven largest airlines in the United States and found that five of these airlines may, in certain circumstances, provide certain types of care, such as meal vouchers and free phone calls, for delays and cancellations that extend beyond a certain time (see table 4). Certain airlines also state in their contracts of carriage that they will provide hotel accommodations and ground transportation, under certain circumstances, when an overnight stay is required. The circumstances under which airlines provide these amenities vary and may depend on a number of factors, such as the cause, length, and timing of the flight disruption. All four of Canada’s major airlines have added passenger protections for delays, cancellations, and denied boardings in response to the 2008 federal government initiative mentioned above, according to airline officials in Canada. Although airlines are not required to adhere to the passenger protection provisions outlined in the initiative’s Code of Conduct, all four of Canada’s major airlines have added its provisions to their contracts of carriage, making them part of the binding contract between the airline and the passenger. As a result, these airlines now guarantee in their contracts of carriage that they will provide passengers with a meal voucher if a flight delay exceeds 4 hours, as well as hotel accommodations and ground transportation if a flight delay exceeds 8 hours and requires an overnight stay. If a flight is canceled or a passenger is denied boarding, the airlines will rebook passengers or refund the unused portion of the ticket. Flight Rights Canada’s Code of Conduct does not make the airline responsible for acts of nature or the acts of third parties. Care and compensation requirements provide protections and benefits for passengers whose flights are disrupted, but they also increase costs to airlines and could increase passengers’ fares.
Airline officials we spoke with in the EU and the United States maintained that passenger protections increase their costs, though they did not provide documentation of specific cost increases because they consider the information confidential. While data from airlines on these costs are unavailable, a February 2010 study of the EU passenger protection requirements noted that airlines interviewed by the study’s authors reported compliance costs for EU airlines ranging from 0.1 to 0.5 percent of annual revenue. However, officials from one European airline, as well as officials from an airline association, maintained that airlines’ cost of compliance exceeds this estimate. Increases in required compensation for passengers denied boarding have also increased costs for both U.S. and EU airlines, according to airline officials we spoke with. In the United States, officials with some U.S. airlines told us that complying with the requirements to better inform passengers about routinely delayed and canceled flights and to post information such as flight on-time performance data on airline Web sites costs hundreds of thousands of dollars. Officials with one of these airlines estimated that airline personnel spent about 3 months adding the information to the airline’s Web site. Some airlines in the United States and the EU told us that compliance costs such as these can lead to higher fares. However, it is very difficult to isolate the impact of compliance costs on fares because fares are set based on demand in competitive markets, as well as other factors. Passenger protections can also create financial burdens on airlines for major events outside their control. For example, as noted above, airlines subject to EU regulations are required to provide certain care in the event of a delay or cancellation, regardless of whether the disruption was within the airline’s control. These regulations require an airline to provide passengers with food, lodging, and other care, depending on the circumstances, during short-term disruptions in travel plans. However, when major disruptions to the airspace system occur, this requirement can obligate airlines to provide passengers with lodging and other care for extended periods of time at great cost. Such a situation occurred in 2010, when the European air transport industry was significantly affected by the consequences of the Eyjafjallajökull volcanic eruption in Iceland. The volcano, which erupted on April 14, 2010, created a cloud of volcanic ash that drifted through large sections of European airspace. Volcanic ash contains substances that may harm aircraft, so national authorities decided to close affected airspace. As a result, more than 100,000 flights were canceled and millions of passengers were unable to fly. In many cases, passengers were stranded in a foreign country without any immediate way to return home. Representatives of one EU airline told us that when the eruption occurred, they booked more than 100,000 hotel rooms for their scheduled passengers and eventually chartered aircraft to get passengers to their destinations. The airline’s representatives estimated that the incident cost the airline about $4.5 million. Major disruptions generally result from unsafe flying conditions.
According to airline officials in both the United States and the EU, the possibility of large monetary claims as a result of such incidents could pressure airlines to operate in conditions they would otherwise deem unsafe for flight in order to avoid high costs, but according to Commission officials, there are no available data on the existence or extent of this issue. While increasing the compensation for denied boarding will increase airlines’ costs if airlines do not change their booking policies, reducing overbooking reduces revenues because fewer seats can be sold, according to airline officials we interviewed. Overbooking is a revenue-producing strategy for many airlines, without which some would raise fares to offset their losses. Additionally, airline officials said that reductions in overbooking could also limit the flexibility of passengers when choosing flights, as seat availability would be reduced and airline policies governing how and when passengers change their flights could become more restrictive. However, we found little evidence that increases in denied boarding compensation in the United States resulted in reduced overbooking. According to airline officials we spoke with, the 2008 compensation increase in the United States was not large enough to cause airlines to reduce their overbooking of flights. Additionally, from 2004 through 2010, the number of voluntary denied boardings in the United States was less than 0.1 percent of all U.S. passengers boarded annually, while the number of involuntary denied boardings rose slightly but remained rare in relation to the total number of U.S. passengers, at 0.01 percent of all U.S. passengers (see fig. 10). In contrast, EU denied boarding compensation, though in some cases less than U.S. levels, has been significant enough to cause at least two EU airlines to reduce overbooking of flights, according to officials from these airlines. According to these officials, this reduction in overbooking has adversely affected consumers through higher average ticket costs designed to offset the increased number of unused seats on each flight. However, data showing whether any such reductions in overbooking have caused EU airlines to increase their fares are not available. Extensive passenger protections, while providing benefits and guarantees to passengers, can create challenges for the government entities responsible for enforcing the requirements and for passengers seeking to obtain the benefits due to them. These challenges include difficulties enforcing unclear requirements and ineffective passenger complaint processes. Such challenges can limit the potential for the requirements to mitigate hardships for airline passengers. Government enforcement bodies in each region are responsible for ensuring that airlines comply with their region’s requirements. DOT and the Canadian Transportation Agency (CTA) serve as the enforcement bodies for the United States and Canada, respectively. In the EU, each of the 27 member states, as well as each of the other countries that have joined the EU aviation market (such as Iceland, Norway, and Switzerland), establishes its own body responsible for enforcing the EU regulation, typically the agency responsible for aviation oversight. These enforcement bodies use similar activities to monitor airline compliance, including investigating passenger complaints and issuing penalties against airlines for noncompliance.
Enforcement bodies in each region receive passenger complaints or information (for example, through a media report) about a possible violation of passenger protections and decide whether to investigate. DOT officials told us they will investigate any case alleging a violation of a DOT rule but will generally pursue an enforcement action against airlines only if they discover a pattern or practice of violations or if the incident is particularly egregious. CTA officials and an enforcement body official from one EU member state told us they investigate and may pursue enforcement actions against an airline based on an individual’s complaint. If officials determine that an airline has violated passenger protections, they may fine the airline, depending on the region or the member state. In addition to conducting investigations based on passenger complaints, enforcement bodies in each region initiate their own investigations. For example, DOT officials told us they routinely investigate each major airline, and their investigations have resulted in the collection of fines. In two EU member states, officials from the enforcement bodies told us they visit airports to see if airlines are displaying required information about passenger protections, but they have not issued fines. The first challenge to the effective application of passenger protections arises when there is a lack of clarity in the regulations. In the EU, where passenger protection regulations are more extensive than in the United States or Canada, officials from the Commission told us that different interpretations of these regulations by enforcement bodies in different member states have made it challenging to ensure successful implementation of the regulation. A 2010 study for the Commission of the impact of the EU passenger protection regulation found that more needs to be done to ensure that passengers’ rights are properly protected. In particular, the study noted that in some areas the rights granted by the regulation can lead to different understandings. The Commission also recently reported that “the novelty of some provisions of the Regulation has led to different interpretations, and thus varied application, among airlines and national enforcement authorities, rendering it difficult for passengers and stakeholders to understand the scope and limits of the rights set out.” Stakeholders told us, for example, that the following two provisions were unclear or confusing to implement:
• Unclear definition of extraordinary circumstances. According to some airlines, airline associations, and consumer groups we spoke with in the EU, the definition of this term—which refers to situations in which airlines are exempt from the passenger compensation requirement when a flight is canceled—has left room for confusion. A recent ruling by the European Court of Justice (ECJ) provided some clarification for enforcement bodies when it ruled that technical issues, such as an airplane malfunction, may constitute an extraordinary circumstance only when these issues stem from events outside the normal activities of the airline and are beyond its control. Even so, some enforcement bodies are interpreting this ruling differently. For example, officials from one enforcement body told us that even if a technical issue is routine, they may still consider it an extraordinary circumstance if they believe the safety risks were too great, whereas other enforcement bodies in the EU have interpreted the ECJ’s ruling more strictly.
Additionally, some stakeholders we spoke with told us that the extraordinary circumstance provision in the regulation should be revised to restrict the amount of assistance an airline must provide to passengers or to identify an extensive list of scenarios under which the airline would be exempt from the passenger compensation requirement. For example, officials from an airline and an airline association told us that they believe the regulation should be amended to exempt airlines from paying for weeks of hotel accommodations and food (not just compensation) in response to major disruptions, such as the Eyjafjallajökull volcanic eruption. In a recent report, the Commission stated that this incident illustrated the structural limits of the regulation and that the “proportionality of some current measures, like the unlimited liability regarding the right to care under major natural disaster, may merit assessment.”
• Confusion over the definition of delay. Uncertainty about when compensation is required for delays and cancellations has also created enforcement challenges. In the EU, a November 2009 ruling of the ECJ specified that passengers whose flights are delayed more than 3 hours experience the same inconvenience as those whose flights are canceled and that both should therefore be entitled to the same financial compensation payments from airlines. This ruling created confusion in member states and within the industry as to when to compensate passengers who have been delayed. Airline and some airline association officials told us that this ruling contradicts the text of the regulation, which requires reimbursement (in part or in full), not compensation, in the event of a delay of more than 5 hours. In the United Kingdom (UK), according to Commission officials, the International Air Transport Association, among others, filed a suit in the UK Court of Justice against the UK enforcement body’s policy of compensating passengers in line with the ECJ ruling. An official from the UK’s enforcement body told us that the UK Court of Justice submitted questions of law stemming from this case to the ECJ and that, until the ECJ responds with further clarification, the enforcement body has suspended all investigations into complaints on the topic. Uncertainties over these provisions may make it difficult for airlines and passengers to know when an airline must compensate its passengers. The challenges arising from the lack of clarity in passenger protection regulations, such as the confusion about the definition of delay, can be exacerbated when the EU requirement is applied unevenly across jurisdictions. For instance, the enforcement of EU regulations has been complicated because member states have flexibility in structuring their enforcement to account for differences in their national laws and policies. As a result, enforcement bodies in the Netherlands and Germany, for example, use different sanction strategies for ensuring that airlines comply with the regulation, resulting in varying types and amounts of penalties for airlines. In particular, the types and amounts of sanctions these enforcement bodies can impose differ because of laws and policies specific to each member state.
Officials from the Netherlands’s enforcement body told us they can impose only reparatory sanctions, which prevents them from collecting a fine if the airline makes amends with the passenger, while the enforcement body in Germany can issue repressive sanctions, which can be imposed regardless of whether the airline makes amends with the passenger. The amount of a sanction also differs between the two member states. For example, Dutch enforcement officials told us that there is no set amount but that it must be reasonable and proportionate to the severity of the violation, while in Germany, officials from the enforcement body told us that the amount of a sanction is based on the seriousness of the complaint. Different national laws also affect the circumstances in which sanctions can be issued. For example, the German officials told us that German law prohibits them from considering ECJ decisions, such as the ruling that passengers who are delayed more than 3 hours should receive the same compensation as those whose flights are canceled, and that the German enforcement body is therefore not using the same standards as other enforcement bodies to sanction airlines. The second challenge to the application of passenger protections arises when there is no effective passenger complaint process. The enforcement processes of the EU, as well as those of the United States and Canada, demonstrate the challenges passengers can face in obtaining benefits due to them. When passengers in the United States, Canada, or the EU do not receive benefits to which they believe they are entitled, they can submit a complaint to any or all of three entities: the airline, the national enforcement body, or the court system (see fig. 11). However, according to government officials, passengers in the United States and the EU can receive financial compensation only through the airline or the courts. The enforcement bodies in these regions cannot award passenger compensation because their authority does not extend to enforcing payment by the airlines. The 2010 study of the regulation for the Commission reported inconsistent implementation and enforcement of the regulation across enforcement bodies and airlines. According to the study, airlines and consumer groups reported a number of difficulties associated with passengers in the EU seeking compensation in court, including the costs, the time burden, the availability of small claims courts, and limits on the amounts awarded. In the EU, according to Commission officials, passengers may obtain legal assistance from a variety of sources, such as a consumer protection organization, when pursuing compensation in the courts. In some member states, passengers can also use the commercial claim service EU Claim, but passengers must pay for these services with a percentage of what they are awarded. Officials from one consumer group told us that when passengers face barriers to claiming the benefits due to them for violations of their rights, airlines may not comply with applicable requirements. Furthermore, despite a number of EU government-sponsored campaigns to inform passengers of their rights, several EU stakeholders told us passengers may still not be aware of their rights and therefore may not submit complaints when they believe their rights have been violated. Additionally, officials from two consumer protection groups in the EU told us that some passengers may be confused about their rights under the EU regulation and that some airlines may use that confusion to their advantage.
Flight disruptions remain costly for passengers, airlines, and the economy. DOT has responded by enacting regulations to protect passengers in the event of tarmac delays and by enhancing involuntary denied boarding protections. DOT’s tarmac delay rule has eliminated delays greater than 4 hours and nearly eliminated tarmac delays of more than 3 hours, thereby benefiting tens of thousands of passengers. Increased compensation for involuntary denied boardings provides redress for passengers in the event they are bumped from their reserved flight. Although DOT’s rules have benefited some passengers, DOT’s current flight performance data may not fully inform consumers of airlines’ quality of service as intended. By collecting data only from the largest airlines, DOT does not obtain, and therefore cannot provide consumers with, a complete picture of flight performance, particularly at airports in rural communities or among smaller airlines. Accurate flight performance information is necessary for consumers to make informed decisions when purchasing airline tickets. Additional information and analysis are also needed to fully understand the effects of the tarmac delay rule on passengers. Since the rule went into effect, tarmac delays of more than 3 hours have been nearly eliminated, with no delays of more than 4 hours, reducing the hardship for numerous passengers. However, as our analysis has shown, the rule appears to be associated with an increased number of cancellations affecting thousands of additional passengers—far more than DOT initially predicted—including some who might not have experienced a tarmac delay. Though it is difficult to know how passengers might choose between a long tarmac delay and a cancellation, and what costs and burdens their choices would entail, determining the net benefit of the rule to airline passengers and assessing whether there is a causal relationship between the rule and any changes in flight cancellations will be critical for passengers and airlines. Additionally, our analysis could include data only from the first summer of the rule’s implementation, so using data through the summer of 2011 may yield useful information for policymakers. In determining the impact of the rule, it is important to include both the positive effects of reducing long on-board delays and the negative effects of flight cancellations on passengers. Increases in cancellations may be at least partly due to airlines’ assumptions about the significant enforcement penalties that could result from a violation of the rule. Although DOT could issue guidance on its penalty structure, it has chosen not to in order to maintain flexibility under its current authority. To enhance aviation consumers’ decision-making, we recommend that the Secretary of Transportation take the following action:
• Collect and publicize more comprehensive on-time performance data to ensure that information on most flights, to airports of all sizes, is included in the Bureau of Transportation Statistics’ database. DOT could accomplish this by, for example, requiring airlines with a smaller percentage of the total domestic scheduled passenger service revenue, or airlines that operate flights for other airlines, to report flight performance information.
To enhance DOT’s understanding of the impact of tarmac delays and flight cancellations, we recommend that the Secretary of Transportation take the following action:
• Fully assess the impact of the tarmac delay rule, including the relationship between the rule and any increase in cancellations and how these cancellations affect passengers, and, if warranted, refine the rule’s requirements and implementation to maximize passenger welfare and system efficiency.
We provided a copy of the draft report to DOT for review and comment. Senior officials at DOT, including the DOT assistant general counsel for aviation enforcement proceedings, provided general comments in an e-mail representing DOT’s views on the benefits of the tarmac delay rule but did not provide written comments on the recommendations. In its general comments, DOT stated that, in its view, available data demonstrate that the tarmac delay rule provided effective consumer protection for airline passengers. DOT officials believe that the rule made clear to airlines that, whatever the rationale, it is not acceptable to leave passengers stranded in aircraft on the ground for hours on end. Specifically, DOT officials cited data that demonstrate the rule’s effectiveness in preventing extended tarmac delays, including the drop in tarmac delays in excess of 4 hours from 105 flights to zero for the year ending April 2011, which eliminated these most egregious of delays. Officials also highlighted the 98 percent drop in delays of more than 3 hours, from 693 flights to 20, during the same period. DOT officials believe that these results demonstrate the positive impact of the tarmac rule and that, without it, far more passengers would have been subject to these extended delays. In response to DOT’s general comments, we made changes to the report to better clarify our findings. DOT officials said that the information in our report, in their view, further demonstrates that airlines have gotten the basic message of the rule and that it has been effective at putting consumers first when it comes to avoiding lengthy tarmac delays. They cited our discussion of actions airlines are taking to avoid tarmac rule violations, including acting more quickly to address delayed flights and moving more quickly back to gates, affording passengers the freedom to access the amenities of air terminals. They were also pleased to see our finding that air carriers are working to comply with DOT requirements to provide food and water to passengers delayed on the tarmac for extended periods of time. Finally, DOT reinforced its commitment to monitor the effects of the tarmac delay rule to ensure that it is achieving intended outcomes and to address any significant unintended outcomes. DOT initially focused on comparing the numbers of flights with 2-hour tarmac delays that were eventually canceled because, in its view, this was the best measure of the rule’s effect on cancellations. According to DOT officials, DOT recently selected a contractor to conduct a comprehensive independent review and analysis of the impact of the tarmac delay rule now that a full year of data is available. DOT believes that, at a minimum, one year of data is necessary to assess the rule’s effects. DOT’s review will consider on-time performance, cancellations, benefits to consumers, and other relevant information covering the period back to 2000 to assess the rule’s impact on flight delays, cancellations, and consumers. DOT also provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. In this report, we examined how (1) trends in and reasons for flight delays and cancellations in the United States differ for smaller and larger communities; (2) the Department of Transportation's (DOT) tarmac delay rule has affected passengers and airlines; and (3) requirements and practices for protecting passengers from flight delays, cancellations, and denied boardings in the United States, Canada, and the European Union (EU) have affected passengers and airlines. To identify and compare the trends in and reasons for flight delays and cancellations in different-sized U.S. communities, we examined trends at airports designated as primary in fiscal year 2009. From this group of 367 airports, we excluded the 12 primary airports in U.S. territories because they operate in different operational environments from other U.S. airports. We then categorized the 355 airports by the size of their surrounding community, using geographic information system data on the airports' locations and surrounding populations. Airports were mapped by the county in which they are located and grouped into one of four categories based on population: 1,000,000 or greater (large metropolitan), 250,000 to 999,999 (midsized metropolitan), 50,000 to 249,999 (small metropolitan), and less than 50,000 (rural). This approach controls for the fact that some small or medium airports—generally secondary airports such as Hobby Airport in Houston—are actually in large metropolitan regions. Using these categories, 78 airports were in large metropolitan communities, 100 were in midsized metropolitan communities, 122 were in small metropolitan communities, and 55 were in rural communities. To analyze flight delay, cancellation, and diversion trends for these airport community size categories, we first obtained data from DOT. These data were drawn from the Airline Service Quality Performance System (ASQP), which includes information about flight delays, cancellations, and diversions. ASQP data are based on information filed by airlines each month with DOT's Bureau of Transportation Statistics (Office of Airline Information). Airlines with 1 percent or more of total domestic scheduled passenger service revenue are required to report data for their flights involving any airport in the 48 contiguous states that accounts for 1 percent or more of domestic scheduled service passenger enplanements. We then compared the percentage of flights that were delayed, canceled, and diverted by community size, by year. Since DOT does not require all airlines to report on-time performance information, we also purchased data from FlightStats, a private data source from Conducive Technology that records flight performance information for nearly all airlines and airports.
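To illustrate the community-size grouping described above, the following is a minimal sketch in Python. The airport codes and county populations shown are hypothetical placeholders, not values from our analysis.

```python
import pandas as pd

# Hypothetical input: one row per primary airport, with the population of the
# county in which the airport is located (the analysis used GIS data for this).
airports = pd.DataFrame({
    "airport": ["IAH", "HOU", "XNA", "GCK"],
    "county_population": [4_092_000, 4_092_000, 203_000, 36_000],
})

def community_size(population: int) -> str:
    """Map county population to one of the four community-size categories."""
    if population >= 1_000_000:
        return "large metropolitan"
    if population >= 250_000:
        return "midsized metropolitan"
    if population >= 50_000:
        return "small metropolitan"
    return "rural"

airports["community_size"] = airports["county_population"].map(community_size)
print(airports)
```

Anchoring the grouping on county population rather than airport size is what places a secondary airport such as Hobby (HOU) in the large metropolitan category.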
We then conducted the same analysis of delay, cancellation, and diversion trends by airport community size as we did with ASQP data. We also verified that DOT and FlightStats data were comparable for similar categories of flights. ASQP and FlightStats did not have data for all 355 primary airports subject to our examination, because some airports may not have been considered primary in other years of our analysis (very small airports may drop below or rise above the 10,000 enplanements threshold year to year). Furthermore, some airports may have more than 10,000 enplanements annually but generally not be commercial-service airports. For instance, some military airports may have commercial flights diverted to them on occasion. As a result, our analysis of the year 2010, for example, included data for 281 airports using ASQP data and for 344 airports using FlightStats data. Using our community size categories, 76 airports were in large metropolitan communities, 99 were in midsized metropolitan communities, 117 were in small metropolitan communities, and 52 were in rural communities when using FlightStats data. We also examined trends in sources of delay and cancellation, based on DOT's ASQP data as previously described, and compared these trends by airport community size. FlightStats does not record the sources or reasons for delays or cancellations. To assess the reliability of ASQP and FlightStats data, we reviewed documentation related to both data sources and interviewed knowledgeable officials at DOT and Conducive Technology about the data. We also compared data for the same categories of flights in both DOT and FlightStats databases, where possible, and found that they were similar. After excluding certain ASQP flight records for our analysis of tarmac delay trends, we determined that both ASQP and FlightStats data were sufficiently reliable for the purposes of this report. To better understand the reasons for any differing trends in, and sources of, flight delays and cancellations, we reviewed a DOT Office of Inspector General report and interviewed aviation industry experts, consumer groups, industry associations, and representatives of three U.S. legacy airlines and three low-cost airlines. For the U.S. airlines, we selected three legacy airlines that served more than two-thirds of all legacy airline passengers from 2004 through June 2010 and three low-cost airlines that served more than 80 percent of all low-cost airline passengers from 2004 through June 2010. These six airlines served about half of all passengers enplaned on U.S. airlines from 2004 through June 2010. See table 5 for a list of aviation industry stakeholders, including airlines, interviewed for this report. To assess how DOT's tarmac delay rule has affected passengers and airlines, we first examined DOT data on tarmac delay and cancellation trends since 2004. In order to identify the frequency of tarmac delays over time, we used DOT's ASQP data to identify all flights with tarmac delays greater than 3 hours from January 2004 through September 2010. We then analyzed these flights by year; month; time of day; and type of tarmac delay, such as taxi-in and taxi-out delays (see app. III for more information on these trends since 2004). To better understand the effect of the tarmac delay rule on the likelihood of flight cancellations, we assessed cancellations in two contexts. In the first, we assessed the odds of a flight being canceled after it leaves the gate.
In the second, we assessed the odds of a flight being canceled before it leaves the gate. In order to isolate the effect of the tarmac delay rule, we analyzed flight data using models that controlled for a variety of factors that can affect an airline's decision to cancel a flight. Specifically, we used logistic regression models to estimate the impact of the tarmac delay rule on cancellations. Using these models, we were able to control for other factors that may affect the likelihood of a cancellation, including weather at the origin and destination airport, airport and airline characteristics, and specific details of individual flights. Disruptive weather is a major cause of cancellations, so by including variables in our model for severe weather events, we were better able to isolate the rule's correlation with cancellations. Further, the size of the particular airport, as well as the size and business practices of airlines, influences cancellation decisions, so we controlled for certain characteristics of airports and airlines. (See app. V for more details on our models.) To verify the strength of our models, we discussed their design and preliminary results with aviation experts Professor Mark Hansen of the University of California and Professor Lance Sherry of George Mason University. We also spoke with representatives of U.S. airlines, industry associations, consumer groups, and DOT about the impact of the tarmac delay rule, including changes to airline practices. Finally, to determine how the requirements and practices for protecting passengers from flight delays, cancellations, and denied boardings in the United States, Canada, and the EU have affected passengers and airlines, we examined the laws, regulations, international agreements, and voluntary commitments governing passenger protections in the three regions. In particular, we reviewed applicable DOT regulations, Regulation (EC) 261/2004, and relevant provisions of Canada's Air Transportation Regulations and the Montreal Convention. Additionally, we examined government guidance and proposals for additional passenger protections, including the Flight Rights Canada Initiative, European Commission guidance for enforcement bodies, and Canada's proposed Air Passenger's Bill of Rights. To describe voluntary passenger protections offered by airlines, we reviewed the contracts of carriage for the nine largest U.S. airlines based on recent Federal Aviation Administration (FAA) data on the number of available seat miles. We also spoke with airline officials from three airlines in Canada and officials from three European airlines. To further examine the effect that passenger protection regulations have had on airlines and passengers, we interviewed airline, industry association, consumer group, and government officials throughout all three regions. We also assessed DOT data on denied boardings from 2004 through 2010. To document how regions enforce passenger protection requirements differently, we visited and spoke with stakeholders in Canada and in the EU, which were selected based on stakeholder comments and a review of a recent EC study on the implementation of the EU regulation. In the EU, we selected The Netherlands and Germany because each has a large aviation market as well as active and effective enforcement practices but employs different strategies. See tables 6 and 7 for a list of stakeholders we met with in Canada and the EU.
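As a rough illustration of the logistic regression approach described earlier in this section, the sketch below fits a cancellation indicator on a post-rule indicator and a few controls using the statsmodels library. The variable names and the randomly generated data are illustrative assumptions only; they are not the actual specification, which is detailed in appendix V.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical flight-level records; the real models use ASQP data and
# many more controls (see app. V).
rng = np.random.default_rng(42)
n = 50_000
flights = pd.DataFrame({
    "post_rule":      rng.integers(0, 2, n),  # 1 = May-Sep 2010, 0 = May-Sep 2009
    "severe_wx_orig": rng.integers(0, 2, n),  # severe weather at origin airport
    "severe_wx_dest": rng.integers(0, 2, n),  # severe weather at destination
    "congested_orig": rng.integers(0, 2, n),  # departed from a congested airport
})
# Simulated outcome: cancellations are rare and more likely in severe weather.
logit_p = -7 + 0.2 * flights["post_rule"] + 1.5 * flights["severe_wx_orig"]
flights["canceled"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression: log-odds of cancellation on the rule-period indicator
# plus controls.
X = sm.add_constant(flights[["post_rule", "severe_wx_orig",
                             "severe_wx_dest", "congested_orig"]])
result = sm.Logit(flights["canceled"], X).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios; a post_rule value above
# 1 would indicate higher odds of cancellation after the rule took effect.
print(np.exp(result.params))
```

Exponentiating fitted logit coefficients yields adjusted odds ratios of the kind reported in tables 18 through 20 of this report.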
This appendix provides additional information and illustrations of flight delay, cancellation, and diversion trends from 2005 to 2010, based on our analysis of FlightStats data. It also provides information on airline-reported sources of delays and cancellations, based on our analysis of DOT data. This appendix provides additional information and illustrations of tarmac delays of more than 3 hours, from January 2004 through September 2010, including how the tarmac delays that occurred during this period were distributed by year, month, airport, day of the week, and hour. This appendix also provides information on airline-reported sources of tarmac delays. This information is based on our analysis of DOT data. Beginning in October 2008, DOT required airlines to submit data on flights with tarmac delays that were subsequently canceled, diverted, or had multiple gate departures (see table 12). Previously, DOT had captured only tarmac delays that occurred during taxi-out or taxi-in. While the majority of tarmac delays happen at taxi-out or taxi-in, the change in reporting captured data for some additional tarmac delays of more than 3 hours. As a result of these new reporting requirements, tarmac delays are now captured:
• during taxi-out: the time between when a flight departs the gate at the origin airport and when it lifts off from that airport (wheels-off);
• during taxi-in: the time between a flight touching down at its destination airport (wheels-on) and arriving at the gate;
• prior to cancellation: the flight left the gate but was canceled at the origin airport;
• during a diversion: the tarmac time experienced at an airport other than the destination airport; or
• as a result of a multiple gate departure: the flight left the gate, then returned, and then left again; the tarmac time is the time before the return to the gate.
This appendix describes two models that we designed to assess whether DOT's tarmac delay rule is correlated with an increase in airline cancellations. Both models use data for the same months before and after the rule went into effect to analyze whether and how a variety of factors—including the imposition of the rule—are associated with the likelihood (or odds) that a flight will be canceled. One model analyzes the likelihood of cancellation after a flight has left the gate and gone onto the tarmac; the other analyzes the likelihood of cancellation at the gate. Specifically, this appendix discusses (1) the incidence of cancellations since the rule's implementation, (2) the conceptual framework for examining these issues through modeling, (3) variable calculations and data sources, and (4) the models' results. To examine the incidence of flight cancellations before and after the tarmac rule's implementation, we collected data on flights for May through September in 2009—before the rule went into effect—and for the same months in 2010, after the rule's implementation. We examined the incidence of cancellation for flights that were canceled after they left the gate and went onto the tarmac, and for flights before they left the gate. The data cover flights reported to the Bureau of Transportation Statistics (BTS) at 70 airports in the continental United States. Table 14 provides information, for the time frame of this analysis, on the number of flights in each time period that left the gate and took off, and the number of flights that left the gate but eventually were canceled. From that information we calculate the odds of cancellation in each of the two years.
These odds equal the number of flights that were canceled divided by the number of flights that were not canceled. For example, in 2009, the odds of cancellation are 808/1,868,189, which equals 0.000433. Thus, roughly 4 out of every 10,000 flights that exited the gate were ultimately canceled in that year. Finally, we calculated the odds ratio of a flight being canceled in 2010 compared with 2009, which is the ratio of the odds of cancellation in 2010 to the odds of cancellation in 2009. The data show that flights are rarely canceled after leaving the gate. In both years, a very small fraction of flights that left the gate were ultimately canceled. As noted, in 2009 roughly 4 flights (that left the gate) were canceled for every 10,000 flights that took off. However, the odds of cancellation for a flight that has left the gate did appear to rise in 2010 compared with 2009. The odds ratio is the odds of a tarmac cancellation in 2010 divided by the odds of such a cancellation in 2009. The odds ratio exceeds 1, indicating that cancellations were more likely to occur in 2010. Specifically, there was about a 24 percent increase in the odds of cancellation in 2010 compared with a year earlier. Because we hypothesized that the likelihood of cancellation for a flight that has left the gate may be greater the longer it sits on the tarmac, we assessed the odds of cancellation based on how long a flight sits on the tarmac, as shown in table 15. These data reveal that in both 2009 and 2010, the odds of cancellation rise substantially for flights that have been on the tarmac for longer periods of time. For example, in 2009 the odds of cancellation for flights on the tarmac 60 minutes or less are only a small fraction of a percent, but for flights on the tarmac for 121 to 180 minutes, the odds rise substantially to 6 percent in that year. Using these odds, we calculated odds ratios showing the relative odds of cancellation for each hour category compared with the base hour (up to 1 hour of delay), within each year. As shown, the odds ratios rise dramatically as more time passes on the tarmac—a 42-fold increase in the odds of cancellation when a plane has been sitting on the tarmac for 61 to 120 minutes compared with a delay of 60 minutes or less in 2009. The data provided in table 15 also reveal that for every "time-on-the-tarmac" category, the odds of cancellation in 2010 exceeded the odds of cancellation in 2009, because all of the odds ratios (shown in the far right column) exceed 1. We calculated these odds ratios by taking the odds of cancellation in one tarmac time category in 2010 and dividing it by the odds of cancellation for the same tarmac time category in 2009. These data further show that the differential between the likelihood of cancellation in 2010 over 2009 rose the longer a flight was on the tarmac. While the odds of cancellation for flights on the tarmac for 60 minutes or less were 25 percent greater in 2010 than in 2009, for flights on the tarmac 121 to 180 minutes, the odds of cancellation were three times greater in 2010 than in 2009. Figure 20 shows how the relative odds of flight cancellation in 2010 compared to 2009 increase the longer a flight sits on the tarmac. Finally, we calculated odds ratios to examine the relative odds of flight cancellations at the gate in 2010 and 2009. Table 16 shows the odds of cancellation each year and the odds ratio for gate cancellations in 2010 compared with 2009.
The odds of a gate cancellation were 13 percent greater in 2010 compared with 2009. While the unadjusted odds ratios indicate that the likelihood of both tarmac and gate cancellations increased in May through September 2010 relative to the same time period in 2009, this increase may or may not be attributable to the tarmac delay rule. Many factors may contribute to flight cancellations, and there could be an observed difference across two years for a number of reasons. For example, weather events may disrupt traffic more in one year than in another, or airline scheduling or traffic patterns could change over time. To develop a model to examine this issue, it is helpful to first consider whether there is any reason why the tarmac rule might be correlated with flight cancellations. In particular, what is it about airline behavior that could be influenced by the tarmac delay rule? When there are flight disruptions, airlines face a trade-off between the consequences of delays they might incur and cancellations. For example, when bad weather reduces airport capacity, thus slowing the rate at which flights can take off or land at an airport, airlines must decide how to ration their traffic. They can choose to hold to their schedule and fly all their flights, but risk long delays. Alternatively, they can choose to cancel some of their flights, thus mitigating the capacity constraint they face and reducing the amount of delay for their remaining flights. Although airlines have some control over these trade-offs, airport capacity—both in gate space and on the tarmac—sometimes becomes so constrained that cancellations are unavoidable. In managing these circumstances, airlines attempt to minimize disruptions to passengers and costs to themselves. How an airline makes decisions within the context of this trade-off will vary among airlines depending on their business models and the particular situation at hand. DOT's tarmac delay rule requires airlines to limit the time flights spend on the tarmac to less than 3 hours or face the possibility of a substantial fine. Our hypothesis is that if the tarmac rule is associated with a greater incidence of flight cancellations, this may occur because the rule may have altered airlines' calculus in analyzing the trade-off between delay and cancellation. According to airline representatives we spoke with, flights that sit on the tarmac for a significant period of time may have to return to the gate to avoid a fine and, because of crew hour limits or because of the severity of the underlying cause of the delay, these flights may be canceled. In addition, airline officials and aviation stakeholders told us that the rule has increased the likelihood that they will precancel a flight—that is, cancel a flight before it ever leaves the gate. First, if a flight is returning to the terminal to avoid a tarmac fine, a flight that has not yet left the gate might need to be canceled to free gate space for the returning flight. Airline officials also told us that they are precanceling more flights before their scheduled departure time when weather or other factors indicate that long tarmac delays are possible. One airline official also told us that precancellations may be preferable if a long tarmac delay seems likely, because passengers are likely to have more rebooking options if their flight is precanceled than if they wait for some time on the tarmac and attempt to rebook later in the day. There are several limitations to this analysis.
First, important factors related to cancellations may not be controlled for. For example, we do not have information on flights that were canceled for mechanical problems. This factor, along with others that might be relevant, could not be controlled for because we do not have adequate data to assess all factors that could be associated with cancellations. Also, the analysis suggests which factors are correlated with cancellations, but it does not establish a causal relationship. To isolate the correlation between the rule and cancellations, as well as to better understand what other key factors are associated with the rate of cancellations, we developed two models to examine whether the rule may be correlated with a change in the incidence of flight cancellations. Because we are estimating the likelihood of a discrete event—whether a given flight is canceled—we applied a logistic regression (or logit method) for the estimation. This method enables us to assess how each of a set of independent factors correlates with the odds of a binary event—in this case, the cancellation or noncancellation of an airline flight. We examined two contexts in which a flight may be canceled: after some time on the tarmac or at the gate. Tarmac-cancellation model. In the first model, we assessed whether flights that left the gate were more likely to be canceled during May through September 2010 than during the same time period in 2009. Although the tarmac rule considers fines only for flights on the tarmac more than 3 hours, our discussions with airline officials and experts suggested that airlines begin to assess the risk of a tarmac violation well before a prolonged tarmac delay begins. We grouped flights into hour-long categories based on the amount of time a flight sat on the tarmac in order to assess whether the length of time on the tarmac is associated with the odds of cancellation. For example, if a flight sat on the tarmac for 72 minutes, we placed it in the 61 to 120 minutes tarmac time category. For the tarmac-cancellation model, we assessed 3,715,219 flight records for the 10 months included in the model; 1,799 of these were ultimately canceled. Gate-cancellation model. In the second model, we examined whether flights were more likely to be precanceled after the rule went into effect. A precancellation occurs when a scheduled flight is canceled before it ever leaves the gate. Thus, even a flight that goes onto the tarmac but is later canceled is treated as a flight that was not canceled at the gate in this analysis. This model included 3,750,868 flight records, of which 35,649 were precanceled at the gate. Many factors affect the possibility of a flight's cancellation and, therefore, we attempted to account for these other factors in the model. By controlling for this array of other influences on cancellations, the model is designed to determine whether the tarmac rule is independently correlated with the odds of a flight being canceled. Based on our research and discussions with airline representatives and academic experts, we identified factors that contribute to flight cancellations, including factors related to (1) the origin and destination airports, including circumstances at those airports at the time a flight is scheduled to depart, such as weather conditions; (2) characteristics of specific airlines and their operations; and (3) the scheduled city-pair route and the individual flight.
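To make the unadjusted odds arithmetic described earlier in this appendix concrete, the short sketch below reproduces the 2009 calculation from the text. The 2010 counts are hypothetical placeholders, since table 14 is not reproduced here.

```python
def odds(canceled: int, not_canceled: int) -> float:
    """Odds of cancellation: canceled flights divided by flights not canceled."""
    return canceled / not_canceled

# 2009 figures from the text: 808 flights that left the gate were ultimately
# canceled, against 1,868,189 that were not.
odds_2009 = odds(808, 1_868_189)
print(f"2009 odds: {odds_2009:.6f}")  # about 0.000433, roughly 4 per 10,000

# Hypothetical 2010 counts (placeholders; the actual figures are in table 14).
odds_2010 = odds(1_010, 1_880_000)

# The odds ratio compares the two years; a value above 1 means a flight that
# left the gate was more likely to be canceled in 2010. A ratio near 1.24
# corresponds to the roughly 24 percent increase described in the text.
print(f"odds ratio (2010/2009): {odds_2010 / odds_2009:.2f}")
```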
Our hypothesis is that these same factors contribute to both types of cancellations, although we do not expect that the relationship between any of the factors and the odds of cancellation will necessarily be exactly the same in the two models. Each specific variable and its data source are discussed below. The primary source of data for the models is flight-level data from BTS's ASQP system. Each record in this system provides an array of information about a single-leg flight, such as the origin and destination airports, the date and time the flight was scheduled to depart and arrive, the airline, the taxi-out time, the cause of any delay, and whether the flight was canceled. The BTS data form the level of observation for the models—which is a given flight—and data from other sources are merged into these observations. Airlines that account for at least 1 percent of total domestic scheduled passenger service revenue are required to file this flight information with BTS. Because this required filing leaves out smaller airlines, not all flights are included in the model. Moreover, our analysis includes data for 70 airports. The remainder of this section describes the rationale for including each of the variables in the model, how each is calculated, and the source of the data. The dependent variable. The dependent variable for the two models is a dummy variable—that is, a variable that takes a value of one or zero depending on the presence or absence of some characteristic. For the tarmac-cancellation model, this variable takes a value of one if a flight that left the gate returned to the gate after going onto the tarmac and then was canceled, and otherwise takes a value of zero. For the gate-cancellation model, the variable is set to one if the flight was canceled before taxiing out from the gate and is otherwise set to zero. Variable of interest: implementation of tarmac delay rule. In both models, we include data on flights from May through September in 2009 and for the same months in 2010. Since the rule went into effect in late April 2010, a dummy variable indicating whether a flight took place in 2010 is used. For the tarmac model, we also include a set of dummy variables indicating how long a flight was on the tarmac before taking off or arriving back at the gate. We classify hour-long categories of tarmac time: 0 to 60 minutes, 61 to 120 minutes, 121 to 180 minutes, and more than 180 minutes, and include three dummy variables (using the 0 to 60 minute tarmac time as the reference category and therefore leaving it out of the regression) to test whether cancellations become more likely with longer tarmac times. Additionally, we multiply these three dummy variables by the dummy variable indicating whether the flight took place in 2009 or 2010. Creating such interactions allows the measured impact of the tarmac time on cancellation to be different before and after the implementation of the tarmac delay rule. Thus, we include six time-on-tarmac dummy variables in the tarmac delay model. Variables related to airports and conditions at airports. Several of the independent factors that might affect the odds of a flight being canceled are related to the airports at which a flight begins and ends and certain conditions at those airports:
• Dummy variables for congested airports. It is well known that certain airports suffer more than others from congestion and delays.
Because some airports have more delay-related issues, we believe that flights involving these airports may be more likely to be canceled, holding other factors constant. In a previous report we found that, according to FAA data, seven airports were the source of 80 percent of departure delays. Because this issue could affect flights both on the tarmac and at the gate, we use two dummy variables in both models to denote whether a flight either started or ended at one of these seven airports. We expect that flights involving these airports are more likely to be canceled.
• Endpoint airport weather conditions. One of the factors likely to influence flight cancellations is the weather. Certain weather conditions can disrupt an airport's realized capacity level and cause traffic to flow more slowly or even halt for a time. We obtained data from the FAA for the National Oceanic and Atmospheric Administration's reporting of weather conditions for each hour at each of the airports included in our model. The data source provided information on the incidence of 32 types of weather conditions, such as fog, snow, thunderstorms, and hail. Additionally, FAA ranks each of the 32 weather conditions from 1 to 3 to indicate the impact of that particular weather condition on aviation activity. For example, thunderstorms can be highly disruptive to air traffic and are assigned a value of 3, while rain is assigned a value of 2, and haze a value of 1, indicating that haze usually presents only minor problems for air traffic. Using these data, we developed two variables to denote the occurrence of potentially disruptive weather conditions at the origin or the destination airport around the time a flight was scheduled for departure or arrival, respectively. In particular, to characterize the weather at the origin airport, we designated the hour of scheduled departure as the anchor time frame, but we also took into account weather conditions in the hour before and the hour after the scheduled takeoff. This variable is set to 1 if a weather condition with a value of 3 (a significant weather condition) existed at the origin airport during the hour before, at the hour of, or the hour after the scheduled departure time. Similarly, the second variable is set to 1 if a weather event of value 3 occurred at the destination airport within the 3-hour window around the scheduled arrival time of the flight. Poor weather is expected to be associated with a greater likelihood of cancellation for flights already on the tarmac as well as those at the gate.
• Ground delays and ground stops. This variable considers whether FAA has initiated programs to slow or stop traffic at an airport because of weather conditions, congestion, or some other reason. We obtained data on such programs—either ground stops or delays—at all the airports in our sample, by hour, across the 10 months of our analysis. Using the scheduled departure hour, we created two dummy variables that were set equal to 1 if the origin or destination airport, respectively, had any program in place to slow or stop traffic at the hour that a flight was scheduled to depart. We expected that flights affected by a ground stop or ground delay program would be associated with greater odds of a flight cancellation both on the tarmac and at the gate.
• Airport on-time performance. A final measure that we included in the model to capture how well each airport is handling its scheduled traffic at a given point in time is the rate of on-time performance.
Data for this analysis come from the Aviation System Performance Metrics database maintained by FAA. We obtained on-time arrival and on-time departure performance information for the airports in the model by hour. In the model, a variable for the on-time departure performance for the origin airport is anchored at the hour of scheduled departure. Similarly, another variable is constructed for the on-time arrival performance at the destination airport. We expected that lower on-time performance measures would indicate difficulties in flowing the scheduled traffic and would thus be associated with greater odds of flight cancellation. Variables related to airlines and their operations. Some factors that might be correlated with the odds of a flight cancellation are related to the airline that is operating the flight and how the flight fits into that airline's network:
• Size of airline. Certain airlines may be more inclined to cancel flights than other airlines. We separated airlines into three categories: the legacy airlines, which are typically the larger networked airlines; low-cost airlines, which include Southwest and AirTran; and the smaller airlines, such as regional airlines, that tend to fly shorter routes with smaller aircraft and often operate flights for legacy airlines. We did not include the third airline classification in the model; instead, we use it as the reference category against which the other two categories of airlines are compared.
• Airline hub. Many airlines operate a network through which particular airports—called hubs—are used for the transfer of traffic so that a larger number of routes can be served. Even though our model looks at the odds of cancellation for a single-leg flight and we do not examine itineraries of more than one flight leg, an airline considers, when deciding whether to cancel a flight, how its flights are interrelated and how passengers transfer among them. If a flight takes off from an airport that is a hub for the airline operating that flight, we deemed this an origin/hub flight. Likewise, if a flight is destined to an airport that the carrier of record states is one of its hubs, we designated it as a destination/hub flight. If an airport is a hub for an airline, we expect this could affect the decision about whether to cancel a flight.
• Average passengers per flight (on an airline-route basis). This variable is designed to take into account the likelihood that airlines will attempt to deliver as many passengers as possible to their destination and so might be more inclined to cancel flights with fewer passengers onboard when circumstances disrupt traffic flow. Because data were not available on the number of passengers onboard each particular flight, we used the average number of passengers for a particular airline on a given route over the course of a month, divided by 10. Thus, the results indicate the change in the odds of cancellation for each additional 10 passengers on a given airline's flight for that route. Variables related to the route and flight. The following variables provide information about the origin-to-destination route and the specific flight.
• Route distance. Some past research has shown that airlines are less likely to cancel longer distance flights. We placed routes in four categories according to distance: less than 500 miles, and three categories of more than 750 miles.
We did not include flights that fell into the 500- to 750-mile category because it is the reference category against which the other distance dummy variables are compared.
• Day of the week. Since traffic patterns vary across the days of the week, particularly weekdays versus weekend days, we included a dummy variable for flights that took place on the weekend. We expected that weekend flights would be canceled less often than weekday flights because less traffic is scheduled on the weekend, making a given set of circumstances less likely to disrupt traffic on these days.
• Scheduled departure hour. Airlines may be more or less reluctant to cancel flights at certain times of the day than at other times. For example, canceling early flights may be less problematic because there will be more options for rebooking passengers that day than there would be later in the day. Additionally, airlines may need to consider where an aircraft ends the day in preparation for the next day's traffic, and so may prefer not to cancel flights late in the day. We created four categories for departure hours: overnight, morning, afternoon, or evening. The afternoon category is not included in the model because it is the reference group against which we compare the three other dummy variables.
Table 17 provides information on the source of data for each of the variables. This section provides results for both the tarmac-cancellation and gate-cancellation models. We used output from the logistic regression model for the rule change dummy variable and the six dummy variables related to time on the tarmac to ascertain the relative odds of flight cancellations before and after the implementation of the tarmac rule. Table 18 shows, based on the model that controlled for other factors, how the odds of cancellation in each tarmac time category in 2010 compared with the odds of cancellation for the same tarmac time in 2009—specifically, we show the ratio of those odds. In all hour categories of tarmac time, the odds of cancellation were greater in 2010 than in 2009, because all of the odds ratios exceed 1. Moreover, the differential in the odds ratio of cancellations across the 2 years increased with the time a flight was on the tarmac. For flights that were on the tarmac for less than an hour, the odds of a cancellation were about one-third higher in 2010 than in 2009. But the longer a flight remained on the tarmac, the greater the relative odds of cancellation in 2010 compared with 2009. For flights with 61 to 120 minutes of tarmac delay, the odds ratio rose to 2.14, indicating that the odds of a cancellation more than doubled in 2010 compared with 2009, and for flights with 121 to 180 minutes of tarmac delay, the odds of cancellation more than tripled over that same time period. Finally, the odds ratios in table 18 are very similar to those presented in table 15, indicating that the inclusion of key variables to control for other factors did not have much effect on our findings related to the tarmac rule. Table 19 provides the odds ratios from the logistic regression model for all other variables included in the tarmac-cancellation model. Some of the key findings are:
• Flights departing from or destined to an airline's hub airport are less likely to be canceled.
• Flights in evening hours are less likely to be canceled than flights departing in the afternoon.
• Flights of greater than 750 miles are less likely to be canceled than flights of 500 to 750 miles.
• Flights are more likely to be canceled if the departure airport or arrival airport is experiencing severe weather at or around the time of scheduled departure or arrival, respectively.
• Flights are more likely to be canceled if a ground stop or ground delay was in effect at either the departure airport or the arrival airport at the scheduled time of departure.
Table 20 provides the findings for the gate-cancellation model, which assesses the likelihood of precancellations, adjusted to account for factors other than the tarmac delay rule that may influence the incidence of cancellation. One significant finding is that the odds ratio for the rule change is substantially greater, when adjusted, than indicated by the simple unadjusted odds ratio shown in table 16. The model results indicate that the odds of gate cancellations rose by 24 percent after the rule went into effect, whereas the simple result indicated only a 13 percent increase in those odds. This suggests that to understand the independent correlation between the tarmac delay rule and the likelihood of gate cancellation, it is important to control for the other factors that are likely correlated with such cancellations. Findings from the gate-cancellation model suggest:
• Gate cancellations are more common when a flight is departing from or destined to one of the seven most congested airports in the United States.
• Gate cancellations are less common for flights scheduled to depart in the evening, compared to flights departing in the afternoon.
• Gate cancellations are more common when severe weather is affecting either endpoint airport of a flight at the relevant hour.
• Gate cancellations are more common for very short flights, compared to flights of 500 to 750 miles in distance.
• Gate cancellations are less common for flights of more than 750 miles, compared to flights of 500 to 750 miles.
• Gate cancellations are more common if a ground delay or ground stop was in place at the origin or destination airport at the time of scheduled departure.
In this model, it also appears that flights to an airline's hub airport are more likely to be canceled. We ran the models using several other specifications, most of which involved alternative variable specifications. These runs indicated that our findings for the tarmac rule were robust across these specifications. Alternatives included the following:
• Variations on how the specific airlines were grouped. In the base case, we classified airlines into three categories: legacy airline, low-cost airline, and all others. In an alternative specification, we classified airlines as large or small based on the number of enplanements.
• Variations for characterization of origin and destination airports. In the base case models, we included dummy variables to indicate that the airport (origin or destination) was one of the seven most congested airports in the United States. In an alternative specification, we used 62 dummy variables to indicate whether the airport (origin or destination) was one of the 31 largest airports.
• Alternative measure for poor weather conditions. In the base case, we classified weather at endpoint airports as severe if a weather event occurring around the time of the flight would be considered highly disruptive to aviation activity. In an alternative specification, we included both severe and moderately disruptive weather conditions.
• Alternative distance measure. In the base case, we classified distance into broad mileage categories.
In an alternative specification, we entered distance divided by 100 as a continuous variable.
• Elimination of flights that were canceled after a tarmac delay, for the gate model. For the gate model, we included flights that left the gate, even if they were later canceled. We did so because the airlines were attempting to get these flights off the ground when they were making gate-cancellation decisions, and so we treated these flights as nongate cancellations. In one sensitivity run, we eliminated any flights that left the gate but were later canceled.
In addition to the contact named above, Paul Aussendorf, Assistant Director; Amy Abramowitz; Kyle Browning; Lauren Calhoun; Anne Doré; Grant Mallie; Michael Mgebroff; Sara Ann Moessbauer; Josh Ormond; and Melissa Swearingen made key contributions to this report.
Flight delays and cancellations are disruptive and costly for passengers, airlines, and the economy. Long tarmac delays have created hardships for some passengers. To enhance passenger protections in the event of flight disruptions, the U.S. Department of Transportation (DOT) recently introduced passenger protection regulations, including a rule that took effect in April 2010 designed to prevent tarmac delays of more than 3 hours (the tarmac delay rule), as well as other efforts to improve passenger welfare. As requested, this report addresses (1) whether flight delays and cancellations differ by community size; (2) how DOT's tarmac delay rule has affected passengers and airlines; and (3) how passenger protection requirements in the United States, Canada, and the European Union (EU) affect passengers and airlines. GAO analyzed DOT data, including through the use of regression models, as well as data from FlightStats, a private source of flight performance information. GAO also reviewed documents and interviewed government, airline, and consumer group officials in the United States, Canada, and the EU. Airports in rural communities have higher rates of delays and cancellations than airports in larger communities, but DOT data provide an incomplete picture of this difference. DOT's data include flights operated by the largest airlines, representing about 70 percent of all scheduled flights. GAO's analysis of FlightStats data, representing about 98 percent of all scheduled flights, shows more substantial differences in flight performance trends by community size than DOT data. DOT has historically not collected data from smaller airlines because of the burden it could impose on these airlines, but without this information, DOT cannot fully achieve the purpose of providing consumers with information on airlines' quality of service. DOT's tarmac delay rule has nearly eliminated tarmac delays of more than 3 hours (180 minutes), which declined from 693 to 20 incidents in the 12 months following the introduction of the rule in April 2010. While this has reduced the hardship of long on-board delays for some passengers, GAO's analysis suggests the rule is also correlated with a greater likelihood of flight cancellations. Such cancellations can lead to long overall passenger travel times. Airlines and other aviation stakeholders maintain that the tarmac delay rule has changed airline decision-making in ways that could make cancellations more likely. To test this claim, GAO developed two regression models, which controlled for a variety of factors that can cause cancellations and measured whether the time period following the imposition of the tarmac delay rule is correlated with an increase in cancellations. The two models assessed flights canceled before and after leaving the gate, for the same 5 months (May through September) in 2009 and 2010. In both cases, GAO found that there was an increased likelihood of cancellation in 2010 compared to 2009. EU requirements provide airline passengers with more extensive protections, such as care and compensation, for flight delays, cancellations, and denied boardings than do U.S. or Canadian requirements. But these protections may also increase costs for airlines and passengers. For example, some airline officials in the United States and the EU told GAO that increases in the amount of denied boarding compensation have increased their overall costs.
Additionally, enhanced passenger protections, such as those in the EU, can create enforcement challenges if regulations are unclear or not universally enforced. GAO recommends that DOT collect and publicize more comprehensive data on airlines' on-time performance and assess the full range of the tarmac delay rule's costs and benefits and, if warranted, refine the rule's requirements and implementation. DOT did not comment directly on the recommendations, but indicated that it would soon begin a study of the effect of the tarmac delay rule.
SIPC's mission is to promote confidence in securities markets by seeking to return customers' cash and securities when a broker-dealer fails. SIPC provides advances for these customers up to the SIPA protection limits—$500,000 per customer, except that claims for cash are limited to $250,000 per customer. SIPA established a fund (SIPC fund) to pay for SIPC's operations and activities. SIPC uses the fund to make advances to satisfy customer claims for missing cash and securities, including notes, stocks, bonds, and certificates of deposit. The SIPC fund also covers the administrative expenses of a liquidation proceeding (including costs incurred by a trustee, trustee's counsel, and other advisors) when the general estate of the failed firm is insufficient. SIPC finances the fund through annual assessments it sets for member firms, plus interest generated from its investments in Treasury notes. If the SIPC fund becomes, or appears to be, insufficient to carry out the purposes of SIPA, SIPC can borrow up to $2.5 billion from Treasury through SEC. That is, SEC would borrow the funds from Treasury and relend them to SIPC. According to SIPC senior management, recent demands on the fund, including from the Madoff case, together with a change in SIPC bylaws that increased the target size of the fund from $1 billion to $2.5 billion, led SIPC to impose new industry assessments totaling about $400 million annually. The assessments, equal to one-quarter of 1 percent of net operating revenue, will continue until the $2.5 billion target is reached, according to SIPC senior management. The new assessments replaced a flat annual assessment of $150 per member firm. According to SIPC, the new assessments have averaged $91,755 per firm, with a median of $2,095. SIPA authorizes SIPC to begin a liquidation action by applying for a protective order from an appropriate federal district court if it determines that one of SIPC's member broker-dealers has failed or is in danger of failing to meet its obligations to customers and one or more additional statutory conditions are met. The broker-dealer can contest the protective order application. If the court issues the order, the court appoints a trustee selected by SIPC, or, in certain cases, SIPC itself, to liquidate the firm. While SIPC designates the trustee, that person, once judicially appointed, becomes an officer of the court. As such, the trustee exercises independent judgment and does not serve as an agent of SIPC. Under SIPA, the trustee must investigate facts and circumstances relating to the liquidation; report to the court facts indicating fraud, misconduct, mismanagement, or irregularities; and submit a final report to SIPC and others designated by the court. Also, the trustee is to periodically report to the court and SIPC on his or her progress in distributing cash and securities to customers. To the extent that it is consistent with SIPA, the proceeding is conducted pursuant to provisions of the Bankruptcy Code. Promptly after being appointed, the trustee must publish a notice of the proceeding in one or more major newspapers, in a form and manner determined by the court. The trustee also must see that a copy of the notice is mailed to existing and recent customers listed on the broker-dealer's books and records, and provide notice to creditors in the manner SIPA prescribes. Customers must file written statements of claims. The trustee's notice includes a claim form, informs customers how to file claims, and explains deadlines.
Once filed, the claims undergo various reviews, according to the Trustee. First, the Trustee's claims agent reviews claims for completeness; if information is found to be missing, the claims agent sends a request for additional information. Second, the Trustee's forensic accountants review each claim form, information from the Madoff firm's records about the account at issue, and information submitted directly by the claimant. The Trustee uses the results of this review in assessing his determination of the claim. Finally, claims move to SIPC, where a claims review specialist provides a recommendation to the Trustee on how each claim should be determined. Once that recommendation has been made, the Trustee and trustee's counsel review it, as well as legal or other issues raised previously. When the Trustee has decided on the resolution of a claim, he issues a determination letter to the claimant. As of the start of 2012, the Trustee had received 16,519 customer claims in the Madoff proceeding and reached determinations on all but two of them. According to SIPC, many Madoff customers were older. For example, according to Trustee information we reviewed on his hardship program (described later in this report), more than half of applicants were age 71 or older. In a liquidation under SIPA, amounts in the customer property fund generally are distributed to the failed firm's customers according to the value of their account holdings, or "net equity." SIPA generally provides that the net equity amount is what would have been owed to the customer if the broker-dealer had liquidated all of the customer's "securities positions," less any obligations of the customer to the firm. In the Madoff case, if the Trustee recovers less than the total amount of allowed claims, some claimants likely will receive only a portion of their allowed claims. The Trustee told us his goal is to recover the full amount, but that is not likely, given developments in litigation and decisions to settle cases. In SIPA liquidations not involving fraud, trustees typically determine that the amounts owed to customers match the amounts shown on their final statements, in what is known as the "final statement method" (FSM). However, in cases involving fraud, amounts in customer accounts may not correspond to statement amounts. In the Madoff case, the Trustee determined that the securities positions shown on customer statements were fictitious. As a result, supported by SIPC and SEC, he decided to value each customer's net equity according to the amount of cash deposited less any amounts withdrawn—a method known as the "net investment method" (NIM). Under NIM, Madoff claimants generally divide into two categories: "net winners," who have withdrawn more than the amount they invested with the Madoff firm, and "net losers," who have withdrawn less than they invested. Some customers challenged the Trustee's decision on valuing customer net equity, but two courts have considered the issue—the U.S. Bankruptcy Court for the Southern District of New York and the U.S. Court of Appeals for the Second Circuit—and each has affirmed the Trustee's decision to use NIM. In June 2012, the U.S. Supreme Court declined to hear an appeal on the issue, thus concluding legal challenges to the Trustee's decision. The Trustee has taken various steps to recover assets for distribution to former Madoff customers, including recovery of bank account balances and sale of the firm's assets.
In addition, the Trustee has filed hundreds of lawsuits known as "avoidance actions" or "clawbacks." Avoidance powers enable a trustee to "avoid," or set aside, certain transfers made by a debtor—here, the Madoff firm—prior to the bankruptcy filing, in order to recover transferred funds for the benefit of the estate. In pursuing these actions, a trustee can generally seek the return of fictitious profits paid to investors and, in some cases, principal amounts withdrawn, for specified periods of time—90 days, 2 years, and 6 years preceding the filing. In doing so, the Trustee has available state statutes, common law claims, and federal bankruptcy law upon which to build his cases. Actions also can vary according to whether the Trustee alleges a customer acted in good or bad faith. Under the Bankruptcy Code, a recipient's good faith or bad faith is not relevant to whether the transfer is avoidable but does affect the extent of the recipient's liability. Table 1 summarizes the legal avenues available. A fuller discussion of the legal remedies available can be found in appendix II. Generally, transfers are avoidable as actually fraudulent if the debtor-transferor had intent to defraud, or as constructively fraudulent if they were made without fraudulent intent but for less than equivalent or fair value while the debtor was insolvent. Courts generally presume Ponzi scheme payouts to be actually fraudulent. As for constructive fraud, Ponzi scheme transfers in excess of principal are not made for value—meaning they are not paid to satisfy any legitimate obligation the debtor owed to the recipient—and thus fall within the constructive fraud provisions. In our analysis of information obtained from the Trustee, we identified 7,994 accounts with at least one transaction. We examined the names on each of these accounts to determine whether individuals and families (to which we refer jointly as individuals) or institutions held the accounts. We found that individuals held more than three-fourths (77 percent) of accounts, while almost one-quarter (23 percent) of accounts were held by institutions, such as charities, pension funds, and feeder funds. Using these groupings, we further examined the account holders' claims outcomes—whether they were net winners or net losers—and the pattern of their transactions leading up to the Madoff firm's collapse. The Trustee, however, noted that accounts held by institutions generally represented funds from individuals as well, because with the exception of funds from nonprofit organizations, the institutions were investing money on behalf of individuals. The Trustee said that based on his examination, he was not aware of any direct corporate investment in the Madoff firm. As shown in table 2, our analysis indicates that a higher proportion of accounts held by individuals (60 percent) were net winners that had withdrawn more than they had deposited over the lifetime of their accounts, compared to accounts held by institutions (50 percent). We also found that more institutional accounts (40 percent) were net losers than were individual accounts (29 percent). Among individual accounts, net winners outnumbered net losers by 2-to-1, with outcomes split more evenly for institutional accounts. Overall, for both categories combined, we found that 57 percent of account holders were net winners. About one-third (32 percent) were net losers, whose total withdrawals were less than their total investment deposits.
The remaining 11 percent of accounts had zero balances from a net investment perspective, with account holders having withdrawn exactly what they had invested. Table 2 also shows that as a group, institutional accounts lost principal amounts they invested in the fraud, while accounts owned by individuals in the aggregate withdrew more than they invested. For individuals, the total net investment position at the time of the Madoff firm’s failure was ($767 million), meaning they had withdrawn more money than they originally invested. As a result, individual account holders as a group were net winners. By contrast, the total net investment position for institutional accounts as a group showed that they were net losers, having made nearly $3.0 billion more in deposits than withdrawals. We note that while we divided the population into account types for analytical purposes, the Trustee and SIPC executives stressed that each customer claim was determined on a case-by-case basis without regard to whether the account holder was an individual or institution. In addition, SIPC executives noted that notwithstanding the overall totals for the two customer categories, there were nevertheless both net winners and net losers in each grouping. Our analysis found that individual and institutional accounts had similar deposit and withdrawal activity throughout the 27-year period we examined, including during the period immediately before the failure in 2008. For both groups, total annual deposits and total annual withdrawals increased steadily since 1981, growing by greater amounts in more recent years. Overall, deposit volume more than doubled between 2005 and 2007, rising from $4.5 billion to $9.4 billion annually. Withdrawal volume grew more slowly from 2005 to 2007, rising only about 14 percent, from $5.7 billion to $6.4 billion. However, withdrawal volume in 2008, at $12.6 billion, was almost double the volume in 2007. Figure 1 breaks out total deposit and withdrawal activity by individual and institutional accounts. As noted, the pattern of activity is similar for each group, except that deposit volume for individual accounts was about the same in 2007 and 2008, at $3.4 billion annually, while deposit volume from institutional accounts fell between 2007 and 2008, from $6.0 billion to $5.1 billion. Our analysis also showed that in the final year of the Madoff firm’s existence (2008), both individuals and institutional account holders withdrew large amounts of principal they had invested. Figure 2 shows such principal withdrawals, excluding any fictitious profits, during quarterly periods leading up to the failure. Withdrawals of principal began to increase three quarters before the failure, with most of the increase in withdrawals occurring in the 90 days before the Madoff firm’s failure. Under his clawback authority, described earlier, the Trustee can, among other things, seek to recover principal withdrawals during this period. Accounts that had large withdrawals, particularly those made just before the Madoff firm’s collapse, could suggest that such customers knew of the fraudulent scheme and were attempting to avoid suffering losses when the firm failed. Although the increase in withdrawals we identified during the period preceding collapse could suggest some customers anticipated the firm’s failure, this activity may also have been due to investor reactions to the financial crisis, which was peaking at the time, or to other factors affecting where investors place their money. 
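Separating principal withdrawals from fictitious profits, as in the figure 2 analysis, requires a convention for attributing each withdrawal. The sketch below assumes one common convention, treating each withdrawal as a return of remaining principal first, with any excess counted as fictitious profit; this is an illustrative assumption on our part, not necessarily the forensic method applied in the liquidation.

```python
# Hypothetical split of an account's withdrawals into principal returned
# vs. fictitious profit. Assumes (for illustration only) that withdrawals
# draw down remaining principal first.

def split_withdrawals(transactions):
    """transactions: ordered (kind, amount) pairs, kind "D" or "W"."""
    principal_remaining = 0.0
    principal_out = profit_out = 0.0
    for kind, amount in transactions:
        if kind == "D":
            principal_remaining += amount
        else:  # withdrawal
            from_principal = min(amount, principal_remaining)
            principal_remaining -= from_principal
            principal_out += from_principal
            profit_out += amount - from_principal  # beyond principal: fictitious
    return principal_out, profit_out

# $1.0M deposited; $400K and then $800K withdrawn: the account recovered
# all $1.0M of principal plus $200K of fictitious profit.
txns = [("D", 1_000_000), ("W", 400_000), ("W", 800_000)]
print(split_withdrawals(txns))  # (1000000.0, 200000.0)
```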
As we describe later, the Trustee has filed extensive litigation to recover withdrawals that many customers made. In appendix III, we show other results of our analysis, listing the largest Madoff accounts by transaction volume, total withdrawals, and net winnings. As required by SIPA, the Trustee solicited claims from Madoff customers for reimbursement of their losses, with approved claimants eligible to receive a share of any cash or securities Madoff held on behalf of his customers, plus assets recovered by the Trustee during the liquidation proceeding. According to claims data provided by the Trustee, a total of 16,519 claims were filed. As shown in figure 3, most of the claims were denied. Sixty-six percent of the claims were denied because the filers had not invested directly with the Madoff firm themselves (referred to as “third party” denials). Instead, they had invested in feeder funds or other vehicles that owned the accounts at the firm. The Trustee determined that under SIPA, only those who had invested directly with the Madoff firm were customers for claims purposes. Among the 5,543 claims from direct investors remaining after denial of the third party claims, the Trustee denied 2,703 claims (16 percent of all claims). Almost all of the denied claims were from net winners, meaning that they had withdrawn more money than they invested. The Trustee allowed 2,425 claims (15 percent of all claims), totaling $7.3 billion. Of the allowed claims, the majority were filed by net losers, meaning they had withdrawn less from their account than they had invested. Figure 4 details the Trustee’s disposition of claims, by net winner and loser status. However, there were exceptions to the general pattern of approvals and denials by net investment status. For example, 10 net winner claims (less than 1 percent of all claims) were allowed. According to the Trustee, in some cases, these claims were allowed because the account holders repaid certain withdrawals made from their accounts, and when they returned those funds, they became eligible for an allowed claim. In other instances, net winner claims were allowed when combined with net loser account(s) held by the same party. The Trustee also denied seven net loser claims, most often because the account was combined with net winner account(s) held by the same party. The Madoff Trustee is pursuing various litigation to recover assets from customers and others that can be used to reimburse those customers that have allowable claims under SIPA. For those customers that withdrew fictitious profits in excess of their investments—net winners—the Trustee is pursuing more than a thousand lawsuits to recover these funds as allowed under federal bankruptcy law and state law. The Trustee is also suing some individuals and entities that he argues knew or should have known about the fraud and from whom he is seeking to recover more than just fictitious profits. In addition, the Trustee has filed other actions involving feeder funds. Through such efforts, the Trustee has obtained billions of dollars in settlement agreements with customers that either faced the possibility of litigation or had been sued already. At the same time, the Trustee established a hardship program, in which he expedited claims processing or declined to pursue litigation for individuals who could demonstrate financial distress. As discussed earlier, various laws grant authority to the Trustee to seek return of funds paid out by the Madoff firm.
This includes federal bankruptcy and state laws, which allow actions to return transfers from the failed entity—the debtor—made in different periods of time, including within 90 days, 2 years, or 6 years prior to the bankruptcy filing. In deciding whether to bring an action, the Trustee told us that he has generally considered the same factors as would apply in a typical bankruptcy case, or those that a private plaintiff would likely consider. In particular, the Trustee said he has considered the costs and benefits of taking the action—that is, how much could be recovered and at what cost—as well as prospects for success and potential legal barriers, such as the statute of limitations. While the overall goal of an action is to recover a reasonable amount for the customer property fund, the Trustee also told us that when filing an action, he might not seek all possible assets. Many former account holders were older and renting their homes, he said. At the same time, some customers have assets that are protected against judgments, such as homes or pension assets, which reduces the amount of assets he can pursue. The Trustee told us that while the amount of potential recoveries is important, he has not established any minimum amount in deciding whether to sue. Underlying the decision on whether to sue, he said, is the acknowledgment that each time he declines to pursue an action, he allows a customer who withdrew more than the principal invested to keep money taken from others who had not recovered as much as they had invested. According to information the Trustee provided, his litigation has targeted a high portion of the amounts withdrawn from the firm during the periods preceding failure for which he is legally authorized to pursue recoveries. For example, the total amount withdrawn from the Madoff firm during the 90-day period under which federal bankruptcy law allows recoveries was $5.5 billion, he said. Of that, his lawsuits have sought $5.2 billion, or 95 percent. The total amount withdrawn under the 2- and 6-year recovery periods allowed by federal and state law was $9.7 billion. Of that, the Trustee has sought $8.4 billion, or 87 percent, he said. According to the Trustee, the differences between amounts withdrawn and amounts sought reflect consideration of the factors bearing on whether to sue, as noted earlier, as well as settlements to return funds reached before an action was filed. According to our review, the Trustee has filed 1,002 actions seeking to recover $3.5 billion from Madoff customers that were net winners but that are not alleged to have had knowledge of the fraud or been in a position to know about it—referred to by the Trustee as “good faith” defendants. According to the Trustee, the good faith designation indicates that while defendants profited from the fraud, he did not have evidence they had knowledge of the scheme or were in a position to know about it. When bad faith is not alleged, the laws providing the Trustee with the ability to recover funds allow him to seek principal withdrawn during the 90 days preceding the Madoff firm’s failure—the preference period—plus fictitious profits withdrawn in the 6 years prior to the failure. The Trustee told us that although he anticipated negative publicity from filing suits against customers who did not know about the fraud, the $3.5 billion at issue in the good faith cases was an amount too large to ignore. In the good faith cases, the amounts sought by the Trustee range from a low of $33,000 to a high of $152 million.
The average amount sought was $3.5 million, with the median at $1.4 million. Some of the good faith cases have settled, but about 88 percent of complaints remained in litigation or were on appeal as of May 2012, according to information we obtained from the Trustee. Table 3 summarizes the status of the good faith cases. In general, most good faith cases are proceeding slowly, with the Trustee seeking to mediate and settle them, he told us. We examined a random sample of 50 good faith cases and found them to be similarly structured and generally citing the same federal and state laws as grounds for recovery. The good faith actions generally include the names of the defendants, their account numbers, the amounts sought, and the legal basis on which the Trustee relied to seek recovery. In several filings, the Trustee also included an accounting of deposits and withdrawals. In contrast to a bad faith action, the good faith complaints we reviewed generally do not include narrative details of the defendant’s history or relation to the Madoff firm. Table 4 summarizes the 10 largest actions in our sample, by amount sought. The Trustee has pursued a variety of legal approaches in the cases we sampled. Thirty-eight cases cited the 2-year period provided in the federal bankruptcy statute, while 48 cases cited the 6-year period. In six of the cases, the Trustee sought to recover transfers from the 90-day preference period. In 20 of the 50 cases, the Trustee also sued to recover funds he alleged were transferred from defendants to third parties, known as “subsequent transferees.” For instance, in one case, the Trustee alleged that an individual account holder transferred to a relative some or all of $1.2 million in fictitious profits withdrawn. Additionally, in five cases, the Trustee has also sought to temporarily disallow customer claims filed by defendants. For example, one limited liability company filed a claim for SIPC coverage in May 2009, and the Trustee filed suit to recover funds in December 2010. As part of the action, the Trustee sought to disallow the SIPC claim until $4 million was recovered for the Madoff estate. In one good faith case in our sample, the defendant was Madoff’s nephew, who was considered an insider and had received fraudulent transfers, but was not deemed to have acted in bad faith. Specifically, according to the Trustee, the nephew worked full-time at the Madoff firm beginning in 1980, most recently as director of administration. The Trustee alleged that he received preference transfers and fictitious profits, and improperly used Madoff firm funds to pay for personal expenses. However, the Trustee told us his investigation indicated the relative was not likely aware of the fraud. In addition to the good faith lawsuits, the Trustee has also filed 30 actions against individuals or entities in which he has alleged the defendants acted in bad faith because they either knew, or should have known, of Madoff’s fraudulent investment scheme. According to the Trustee, asserting that a defendant acted in bad faith allows him to potentially recover greater amounts than in other actions. For example, in a bad faith action, a trustee can seek to recover not only fictitious profits but also principal amounts invested.
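In rough terms, the difference in recoverable scope between the two kinds of actions can be sketched as follows. This is a deliberately simplified illustration with hypothetical amounts; actual demands turn on the specific statutes invoked and the facts of each case.

```python
# Simplified contrast of what a trustee may seek from good faith vs.
# bad faith defendants, per the recovery periods described above.
# All figures are hypothetical.

def recoverable(preference_principal, fictitious_profits_6yr,
                other_principal_withdrawn, bad_faith=False):
    """Good faith: 90-day preference principal + 6-year fictitious profits.
    Bad faith: may additionally reach principal withdrawn at other times."""
    amount = preference_principal + fictitious_profits_6yr
    if bad_faith:
        amount += other_principal_withdrawn
    return amount

print(recoverable(250_000, 1_150_000, 2_000_000))                  # 1,400,000
print(recoverable(250_000, 1_150_000, 2_000_000, bad_faith=True))  # 3,400,000
```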
Additionally, while New York debtor and creditor law generally limits to 6 years the period for which a trustee can seek recovery from parties that received funds from a bankrupt entity, a trustee can sue a customer that acted in bad faith in earlier years, if it can be shown the defendant knew or should have known of the fraud and certain other conditions are met. The Trustee can also pursue common law claims, such as conversion and unjust enrichment, against bad faith defendants that received other funds from the bankrupt entity, such as cash or purchases of goods made on their behalf. Common law claims allow the Trustee to seek compensatory damages to recover sums allegedly received improperly by the defendant. However, although the Trustee has the authority and standing to enforce common law claims, two recent judicial decisions dealing specifically with the Madoff liquidation limited his ability to pursue such claims. Our review of the 30 bad faith actions showed that the complaints list several hundred defendants and seek $11 billion overall, with demand amounts ranging from more than $500,000 to approximately $6.7 billion. Defendants include Madoff family members, employees of the Madoff firm, individuals who identified investors for Madoff, and other business associates of Madoff. In addition to suing defendants as individuals, the complaints also seek to recover funds from vehicles these individuals used to invest in the Madoff scheme, including corporations, limited partnerships, trusts, estates, partnerships, foundations, and profit-sharing plans. Additionally, the Trustee is suing individual retirement accounts to recover funds allegedly received from another defendant. According to the Trustee, his objective in alleging bad faith, even for some net loser defendants, is to maximize recoveries for distribution to other harmed customers. Twenty-nine of the 30 bad faith complaints are available publicly, either through the U.S. Courts’ Public Access to Court Electronic Records (PACER) system or the Trustee’s website, http://www.madofftrustee.com. One bad faith complaint is under seal and unavailable. According to the Trustee, this complaint involves a husband and wife as defendants and is sealed because it includes allegations about the defendants’ federal income taxes. In many of the available cases, the Trustee alleges that defendants disregarded red flags that could have revealed the Ponzi scheme. In some instances, the Trustee alleges that defendants were also sophisticated investors with sufficient education or specialized work experience in securities law, accounting, or finance that they should have known their returns were unusual. Table 5 shows the most frequently cited bases on which the Trustee alleged bad faith, according to our analysis of available cases. The Trustee told us that initially, he believed that anyone working in Madoff’s investment advisory unit would be a bad faith participant. However, through additional investigation, including document examination, taking of depositions, and reconstruction of computer records, the Trustee said he determined that not all such employees may have been aware of the fraud. For example, the Trustee initially believed Madoff’s secretary was involved in the fraud, given her position close to Madoff, but later determined she did not have knowledge of the fraudulent activities. In several cases, the Trustee is alleging bad faith against Madoff relatives who worked at the firm, or other, nonemployee relatives who received funds.
For example, the Trustee sued Madoff’s wife to recover funds transferred from the firm that were used to buy a yacht for approximately $2.8 million, to pay off a $1.1 million credit card balance, or to fund other investments. The Trustee sued the wife of Madoff’s brother, who worked at the firm, for receiving $1.5 million in purported salary from 1996 to 2008, after the Trustee’s investigation concluded she never performed any work for the position. Among defendants who were not employees or relatives are individuals who allegedly identified and recruited investors for the Madoff firm. For example, the Trustee alleges an accounting firm pooled hundreds of millions of dollars for investment while keeping tens of millions of dollars for itself. Although many Madoff customers experienced returns that were outsized relative to market developments, some bad faith defendants received exceptionally high returns, the Trustee alleges. In one case, for example, the Trustee cites returns as high as 175 percent. Moreover, in some cases, the bad faith defendants include large investors whose financial or business sophistication, the Trustee alleges, provided them with the ability to realize that they were benefiting from a fraud. For example, the Trustee alleges one set of defendants is a closely held family business that was also an investor in another investment fraud so similar to the Madoff firm that it should have been clear Madoff was also running a fraud. In his complaint, the Trustee quotes an employee of the business as saying shortly after Madoff’s arrest, “Our CIO [chief information officer] always said it was a scam, ‘too good to be true.’” Another bad faith defendant, according to the Trustee, had been closely associated with Madoff professionally and socially for decades, investing in the firm through more than 60 entity and personal accounts. According to the Trustee, some of the accounts reported consistently high annual returns between 20 percent and 24 percent, with only 3 months of negative returns over 12 years. Other accounts of the defendant sometimes experienced returns greater than 100 percent or even 300 percent, he alleges. According to the Trustee, the defendant acted as an investment advisor, and thus should have known such returns were not likely possible without fraud. As shown in table 6, the 10 largest bad faith complaints, measured by amount sought, seek a total of $10.7 billion, or more than 97 percent of the $11.0 billion the Trustee is seeking in all 30 bad faith litigations. As of April 2012, the Trustee had obtained settlements, or a partial settlement, in four of these cases. As noted, feeder funds are investment vehicles that collected funds from investors and then channeled the money to the Madoff firm. The Trustee has filed 27 bad faith actions against feeder fund defendants, seeking nearly $100 billion. The amounts sought range from a low of $10.4 million to a high of $58.5 billion. The average amount sought is $3.7 billion, with a median of $182.4 million. The amounts the Trustee is seeking include fictitious profits, principal amounts invested, fees, interest, and in some cases, punitive damages. Typically, the Madoff feeder funds raised money from high-net-worth individuals in operations that spanned the globe. In addition to the feeder funds themselves, banks and other financial institutions are among the defendants in these cases.
For example, some were custodians of the feeder funds, some were administrative agents, and some were involved in marketing the funds to prospective investors. Some banks and financial institutions had more than one role in their involvement with feeder funds. According to the Trustee, feeder funds account for about $14.2 billion of the approximately $19.6 billion in total principal lost by all customers. In suing for bad faith, the Trustee has alleged these defendants either knew or should have known of the Madoff fraud. In some of the feeder fund cases, the Trustee is also alleging participation in, and concealment of, the fraud. With the bad faith allegations, as with bad faith cases filed against individuals, the Trustee can sue to recover funds beyond principal withdrawn in the preference period and fictitious profits withdrawn in the 6-year period. In addition, he can use common law grounds to seek additional damages based on alleged harm caused by parties aiding the fraud. As a result, the total he is seeking in the six largest feeder fund actions is $94 billion, which greatly exceeds the amount of principal these entities invested into the scheme. Table 7 summarizes these cases, which represent nearly 95 percent of the total amount sought in feeder fund cases. The Trustee told us that compared to other litigation he is pursuing, feeder fund actions require different types of evidence, due to the nature of their operations. For example, feeder fund managers benefited through investment and management fees charged by the funds, which were subsequently paid to individuals as salaries and bonuses. The Trustee considers these payments to be fraudulent. In the Tremont case, for instance, the firm managed five funds that invested directly with the Madoff firm, as well as more than a dozen other funds that were indirect Madoff investors, via investments in the directly invested funds. The Trustee alleged that in the 6 years before the Madoff firm failed, Tremont defendants received more than $180 million in management, administration, and other fees; bonuses; profits; compensation; dividends; and partnership distributions. Additionally, a number of the feeder fund defendants were net losers. In such cases, the Trustee also sought to defer SIPA claims they filed pending resolution of the Trustee’s actions against them. In general, given the complexities of feeder fund relationships, the Trustee determined amounts sought by looking at principal, not fictitious profits, he told us. As shown in table 7, the Trustee has alleged that banks, financial services entities, and related individuals facilitated the Madoff fraud. For example, HSBC served as marketer, custodian, and administrator for numerous feeder funds. The Trustee alleged that HSBC surrendered all custodial duties to the Madoff firm, while continuing to collect fees, and without any disclosure to investors. The Trustee stated that this surrender removed a system of checks and balances and allowed the Madoff firm to assert the existence of assets and trades that never existed. As for marketing feeder funds, the Trustee maintains that HSBC acquiesced to Madoff’s demands to keep his name out of offering documents, despite the bank’s own concerns about its inability to conduct proper due diligence on Greenwich Sentry, a Madoff feeder fund. Feeder funds also worked with banks to create derivative products based on feeder fund returns, according to the Trustee. 
For example, investors in these “leveraged notes” would be entitled to receive returns based on a feeder fund’s returns, while a financial institution, usually a bank, would receive fees for structuring the notes and interest for lending funds used as part of the investment. Concurrently, the bank would purchase shares in the feeder fund in order to hedge its exposure in the leveraged notes. The end result, according to the Trustee, was that hundreds of millions of additional dollars were invested into the Madoff operation. In bringing the feeder fund actions, the Trustee has made a number of specific allegations to support bad faith and complaints of illicit funds received. While particular allegations vary across cases, we found the six cases we examined shared three major elements. First, defendants are alleged to have profited from the fraud primarily through fees received. For example, UBS is alleged to have received fees for purportedly serving in custodial and asset management functions for the feeder funds Luxalpha and Groupement Financier. UBS sponsored the formation of Luxalpha and served as prime banker for Groupement Financier. The Trustee considers the fees UBS derived to be customer property that should be recovered. Second, the Trustee alleges defendants breached their duty of due diligence to their customers. For example, the Trustee alleged that Fairfield Greenwich Group, Madoff’s largest feeder fund group, did not “properly, independently, and reasonably perform due diligence into the many red flags strongly indicating Madoff was a fraud.” Third, the defendants are alleged to have either aided and abetted or actively participated in the fraud. In the JP Morgan Chase case, the Trustee alleged that through its interactions in different capacities—as banker, lender, and investor—with the Madoff firm over 20 years, the bank was uniquely positioned to see the fraud and put a stop to it. Instead, the Trustee alleges that this institution continued to conduct business as usual, which allowed the firm to profit and the fraud to continue unabated. As with other actions, the Trustee has sought feeder fund recoveries based on the 90-day preference period and the 2- and 6-year periods, relying on both the federal bankruptcy statute and New York state law. In instances where the defendant was considered an insider, the Trustee sought to extend the preference period to 1 year. The Trustee has also sought to recover transfers to subsequent transferees. In a number of cases, for example, the Trustee has pursued this course against individuals paid a salary or a bonus by the banks or feeder funds involved. In hundreds of cases, the Madoff Trustee has reached settlements with former customers and others, either before or after filing clawback actions, and these agreements have produced recovery of a significant amount of assets. As of April 2012, the Trustee had entered into 441 settlement agreements in which the opposing parties agreed to return about $8.4 billion—an amount equal to about 49 percent of the approximately $17.3 billion in principal investments lost by customers who filed claims. According to our review, the settlement amounts range from a low of $36 to a high of $5 billion, with an average of $19 million and a median of $66,000. Through July 2012, the Trustee had collected 85 percent of total settlement amounts, or about $7.1 billion. Many settlement terms are complex.
For example, settlements with feeder funds that were net losers require these entities to return certain funds they received before the Trustee will consider accepting their loss claims. The Trustee groups settlements into three major categories:

Prelitigation: Settlements reached before the Trustee filed a clawback action.

Litigation: Settlements reached in cases where the Trustee had already filed a clawback action.

Customer avoidances: Recoveries based on the 90-day preference period, but where no clawback action was filed.

A fourth category, “Funds not yet received,” is a temporary accounting of amounts that will be allocated to the three main categories upon receipt of the first settlement payment. Table 8 shows a summary of the Trustee’s settlement agreements by category as of July 2012. As the table shows, for example, the customer avoidance category represents 85 percent of all settlement agreements reached, although these cases have the smallest total dollar value among the categories. Although, as the table shows, the settlements total $8.4 billion, the Trustee through July 2012 had yet to receive $1.2 billion in settlement funds, due to pending appeals and specific provisions in settlement agreements. The Trustee told us he and his counsel consider a number of factors when deciding to enter into a settlement agreement. There are no formal criteria, he said, but factors such as the location of the defendants, litigation risk, and the timing of the clawback can influence the settlement decision. Two of the most important factors, the Trustee told us, are the defendant’s ability to pay and whether the settlement will produce proceeds that enhance the customer fund he is building during the liquidation. The bankruptcy court must approve settlement agreements worth $20 million or more. In addition to whether a settlement is in the best interest of the estate, the court also considers whether a proposed settlement is fair and equitable, and above “the lowest point in the range of reasonableness,” the Trustee told us. In deciding whether a particular agreement falls within that range, the court considers the probability of success in litigation; difficulties of collection; the complexity, expense, inconvenience, and potential for delay of the litigation; and creditor interests. On the key issue of ability to pay, the Trustee said his approach is that he would rather agree to a settlement for less money and be able to collect it than win a judgment for a greater sum that he is ultimately unable to collect. He cited the Katz-Wilpon settlement as an example. Originally, the Trustee sued for more than $1 billion, but reached a settlement in April 2012 for $162 million, based on his conclusion of what the defendants could pay. The Trustee also told us he is sensitive about clawing back funds from charitable institutions, saying that although they have received other investors’ funds, he did not want to put them in the position of raising money in order to pay a settlement. We examined all but one settlement agreement among those worth at least $20 million. Appendix IV provides details of these settlements, including amounts sought and obtained, the extent to which settlements have been paid, strategies behind settlements, and key provisions. A number of factors motivate counterparties to settle with him, the Trustee told us.
One is that defendants have fiduciary duties to clients or customers, and amounts at issue in a case might be so significant that, for example, feeder fund managers might conclude their duty is to settle in order to recover assets for customers. Some parties, to protect client confidentiality, may settle in order to prevent disclosure of client information that could become public in litigation. Other parties might seek closure. Overall, the Trustee told us he thinks he has built a solid record in his settlements. The market itself has validated his efforts, he said, which can be seen in increases in the price of Madoff claims being traded following announcements of recent settlement agreements. As part of our review of records provided by the Trustee, we noted that some customer accounts had negative balances. For example, in the Picower case, the records showed a negative balance of $6.3 billion. In theory, this reflected some kind of margin account or debit account, the Trustee told us, even though such an amount would not have been in keeping with standard industry practices. Such negative balances raised questions about whether the reported amounts represented debt owed by customers to the Madoff firm, and if so, whether the presence of such debt diminished the value of settlements obtained from such customers. To obtain the $7.2 billion Picower settlement, for example, the Trustee told us that he agreed to characterize the amount being returned as a loan repayment. However, both the Trustee and SIPC executives told us that the use of this terminology, or the presence of negative balances, does not change the effect of the underlying actions, which was to extract fictitious profits from the Madoff firm. In particular, the Trustee told us, he determined claims for all accounts based on the money-invested-less-money-withdrawn method. Purported indebtedness or negative balances did not affect those calculations, he said, and had no effect on settlement efforts. The Trustee created a hardship program in which he considered the degree of financial distress of Madoff customers in processing claims and deciding whether to pursue clawback litigation. He told us that the program was meant to recognize the harm the Madoff fraud caused to former customers. No other SIPC liquidation has had such an option, the Trustee told us. This program, which was not open to institutional customers, had two elements for individuals who could demonstrate financial hardship:

Claims: The Trustee provided expedited consideration of claim applications. This did not provide applicants with any more favorable treatment, as claims were still determined on the NIM basis. But it accelerated consideration of claim applications, which, if approved, could result in customers receiving any SIPC advances due them more quickly. According to the Trustee, if complete account information was available, a determination on qualification for expedited claim review was made in about 20 to 30 days, with another 20 to 30 days to determine the actual claim. This compares to a typical claim taking 3 months or more.

Clawbacks: The Trustee would not sue for clawbacks, or would drop suits already filed.

The Trustee told us he invited applications to the hardship program, but also included some claimants in the program on his own initiative. Figure 7 shows a breakdown of the Trustee’s consideration of hardship cases. According to the Trustee, he assessed general factors in considering hardship applications, but there were no formal criteria or decision rules.
Instead, the Trustee told us, he applied his judgment after applications were reviewed at the Trustee’s counsel law firm. The general factors, applicable to both the claims and clawback elements of the program, included whether, due to lost investments or the possibility that funds withdrawn must be returned, the customer: needed to return to work, had declared bankruptcy, was unable to pay for living expenses, was unable to provide for dependents, or suffered from health problems. Additionally, the Trustee said he took into account whether former customers used any fictitious profits to pay taxes. If so, the Trustee told us he considered such payments in his decisions. The process for considering hardship applications was similar for both the claims and clawback elements, the Trustee told us. For claims, an attorney reviewed an application and made a recommendation to the Trustee, who then decided the matter. For clawbacks, the Trustee’s counsel team responsible for the litigation reviewed the application, and, if necessary, would seek additional information. The team would make a recommendation to a review committee, composed of five lawyers, for its evaluation. The committee would make a recommendation to the Trustee, who then decided the matter. According to information we reviewed, hardship program applicants were predominantly older, and the most commonly cited reasons for financial distress, for either element, were the inability to pay living expenses and health problems. The Trustee said most cases were not difficult to decide, based on the evidence presented. He told us that while he and others reviewed applications carefully, requesting additional information when necessary, they attempted not to be overly intrusive into applicants’ private affairs. We reviewed a number of applications provided by the Trustee, illustrating acceptance or rejection of customers’ hardship applications. Table 9 summarizes some of the cases we reviewed. As noted, the claims hardship part of the program did not alter outcomes, as it provided only expedited review. For the clawback portion, results varied by type of action, according to information the Trustee supplied in response to our queries. Figure 8 summarizes the results. Within weeks of the Madoff firm’s failure, SEC officials were studying whether clawback actions were permissible, and concluded they were. SEC officials told us the issue was a difficult one, because innocent investors would become the target of lawsuits; at the same time, if the Trustee did not pursue the recovery actions, others would be hurt. From the beginning of the case, SEC thought that clawbacks would produce the bulk of assets available for distribution to customers, because billions of dollars had been withdrawn from the firm shortly before its failure. SEC officials told us they have had discussions with the Trustee (and SIPC) on his clawback litigation, on such issues as the legal theories being employed, risks presented, legal costs, and expected future developments. But they have not had day-to-day involvement in the litigation, nor been involved in developing the legal strategies employed, the officials told us. SEC officials told us an examination will come later, but based on their experience, the Trustee appears to be conducting the litigation in an acceptable manner and has applied the law properly and fairly. They said they were not aware of any particular problems or issues with the litigation, although courts have not always adopted the Trustee’s positions.
SEC officials told us they were not concerned about runaway or needless litigation, and the main outcome of the litigation is that the Trustee has recovered large sums that are considerably more than initially expected. SIPC executives likewise told us they always thought clawbacks would be a critical part of recovering assets for customers. SIPC did not have any discussions with the Trustee about clawbacks prior to his appointment, they said, other than to discuss whether there were adequate resources available at his law firm to bring the expected cases. The officials characterized their involvement with the Trustee as providing institutional knowledge and high-level advice and strategy, albeit with the understanding that the independent Trustee need not accept it. SIPC’s most significant contribution is payment of litigation expenses, SIPC executives told us, because otherwise, the Trustee could not finance his litigation. The SIPC executives told us they consider the Trustee’s litigation a success, but that his efforts are thus far incomplete, as many cases remain outstanding. Although total settlement amounts to date reflect mostly one case ($5 billion for Picower), other settlements still have produced significant sums. The cases the Trustee is pursuing are complex, involving difficult and time-consuming legal work to resolve, the executives said. Costs of the litigation are favorable when compared to what attorneys would collect in contingent lawsuits, they said. Because the Madoff fraud affects customers’ taxable income, it also affects federal tax collections by the U.S. Treasury. Madoff customers can seek tax relief for fraud-related losses in several ways, including one special procedure announced by IRS in the wake of the firm’s failure. However, IRS officials were unable to quantify the overall impact of the fraud on tax collections, and the impact may be reduced by various factors that could limit taxpayers’ ability to take full advantage of the tax relief available. In addition, while the use of either NIM or FSM to determine customer net equity could lead to different outcomes for account holders, either method likely reduces tax revenues. Tax experts expressed concerns about the lack of clarity over how payments stemming from fraud-related avoidance actions, or clawbacks, filed by the Trustee will be treated for tax purposes. After we identified concerns to IRS that lack of guidance could lead to taxpayer errors resulting in over- or underpayment of taxes, the agency issued such guidance. The Madoff fraud affected the federal income tax liabilities of former customers in two primary ways, according to our review. First, customers likely paid federal income taxes on fictitious profits reported to them in each year they held their account. Second, they likely suffered theft of funds invested, which under tax law would be considered an investment theft loss. Typically, there are two ways to address these effects upon discovery of such fraud, according to IRS officials. Taxpayers can file amended returns, in which they remove fictitious profits from previously reported income. In addition, they can claim a theft loss deduction against income, to reflect principal amounts stolen and fictitious profits reflected on account statements that were not removed on any amended returns. Under the Internal Revenue Code, an investment theft occurs when a taxpayer loses property to theft in connection with a transaction entered into for profit.
Taxpayers can use amended returns or claim theft loss deductions, singly or in tandem, depending on their situation, according to IRS officials. But each of these approaches has limitations, according to the officials. For instance, taxpayers can generally file an amended return to claim a refund of taxes paid within 3 years of the original filing date, or 2 years from the date the tax was paid, whichever is later. In the case of the theft loss deduction, taxpayers cannot take the deduction to the extent they have been reimbursed for the loss, and if they have a claim for reimbursement, they cannot deduct their loss until the amount of recovery to be received can be “ascertained with reasonable certainty.” In the Madoff case, the Trustee is working to recover assets to benefit former customers, but how much he will ultimately recover, and by when, remains unknown. In addition to these typical remedies, IRS has also provided another option, referred to as a “safe harbor” approach. This allows taxpayers to deduct a percentage of lost principal, including all previously reported profits, in a single year—the year of a criminal charge against the perpetrator, or 2008 for the Madoff case. According to IRS, the purpose of the safe harbor is to ease the compliance burden for both the agency and taxpayers, avoiding what can be complicated questions on the size and timing of a theft loss deduction. Under the safe harbor approach, taxpayers can deduct 95 percent of their losses in the year of discovery (when the lead figure is criminally charged). The loss is calculated by adding principal invested plus profits (whether fictitious or real), less cash withdrawals and recoveries from SIPC or other sources. For taxpayers seeking recovery from third parties, such as through lawsuits, the figure is reduced to 75 percent. If the taxpayer follows the safe harbor requirements, IRS agrees not to challenge the taxpayer’s treatment of a qualified loss as a theft loss, and taxpayers waive their right to other remedies that might have been available. The safe harbor approach deems the reasonable-prospect-of-recovery condition of the standard theft loss deduction to be satisfied in the year of discovery. Thus, Madoff customers electing to use the safe harbor approach may be able to recognize their losses earlier than under the normal method for deducting a theft loss. However, if customers using the safe harbor approach later receive distributions of recovered assets from the Trustee that cause their claimed deduction to exceed their actual losses, they must report the excess amounts as income in the year received.
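The safe harbor arithmetic just described can be made concrete with a short sketch. This is a simplified illustration of the computation as characterized in this report, with invented figures and function names; it is not tax advice and not IRS-published logic.

```python
# Simplified sketch of the safe harbor calculation described above.
# Figures and names are illustrative assumptions; real situations vary.

def safe_harbor_deduction(principal, reported_profits, withdrawals,
                          recoveries, pursuing_third_parties=False):
    """Deductible loss in the year of discovery (2008 for Madoff).

    Loss base: principal invested, plus profits reported (fictitious or
    real), less cash withdrawn and recoveries from SIPC or other sources.
    Deduct 95% of the base, or 75% if also pursuing third-party recovery.
    """
    base = principal + reported_profits - withdrawals - recoveries
    rate = 0.75 if pursuing_third_parties else 0.95
    return rate * base

def recapture_income(loss_base, deducted_fraction, later_recovery):
    """Excess reported as income if later recoveries exceed the 5% (or
    25%) of the base that the safe harbor implicitly assumed."""
    assumed = (1 - deducted_fraction) * loss_base
    return max(0.0, later_recovery - assumed)

# $2M invested, $900K profits on statements, $400K withdrawn, $500K SIPC
# advance, no third-party suits: base = $2.0M, deduction = $1.9M.
print(safe_harbor_deduction(2_000_000, 900_000, 400_000, 500_000))
# If the Trustee later distributes $1.0M against that base, the amount
# above the assumed $100K recovery is reported as income: $900K.
print(recapture_income(2_000_000, 0.95, 1_000_000))
```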
IRS officials told us they could not say which method of claiming tax relief is best for taxpayers, because individual tax situations can vary widely. Tax practitioners to whom we spoke said the safe harbor is attractive for its certainty and ease of use, but some taxpayers may be better off using the traditional methods. However, one practitioner told us that anyone who can use the safe harbor should do so. According to IRS officials, the agency cannot determine the tax revenue loss to the U.S. Treasury that will result from Madoff customers seeking relief for their fraud-related losses. IRS cannot identify Madoff taxpayers, and even if it could, it does not collect the information necessary to conduct a post-Madoff analysis of the fraud’s impact on tax revenues. IRS officials told us they generally do not maintain statistics on any particular Ponzi scheme or identified investment fraud. They also told us they cannot identify which Madoff customers are using which tax relief method, further complicating any effort to assess the impact of the fraud on tax revenues. In any case, although IRS cannot determine the amount of any revenue loss, the Madoff fraud’s effect on tax collections could be reduced by various factors that can limit taxpayers’ ability to take full advantage of their losses. These factors generally affect the ability of taxpayers to claim an investment theft loss deduction, rather than the ability to file amended returns. According to IRS officials, SIPC executives, and tax practitioners, factors affecting the ability to make use of the theft loss deduction include the following (the first of these is illustrated in a sketch following this discussion):

Deductions need income: The theft loss deduction is a deduction against income, not a tax credit. Therefore, to use the deduction, taxpayers must have income to apply it against. If they do not have sufficient income, they cannot use all or part of the deduction. This could be a common situation, because many Madoff customers are older and without income. If taxpayers have insufficient income to make use of the deduction in a particular year, IRS rules allow theft loss deductions to be carried over to other years—generally, backwards for 3 years, and forward for 20. But taxpayers must still have income, and even if they do, it could take a number of years to fully apply their deductions against that income, meaning that benefits could be delayed. One tax practitioner told us that even with extended carryback and carryforward periods, he expects that many people—especially smaller investors—will not be able to use their deductions, for lack of income against which to apply them.

“Leakage”: There can be considerable “leakage” when using the carryforward and carryback options for the theft loss deduction—that is, loss of other deductions when taking the theft loss deduction. Individuals claiming the theft loss deduction might also have other personal deductions, such as interest, taxes, or charitable contributions. With application of the theft loss carryback or carryforward amounts, income is reduced, with the result that income against which to claim the personal deductions can be lost. With insufficient income against which to claim the personal deductions, they are lost as well, offsetting benefits of the theft loss deduction. This is because the personal deductions are not themselves subject to carryback or carryforward. Generalizing about the effects of leakage is difficult because such calculations are taxpayer-specific, but the effect can be substantial, according to one tax practitioner.

Rate differences: A taxpayer may have initially paid taxes on fictitious profits at a relatively high marginal rate, but later realize a theft loss deduction at a lower rate. This can mean the actual value of fraud-related tax relief received is less than the initial amounts of tax paid. For example, someone may have paid taxes at a 35 percent rate, but be subject to a 15 percent rate when claiming deductions, because, for example, they lost investment income or retired and their income has fallen. As a result, the deduction reduces their taxes by a smaller amount than the taxes they previously paid on the fictitious profits, when their income tax bracket was higher. Such a rate difference could be significant if a taxpayer uses the IRS safe harbor approach. A taxpayer may have received reported profits for a number of years, but be required, under the safe harbor approach, to deduct all losses in a single year.
Such a large deduction in one year could reduce the taxpayer’s marginal tax rate from a high rate to a low rate. For example, a taxpayer may have paid taxes at the 35 percent rate, but by taking a deduction for all losses in a single year, find that the rate at which the deduction is realized averages out significantly lower. Further, the manner in which the Alternative Minimum Tax is calculated could also cause customers to realize the benefit of their theft loss deductions at a rate lower than when they initially paid taxes.

Death: If Madoff customers have died, estate and trust taxation issues could prevent full utilization of tax relief arising from the Madoff losses. For example, losses can offset estate income, but any losses remaining may not transfer with the property in subsequent tax considerations, according to one tax practitioner.

In addition, other factors also stand to affect tax collections, either providing additional revenues or increasing revenue loss, according to our review. These factors include:

Future tax liability: Taxpayers using the safe harbor approach may owe additional taxes in the future. By allowing taxpayers to claim 95 percent (or 75 percent) of their losses, the approach assumes a 5 percent (or 25 percent) recovery of assets by the Trustee or in other recovery proceedings. According to IRS officials, if actual recoveries exceed those amounts, taxpayers must declare the excess as income and pay taxes on that income. Currently, the Trustee expects recoveries to be at least 50 percent, meaning losses taxpayers have claimed under the safe harbor could be overstated, triggering the future tax liability.

Other deductions: Investors can generally deduct expenses incurred in the production of income, IRS officials noted. That means that over the course of the Madoff fraud, customers would have been able to reduce taxable income based on any expenses the Madoff firm charged them.

Madoff insiders: The Trustee told us that payments received by Madoff insiders raise tax issues. In some cases, Madoff made loans to immediate family, other relatives, and close associates. He often forgave such loans later, but the forgiven amounts were not reported to IRS as income, the Trustee said. In other cases, some people received large cash payments from Madoff that were not reported as income. Additionally, according to the Trustee, some insiders periodically asked Madoff to produce gains or losses on their accounts, presumably in order to offset income from non-Madoff sources for tax purposes.

Timing: Even if the government surrenders tax revenue as Madoff customers realize tax relief, the U.S. government collected and had use of tax receipts for multiple years. Meanwhile, as discussed earlier, taxpayers may have difficulty making full use of available benefits today. Given the time value of money, and the difficulty of capitalizing on benefits, this is advantageous to the government, tax practitioners and others told us.
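The income-limitation mechanics behind the "Deductions need income" factor above can be sketched as follows, using hypothetical incomes and an invented loss amount; the sketch ignores leakage, the Alternative Minimum Tax, and other taxpayer-specific rules.

```python
# Sketch of consuming a theft loss deduction against income across the
# 3-year carryback and 20-year carryforward windows described above.
# Incomes and the loss amount are hypothetical.

def apply_theft_loss(loss, income_by_year, loss_year):
    """Apply the loss against income: back 3 years first, then forward 20."""
    order = [loss_year - k for k in (3, 2, 1)] + \
            [loss_year + k for k in range(0, 21)]
    applied = {}
    for year in order:
        if loss <= 0:
            break
        usable = min(loss, income_by_year.get(year, 0))
        if usable:
            applied[year] = usable
            loss -= usable
    return applied, loss  # per-year deductions used, and any stranded loss

incomes = {2005: 80_000, 2006: 90_000, 2007: 85_000,
           2008: 30_000, 2009: 20_000}  # little income after the fraud
used, stranded = apply_theft_loss(600_000, incomes, loss_year=2008)
print(used)      # {2005: 80000, 2006: 90000, 2007: 85000, 2008: 30000, 2009: 20000}
print(stranded)  # 295000 -- unusable for lack of income
```

In this illustration, a customer with modest income absorbs only about half of a $600,000 deduction even using the full carryback period, consistent with the practitioner's expectation that smaller investors may never use their deductions fully.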
While the use of NIM or FSM to determine net equity produces different outcomes for customers, both methods would likely reduce tax collections for the U.S. Treasury. Under NIM, net winners generally have their claims denied and are not eligible for reimbursement from the SIPC fund. Because SIPC payments reduce the amount that a taxpayer would claim as a loss, these customers would then likely have correspondingly larger theft loss deductions. In turn, those higher deductions could cause revenue losses for the Treasury that would not have been experienced if the Trustee had used FSM, which would have provided higher SIPC reimbursements. This can be seen in SIPC estimates for coverage under the two methods. Under NIM, SIPC estimates an outlay of $889 million for payment of SIPC advances to Madoff customers. If FSM had been used to value claims, SIPC executives estimate that SIPC reimbursements would have increased by an additional $1.2 billion, to about $2.1 billion. According to IRS officials, an increase in SIPC coverage amounts—or any other coverage of losses—will correspondingly lower theft loss deductions. Ultimately, though, the choice of either claims determination method creates the potential for loss in tax revenues, because both NIM and FSM would create deductions against income by parties affected by the Madoff liquidation. While using FSM might have lowered theft loss deductions, owing to the greater SIPC reimbursements, it also would have caused greater demand on the SIPC fund, according to SIPC executives. As a result, SIPC’s broker-dealer members would have had to pay additional amounts to keep the fund at the level targeted by the SIPC board, the executives said. These greater amounts would have come either as higher annual member assessments or as maintenance of SIPC’s recently increased assessment rate for a longer period, they said. At this new assessment rate, SIPC members are currently paying about $400 million into the fund annually. Under SIPA, the assessments are an ordinary business expense, SIPC executives told us. Thus, they are deductible as business expenses for tax purposes by member broker-dealers, which would have the effect of lowering members’ taxable income (or increasing losses). As a result, rather than the U.S. Treasury facing lower tax collections from Madoff customers due to use of NIM, it would experience lower revenues from broker-dealers under FSM. Although this tax trade-off effect is straightforward to describe, estimating how, if at all, tax revenues would change under one method compared to the other is not possible, due to the taxpayer-specific factors described earlier. The Trustee told us that the effect on Madoff customers’ tax liabilities was not a consideration in his determination of how to calculate investor net equity. There is no statutory support for any such consideration, he said, and even if there were, considering tax implications would have created a substantial burden. To consider any tax implications, the Trustee said, it would have been necessary to examine details of each account, which would have significantly increased the cost and amount of time to consider claims. Further, the Trustee told us he did not research or compile data on tax implications. However, he said he did provide information to IRS and the U.S. Department of Labor. As described earlier, a significant part of the Trustee’s efforts to recover assets for distribution to Madoff customers is his avoidance action, or clawback, litigation, in which he seeks to recover funds paid to certain customers. In general, if taxpayers, due to a clawback, return money previously paid to them, they are entitled to some reduction in tax liability as a result, IRS officials told us. However, application of the relevant law, which deals with issues such as timing and nature of income, can be very taxpayer-specific, they said.
IRS officials also told us initially that the agency did not have generally applicable guidance on the treatment of those payments. They said the agency had been seeking to formulate the right answer for dealing with clawbacks, but that it had otherwise provided only “factually specific guidance” on a case-by-case basis. In cases in which IRS has not issued such specific guidance, taxpayers must rely on the Internal Revenue Code, regulations, court cases, and relevant revenue rulings by the agency. Tax practitioners to whom we spoke noted uncertainties in determining how clawbacks should be treated for tax purposes, and said that this makes completing income tax returns challenging and could contribute to errors. IRS officials to whom we spoke said they had not issued guidance on this topic because their general approach was to focus initially on issuing guidance in areas with more widespread effect, such as the safe harbor procedures. Part of IRS’s mission is to help taxpayers understand and meet their tax responsibilities, and more than a thousand Madoff account holders and others face the possibility of having to return funds to the Trustee as a result of clawbacks. Future financial fraud cases could involve clawbacks in their resolutions as well. Without additional guidance to taxpayers for such situations, the potential for taxpayer error is increased, which could lead to either over- or underpayment of taxes to the U.S. Treasury. A recent audit by the Department of the Treasury Inspector General for Tax Administration illustrated tax compliance issues surrounding investment theft losses. After reviewing what it said was a statistically valid sample of 140 returns claiming investment theft loss deductions for 2008, the inspector general’s audit estimated that 82 percent of 2,177 tax returns may have erroneously claimed deductions totaling more than $697 million, resulting in revenue losses of approximately $41 million. Three percent of the tax returns the inspector general sampled included taxpayers who claimed more than $215,000 in investment theft losses resulting from the Madoff scheme. This audit did not specifically investigate treatment of clawbacks. Given the number of taxpayers that could be affected by clawbacks, in the Madoff case or others, the lack of guidance could affect the accuracy of many tax returns and potentially involve billions of dollars in returned funds. We raised this concern with IRS and recommended in a draft report that the Commissioner of Internal Revenue ensure that the agency provide taxpayer guidance on a timely basis on the proper tax treatment of funds returned through avoidance actions or settlements arising from cases of investment fraud. Subsequently, on September 5, 2012, IRS issued such guidance, in the form of “frequently asked questions” on how to treat clawbacks, posted to the agency’s website. We provided a draft of this report to SEC, SIPC, IRS, and the Trustee for their review and comment, and each provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the SEC Chairman, the SIPC President, the Commissioner of Internal Revenue, and the Trustee for the Madoff liquidation. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or clowersa@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report discusses (1) the extent to which account activity varied by type of customer of the failed Bernard L. Madoff Investment Securities, LLC firm; (2) the nature of claims filed, and rejected or approved, with the Trustee for reimbursement of losses following the firm’s failure due to fraud; (3) litigation and settlement activity the Trustee has pursued during the subsequent liquidation of the firm, in seeking to recover assets for distribution to customers; and (4) the effect of the Madoff fraud on customers’ federal income tax liabilities, including the effect on amounts that would have been due if investor losses had been based on customers’ reported final statement holdings. We excluded from our analysis transactions we identified as instances of the Madoff firm debiting foreign account holders for purported U.S. federal income tax withholding and paying those amounts to the Internal Revenue Service (IRS). These amounts, totaling about $330 million, were later credited to the accounts through a December 2011 settlement between the Trustee and IRS. Our analysis covered accounts identified with at least one transaction. We could not classify 30 accounts because the account holder names did not provide sufficient information to make a determination. These 30 accounts had a total of $82 million in deposits and $117 million in withdrawals, each of which is less than 0.1 percent of all deposits and withdrawals. Using the account classifications and net investment status, we analyzed subgroups of the overall customer population to examine potential differences in account activity. Specifically, we reviewed deposits, withdrawals, and the timing of these transactions. In reviewing withdrawals, we also focused specifically on withdrawal of principal amounts, as distinct from fictitious profits. This is because, under his authority to sue for return of assets, the Trustee can, among other things, presumptively seek to recover principal withdrawals made during the 90-day period immediately before the Madoff firm failed. In our analysis, we relied on our own examination of data provided by the Trustee. The Trustee reviewed our results, noting some small differences between our review and his work, but generally confirming our findings. Upon further review of our work, we determined that the variances arose from differing methodologies and the particular algorithms used to conduct the analysis. For example, we excluded from our analysis accounts that had no transactions, and we counted multiple allowed claims for one account as a single allowed claim. Further, in determining whether accounts were net winners or net losers, we summed all applicable deposits and withdrawals to determine net positions. In a small number of cases, this produced very small negative or positive account balances, which we considered to be different than zero. We also interviewed the Trustee and members of his law firm on customer-type and transaction-related issues. To assess the reliability of the account and transaction data provided by the Trustee, we interviewed members of the law firm serving as the Trustee’s counsel and a contractor that manages the data, reviewed reports of the forensic accountants that assembled the data from records of the Madoff firm, and examined the data for invalid or missing data points. We concluded the data were sufficiently reliable for our purposes.
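The following minimal Python sketch illustrates the net winner/net loser computation described above; the account identifiers and transaction amounts are hypothetical.

```python
# Sketch of the net position classification described above, using
# made-up transaction records keyed by account identifier.

accounts = {
    "A-001": {"deposits": [500_000, 250_000], "withdrawals": [900_000]},
    "A-002": {"deposits": [1_000_000], "withdrawals": [400_000]},
    "A-003": {"deposits": [], "withdrawals": []},  # no transactions
}

def net_position(acct):
    """Sum all applicable deposits and withdrawals to determine net position."""
    return sum(acct["deposits"]) - sum(acct["withdrawals"])

def classify(acct):
    net = net_position(acct)
    # Very small nonzero balances are still treated as different from zero,
    # consistent with the approach described above.
    if net < 0:
        return "net winner"  # withdrew more than deposited
    if net > 0:
        return "net loser"   # deposited more than withdrawn
    return "zero"

for acct_id, acct in accounts.items():
    if not acct["deposits"] and not acct["withdrawals"]:
        continue  # accounts with no transactions were excluded
    print(acct_id, classify(acct), net_position(acct))
# A-001 net winner -150000
# A-002 net loser 600000
```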
To examine the nature of claims filed and then rejected or approved following the Madoff firm’s failure, we obtained claims data from the Trustee. We tallied claims received and their dispositions, examining both the total claims population and claims outcomes. We matched claims information with account information, as described above, to examine claim information by customer type and net investment status. As described above, we relied on our own examination of data provided by the Trustee. We also interviewed the Trustee and members of his law firm on claims-related issues. To assess the reliability of claims data provided by the Trustee, we interviewed members of the law firm serving as the Trustee’s counsel and a contractor that manages the data and examined the data for invalid or missing data points. We concluded the data were sufficiently reliable for our purposes. To examine litigation and settlement activity the Trustee has pursued during liquidation of the Madoff firm, we obtained and analyzed court documents covering a range of legal activity. These included lawsuits against net winners that are not alleged to have had knowledge of the fraud or been in a position to know about it—referred to by the Trustee as “good faith” defendants; lawsuits against individuals and entities the Trustee argues knew or should have known about the fraud—referred to by the Trustee as “bad faith” defendants; lawsuits against investment vehicles that collected funds from investors and invested them with the Madoff firm; and agreements the Trustee has reached to settle a number of the actions he has filed as part of his asset recovery efforts. We selected a random sample of 50 good faith actions for examination from among more than 1,000 cases filed; we reviewed 29 publicly available bad faith complaints from among 30 such actions filed, analyzing the most frequently cited bases for the Trustee’s allegations of bad faith; we examined the largest complaints among 27 actions filed against feeder funds; and we reviewed the largest settlements the Trustee had reached during his litigation efforts. In addition, we examined Trustee records associated with the “hardship program,” in which the Trustee expedited claims processing or declined to pursue litigation for customers that could demonstrate financial distress. We interviewed the Trustee and members of the law firm serving as the Trustee’s counsel on litigation and settlement issues, and we conducted legal research on the Trustee’s legal basis for pursuing asset recovery actions. We also interviewed Securities and Exchange Commission (SEC) officials and executives of the Securities Investor Protection Corp. (SIPC) for their views on the conduct of Madoff-related litigation. To examine the effect of the Madoff fraud on customers’ federal income tax liabilities, including potential differences based on how investor losses were calculated, we examined relevant portions of the Internal Revenue Code and Internal Revenue Service (IRS) revenue rulings and revenue procedures. In addition, we reviewed a September 2011 audit by the Department of the Treasury Inspector General for Tax Administration. We also interviewed SEC and IRS officials, SIPC executives, and four individuals with income tax expertise, including a law professor and three tax advisors whom we selected based on their tax experience or publications related to tax issues.
We conducted this performance audit from March to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform our audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Avoidance actions, often called “clawbacks,” enable a bankruptcy trustee to set aside certain transfers of property made by the debtor within specified periods preceding a bankruptcy filing, in order to recover the property for the benefit of the estate and its creditors. These actions are authorized by the federal Bankruptcy Code (title 11, United States Code) as well as state laws, which section 544(b) of the Bankruptcy Code makes available to bankruptcy trustees. Avoidance powers apply fully to trustees conducting liquidations under the Securities Investor Protection Act (SIPA). As discussed below, two types of avoidance actions authorized by the Bankruptcy Code are preferential transfers (or preferences) and fraudulent transfers. Fraudulent transfers are further subdivided into two types: actual fraud and constructive fraud. Individual lawsuits brought by trustees can (and in the Madoff liquidation, often do) include preference avoidance counts, avoidance counts alleging both constructive and actual fraud under the Bankruptcy Code, and avoidance counts arising under state law. Indeed, the bankruptcy court has affirmed the Madoff Trustee’s right to bring a wide range of avoidance claims under both the Bankruptcy Code and New York law. A trustee’s avoidance powers are especially strong when the liquidation involves a Ponzi scheme. This is because courts have developed a series of specific interpretive rules, including the “Ponzi scheme presumption,” which work to the advantage of the trustee and to the disadvantage of the recipients of money paid out by Ponzi schemers—particularly when it comes to recovering payments that represent fictitious profits. Avoidance actions based on preferential transfers, which are governed by section 547 of the Bankruptcy Code, enable the trustee to avoid and recover payments the debtor made to creditors within 90 days preceding the bankruptcy filing, or up to 1 year prior to filing in the case of transfers to “insiders.” Under section 547(b) of the Code, a trustee may avoid any transfer of a property interest of the debtor to or for the benefit of a creditor made within the preference period “for or on account of an antecedent debt” while the debtor was insolvent, if the transfer enabled the creditor to receive more than the creditor would have received without it under the bankruptcy distribution. The U.S. Supreme Court has observed that preference avoidance is a mechanism that “prevents the debtor from favoring one creditor over others by transferring property shortly before filing for bankruptcy.” There are several statutory exceptions to avoidance under section 547(c). For example, preference avoidance does not extend to payments of a debt incurred by the debtor “in the ordinary course of business.” However, the courts consistently hold that the concept of “ordinary course of business” has no application in a Ponzi scheme setting and, therefore, cannot provide a defense to preference avoidance. Furthermore, preference avoidance does not take account of the conduct of the recipient.
Thus, it applies to all “transferees,” or recipients of transfers, including innocent victims of Ponzi schemes. For the above reasons, a trustee usually has no difficulty avoiding as preferential transfers Ponzi scheme payments made during the preference period. At the same time, the prevailing view is that preferential transfer actions can only reach payments that represent customer investments of principal in a Ponzi scheme—not fictitious profits. In the leading precedent on this subject, the court reasoned that since a Ponzi investor does not have a valid claim to fictitious profits, payouts based on them are not made on account of an “antecedent debt” as required for preference avoidance. This limitation has little if any practical effect, however, since fictitious profits paid during the preference period are recoverable through the fraudulent transfer avoidance actions described below. Avoidance actions based on fraudulent transfers are governed by section 548 of the Bankruptcy Code. Section 548(a)(1) provides that a trustee may avoid any transfer of an interest of the debtor in property made within 2 years before the filing date (A) if the debtor made the transfer “with actual intent to hinder, delay, or defraud any entity” to which the debtor was or became indebted; or (B) if the debtor received “less than equivalent value in exchange for” the transfer and was insolvent at the time of the transfer, became insolvent as a result of the transfer, was engaged in business for which the debtor’s remaining property provided unreasonably small capital, or met one of several other conditions specified in section 548(a)(1)(B). Section 548 thus provides for two types of avoidance actions. An action based on section 548(a)(1)(A) is one for actual fraud. As the language indicates, it applies to transfers made by a debtor with actual intent to defraud. By contrast, an action based on section 548(a)(1)(B) is one for constructive fraud and does not require fraudulent intent on the part of the debtor. Instead, the key consideration is whether the transfer was made for less than equivalent value while the debtor was insolvent. As a practical matter, the distinction between actual and constructive fraud under section 548 has little significance in the unique context of a Ponzi scheme, where the trustee is seeking to recover fictitious profits. By virtue of the Ponzi scheme presumption, Ponzi scheme payouts are generally considered to be actually fraudulent. Case law has held that transfers beyond the principal investment lack value, making those transfers recoverable under the constructive fraud provisions. Thus, transfers of fictitious profits are subject to avoidance as both actual and constructive fraudulent transfers. The concept of fraud under section 548(a) focuses on the debtor-transferor rather than the transferees. Thus, all recipients of Ponzi scheme payouts, innocent or otherwise, are potentially subject to avoidance actions based on both actual and constructive fraud. However, the recipients’ status is important in determining the nature and extent of their liability. Section 548(c) of the Bankruptcy Code provides that the transferee in a fraudulent transfer avoidance action may retain any interest transferred upon showing that the interest was taken “for value and in good faith.” Principal invested in a Ponzi scheme is considered value with respect to a good faith transferee in this context, while fictitious profits are not.
In general, therefore, the liability of good faith recipients in a Ponzi scheme-related fraudulent transfer action will be limited to fictitious profits paid to “net winners”—that is, investors who withdrew more than they invested in the scheme. Recipients who cannot establish their good faith are liable for the return of their principal investment as well as any fictitious profits paid out during the 2-year avoidance period. “ . . . If the investor knew or should have known that the debtor’s investment scheme was too good to be true, then the investor fails to carry his burden of proving that he accepted sums from the debtor in good faith, and the trustee is entitled to recover all amounts the investor received from the debtor.” McDermott, note 3, 176-177 (footnotes omitted). investor suspicious. Under this approach, an investor who fails to inquire into suspicious circumstances, or even one who inquires but conducts an inadequate inquiry, cannot establish a good faith defense. By contrast, other courts have applied a so-called “subjective” test, whereby investors are found to have acted in good faith so long as they lacked actual knowledge of the fraud or did not “turn a blind eye” in the face of obvious signs of fraud. One recent U.S. District Court opinion in a Madoff-related case endorsed the subjective test of good faith. As noted, a trustee, including a SIPA liquidator, can bring avoidance actions under state law as well as the federal Bankruptcy Code. Since the Madoff firm was formed as a New York limited liability company with its principal place of business there, the Madoff Trustee’s avoidance actions include claims under New York’s fraudulent conveyance law, codified in sections 270 through 281 of the New York Debtor and Creditor Law (DCL) (McKinney 2012). “Every conveyance made and every obligation incurred with actual intent, as distinguished from intent presumed in law, to hinder, delay, or defraud either present or future creditors, is fraudulent as to both present and future creditors.” Additionally, DCL section 276-a provides for the award of attorney fees in a successful avoidance action based on actual fraud where fraud on the part of the transferee is demonstrated. The main difference between avoidance actions under the federal Bankruptcy Code and the DCL relates to the reach-back period. In contrast to the 2-year Bankruptcy Code reach-back, DCL constructive fraud avoidance actions are subject to the general 6-year statute of limitations in section 213(1) of the New York Civil Practice Law and Rules (CPLR) (McKinney 2012). Thus, these actions can reach transfers made up to 6 years before the filing date. Under CPLR sections 203(g) and 213(8), the reach-back period for actions based on actual fraud is 6 years before the filing date or 2 years from the time the fraud was discovered or could have been discovered with reasonable diligence, whichever date is earlier. The Madoff Trustee has used this authority to seek avoidance and recovery of fraudulent transfers occurring more than 6 years before the filing date where the recipients received the transfers in bad faith. Claims by recipients of avoidable payments. Section 502(d) of the Bankruptcy Code generally requires disallowance of any claim against the bankruptcy estate by the recipient of an avoidable transfer, unless the recipient has paid over the avoidable amount. 
The Madoff Trustee has invoked this authority in seeking to temporarily disallow SIPA claims and other claims brought by some defendants in avoidance actions. Subsequent transferees. Under section 550 of the Bankruptcy Code, the trustee can seek recovery of the proceeds of an avoided transaction from not just the initial transferee but also from “subsequent transferees”—third parties who obtained funds from those receiving funds directly from the bankrupt entity. Thus, for example, the Madoff Trustee can maintain avoidance actions against Madoff investors who received payouts through feeder funds or other intermediaries. However, there are limits to this authority. For example, a trustee cannot recover from a subsequent transferee who can show receipt of the proceeds for value, in good faith, and without knowledge that the transfer was avoidable. Also, an action under section 550 generally must be commenced within 1 year after the avoidance of the transfer for which recovery is sought. Like the Bankruptcy Code provisions, sections 278 and 279 of the DCL authorize avoidance and recovery against subsequent transferees, except purchasers for fair consideration without knowledge of the fraud. Stockbroker safe harbor. One important area of uncertainty concerning securities-related Ponzi scheme avoidance actions is the applicability of the so-called “stockbroker safe harbor” or “stockbroker defense” provided in section 546(e) of the Bankruptcy Code. This provision exempts certain securities transactions from avoidance actions except those alleging actual fraud. Thus far, Madoff-related judicial decisions have reached different conclusions regarding the applicability of section 546(e). The bankruptcy court held that section 546(e) did not apply to the Madoff Ponzi scheme, while a district court held that it did. A number of lawsuits the Madoff Trustee has filed against bad faith transferees contain common law claims in addition to avoidance counts under the Bankruptcy Code and the New York DCL. These claims include, for example, counts for conversion, unjust enrichment, and money had and received. A trustee has the authority and standing to sue to enforce claims that the debtor had prior to bankruptcy that represent property of the estate, including common law claims. However, two recent U.S. District Court decisions dealing with the Madoff liquidation emphasized limits on the Madoff Trustee’s ability to pursue common law claims. In both cases, the courts concluded that the Trustee could not pursue certain common law claims that, in the courts’ view, more appropriately belonged to creditors rather than the debtor and the estate. Furthermore, with respect to debtor claims, the courts noted that Madoff and his firm were wrongdoers, and that the Trustee stands in their shoes for purposes of pursuing common law claims. Therefore, the courts concluded, such claims would be subject to the legal principle that bars resolution of disputes between one wrongdoer and another. We obtained and analyzed transaction data for customers of the failed Bernard L. Madoff Investment Securities, LLC firm. Table 10 shows account information and status for the 10 largest Madoff accounts by transaction volume. Table 11 shows account information and status for the 10 largest Madoff accounts by total withdrawals. Table 12 shows account information and status for the 10 largest Madoff accounts by “net winnings,” that is, the amount by which an account holder withdrew more than was invested in the Madoff firm.
In hundreds of cases, the Madoff Trustee has reached settlements with former customers and others, either before or after filing clawback actions, and these agreements have produced recovery of a significant amount of assets. This appendix provides details of these settlements, including amounts sought and obtained, the extent to which settlements have been paid, strategies behind settlements, and key provisions. As of April 2012, the Trustee had entered into 441 settlement agreements in which the opposing parties agreed to return about $8.4 billion—an amount equal to about 49 percent of the approximately $17.3 billion in principal investments lost by customers who filed claims. Based on a bankruptcy court review threshold, we examined all but one of the settlement agreements worth at least $20 million. Table 13 provides a summary of the 13 settlements we identified. These settlements account for more than 97 percent of all settlement amounts. They include settlements with family estates, such as Picower, Shapiro, and Levy, as well as with feeder funds, in the case of UBP, Optimal, Fairfield, Mount Capital, and Trotanoy Investment. The group also includes settlements with charitable foundations and the federal government. As the table shows, the amounts collected vary widely by case. Among these settlements, the Optimal case has played an influential role, the Trustee told us. Optimal was the sponsor of two feeder funds, and it was an early case that presented potentially risky legal issues, such as jurisdiction and foreign entities—it was not clear that the U.S. Bankruptcy Code or SIPA applied outside the United States, the Trustee told us. The Trustee secured an agreement requiring Optimal to return 85 percent of the amounts received during the 90-day preference period. Included was a term providing that if the Trustee settled with another party on more advantageous terms, Optimal would also receive those terms. The 85 percent figure effectively became the benchmark for settlements of similar claims, the Trustee told us. In turn, the Trustee set a complementary benchmark for good faith claims at 95 percent, in consideration of the comparative ease of pursuing such cases relative to bad faith actions. The Trustee told us that in the Optimal settlement, he considered it important to secure the agreement in order to build the customer fund. However, looking back, he said the “most favored nation” provision has been an issue. Optimal recently attempted to invoke the clause following the Trustee’s recent settlement with Hadassah. Due to the complexity of these cases, it is difficult to discern what constitutes a settlement to which the clause would apply, the Trustee said. Based on our examination of the largest settlement agreements, table 14 provides a summary of key settlement provisions. We also found that the Trustee’s settlement agreements go beyond simply obtaining cash payments; allowance of customer claims and the granting of SIPC advances have also been key components of settlements in feeder fund cases. For example, the Tremont settlement, in addition to its $1.025 billion settlement amount, also included allowing more than $3 billion in customer claims and granting SIPC advances for eligible accounts. With the exception of the Katz-Wilpon case, all seven settlements that included allowed customer claims as part of the agreement were feeder fund cases. Because these feeder funds were net losers, the settlement agreements granted SIPC advances to each of the funds that directly held Madoff accounts.
In the Katz-Wilpon agreement, the allowed customer claim was assigned to the Trustee, so that customer distributions will fund the $160 million settlement payment. For both the Picower and Levy settlements, the Trustee told us he believed he obtained the largest sums possible. The $5 billion settlement in the Picower case represents 100 percent repayment of funds received by the Picower estate and related investors named in the complaint. Similarly, the Levy settlement, reached before litigation was filed, represents nearly 100 percent of the amounts the Levy account holders withdrew during the 6 years prior to the Madoff firm’s collapse. Additionally, both Picower and Levy withdrew their customer claims as part of the settlements, the Trustee said. In addition to the contact named above, Cody J. Goebel, Assistant Director; Donald W. Brown; Daniel S. Kaneshiro; Jonathan M. Kucskar; David J. Lin; Marc W. Molino; Barbara M. Roesmann; Christopher H. Schmitt; Andrew J. Stephens; Ethan S. Wozniak; and Henry R. Wray made major contributions to this report.
After the collapse of Bernard L. Madoff Investment Securities, LLC—a broker-dealer and investment advisory firm with thousands of individual and institutional clients—the Securities Investor Protection Corporation (SIPC), which oversees a fund providing up to $500,000 of protection to qualifying individual customers of failed securities firms, selected a trustee to liquidate the Madoff firm and recover assets for its customers. In March 2012, GAO issued GAO-12-414, which examined selection of the Trustee, his method for determining customer claims, and expenses of the liquidation, among other things. This report discusses (1) the extent to which account activity varied by type of Madoff customer, (2) the nature of claims filed, and rejected or approved, with the Trustee for reimbursement of losses, (3) litigation and settlement activity the Trustee has pursued in seeking to recover assets for distribution to customers, and (4) the effect of the fraud on customers’ federal income tax liabilities. GAO reviewed transaction and claims data from the Trustee, lawsuits filed by the Trustee, and IRS rules and guidance, and interviewed the Trustee, private sector tax experts, and officials from IRS, SIPC, and the Securities and Exchange Commission. GAO’s analysis of Madoff account data shows that more than three-fourths of the firm’s customers were individuals and families (individuals). The remaining accounts were held by institutions, such as pension funds and charities. A higher proportion of accounts held by individuals (60 percent) were “net winners” based on their net equity position—meaning they had withdrawn more from their accounts than they had deposited—compared to accounts held by institutions (50 percent). Correspondingly, 40 percent of institutional accounts were “net losers” that had deposited more into their accounts than they had withdrawn, compared to 29 percent of individuals’ accounts that were net losers. However, individual and institutional accounts had similar deposit and withdrawal activity from 1981 through 2008, including increased withdrawals immediately before the firm’s failure in December 2008. GAO’s analysis shows that the Trustee’s decisions to accept or reject claims were similar for individual and institutional account holders. Of the more than 16,000 claims, about 66 percent were denied because the customers were not direct account holders of the Madoff firm, but instead had invested in funds or other vehicles that held accounts directly with the firm. For the remaining claimants, who were directly invested, the Trustee generally used the customers’ net investment positions—that is, whether they were net winners or net losers—to determine claims. In examining claims decisions by customer type, GAO found the Trustee denied claims filed by individuals and institutions determined to be net winners in similar proportions. Similarly, most claims filed by individuals or institutions determined to be net losers were allowed. The Trustee has been pursuing litigation to recover, or “claw back,” assets from net winner customers and others that can be used to reimburse customers that did not withdraw all of their principal investments. For those customers that withdrew fictitious profits—net winners—the Trustee has been pursuing more than 1,000 lawsuits to recover funds, as allowed under federal bankruptcy law and state law.
In about 60 suits, the Trustee has sought more than fictitious profits, including principal or other funds received, arguing that the parties knew or should have known of the fraud. Thus far, the Trustee said he has recovered about $9.1 billion of the $17.3 billion in principal investments lost by customers who filed claims, including $8.4 billion from settlement agreements. Because the Madoff fraud affects customers’ taxable income, it also affects tax collections by the Department of the Treasury. Under Internal Revenue Service (IRS) rules, Madoff customers can deduct lost principal and fictitious profits on which they paid taxes while holding their accounts. However, IRS does not maintain statistics on specific frauds or their impacts on tax collections, and the tax impact may be reduced because some taxpayers may not be able to fully use this tax relief, such as those that lack other income that can be offset by these deductions. Tax experts expressed concerns about the lack of clarity over how payments stemming from fraud-related avoidance actions filed by the Trustee will be treated for tax purposes. In response to a recommendation in a draft report that IRS provide guidance to help limit taxpayer errors resulting in over- or underpayment of taxes, the agency issued such guidance on September 5, 2012, in the form of “frequently asked questions” posted to its website.
Federal courthouses vary in size and scope. While small- to medium-sized courthouses typically house one to five district court judges, in several large metropolitan areas 15 or more district judges are located in a single courthouse. Courthouses may also include space for appellate, bankruptcy, and magistrate judges, as well as other tenants. There are 94 federal judicial districts—at least 1 for each state—organized into 12 regional circuits. The Administrative Office of the U.S. Courts is an agency within the judicial branch and serves as the central support entity for federal courts under the supervision of the Judicial Conference. The Judicial Conference of the United States, which serves as the judiciary’s principal policy-making body, periodically assesses the need for additional judgeships for the nation’s appellate, district, and bankruptcy courts and recommends additional judgeships to Congress, specifying the circuit or district for which the additional judgeship is requested. GSA and the judiciary plan new federal courthouses based on the judiciary’s estimated 10-year space requirements, which are based on projections of each location’s weighted filings. The judiciary then uses this information to determine how many judges to plan for. Except for appeals court judges, who sit on panels of three or more, the judiciary requested one courtroom per estimated judge for courthouses built from 2000 through 2009, although it occasionally planned for senior judges to share courtrooms. The U.S. Courts Design Guide (Design Guide) specifies the judiciary’s space and design standards for court-related elements of courthouse construction. In 1993, the judiciary also developed a space planning program called AnyCourt to determine the amount of court-related space the court will request for a new courthouse based on Design Guide standards and estimated staffing levels. For courthouses that are selected for construction, GSA typically submits two detailed project descriptions, or prospectuses, for congressional authorization: one for site and design and the other for construction. These prospectuses outline the scope, size, and estimated costs of the project at each of the two project phases, and typically request authorization and funding to purchase the site and design the building in the site and design prospectus—and to construct the courthouse in the construction prospectus. Typically, the total gross square footage of the courthouse depicted in the construction prospectus or fact sheet is based on factors that include the judiciary’s projected need for space, developed from 10-year judge estimates, and the gross square footage reserved for building common and other space, such as public lobbies and hallways, atriums, elevators, and mechanical rooms. The amount of gross square footage estimated for this space is based on GSA’s specification that a courthouse should be 67 percent efficient, meaning that 67 percent of the total gross square footage, excluding parking, should consist of tenant space (space assigned to the courts and other tenants) and the rest should be building common and other space. Congressional committees authorize and Congress appropriates funds for courthouse projects, often at both the design and construction phases. Congressional authorizations of courthouse projects typically include the gross square footage of the planned courthouse as described in the prospectus and the funding requested.
After funds have been appropriated, GSA selects private-sector firms for the design and construction work through a competitive procurement process. GSA also manages the construction contract and oversees the work of the construction contractor. After courthouses are occupied, GSA charges each tenant agency, including the judiciary, rent for the space it occupies and for its respective share of common areas, including mechanical spaces. GSA considers some space in buildings, such as vertical penetrations, including the upper floors of atriums, non-rentable space. In fiscal year 2009, the judiciary’s rent payments totaled over $970 million. The judiciary has sought to reduce the payments through requests for rent exemptions from GSA and Congress and through internal policy changes, such as annually capping rent growth and validating rental rates. The 33 federal courthouses completed since 2000 include 3.56 million square feet of extra space—28 percent of the total 12.76 million square feet constructed. The extra square footage consists of space that was constructed above the congressionally authorized size, space resulting from overestimates of the number of judges the courthouses would have, and space attributable to the absence of planning for courtroom sharing among judges. Overall, this space represents about 9 average-sized courthouses. The estimated cost to construct this extra space, when adjusted to 2010 dollars, is $835 million, and the annual cost to rent, operate, and maintain it is $51 million (see fig. 1). More specifically, the extra space and its causes are as follows: 1.7 million square feet caused by construction in excess of congressionally authorized size; 887,000 extra square feet caused by the judiciary overestimating the number of judges the courthouses would have in 10 years; and 946,000 extra square feet caused by district and magistrate judges not sharing courtrooms. Thirty-two of the 33 courthouses include extra space attributable to at least one of these three causes, and 19 have extra space attributable to all three causes. In addition to the one-time construction cost increase, the extra square footage in these 32 courthouses causes higher annual operations and maintenance costs, which are largely passed on to the judiciary and other tenants as rent. According to our analysis of the judiciary’s rent payments to GSA for these courthouses at fiscal year 2009 rental rates, the extra courtrooms and other judiciary space increase the judiciary’s annual rent payments by $40 million. In addition, our analysis indicates that other extra space cost $11 million in fiscal year 2009 to operate and maintain. Typically, operations and maintenance costs represent from 60 to 85 percent of the costs of a facility over its lifetime, while design and construction costs represent about 5 to 10 percent of these costs. Therefore, the ongoing operations and maintenance costs for the extra square footage are likely to total considerably more in the long run than the construction costs for this extra square footage. Twenty-seven of the 33 federal courthouses constructed since 2000 exceed their congressionally authorized size, and 15 of the 33 courthouses exceed their congressionally authorized size by 10 percent or more. For example, the O’Connor Courthouse in Phoenix was congressionally authorized at 555,810 gross square feet but is 831,604 gross square feet, an increase of 50 percent. As shown in figure 2, altogether these 27 courthouses have about 1.7 million more square feet than authorized.
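As a rough consistency check on the figures above, the following Python snippet totals the three components of extra space and computes their share of all space constructed; the component figures are rounded in this report, so their sum falls slightly below the 3.56 million square foot total.

```python
# Rough check of the extra-space figures cited above (square feet).
extra_by_cause = {
    "built above congressionally authorized size": 1_700_000,
    "judge counts overestimated": 887_000,
    "no courtroom sharing planned": 946_000,
}
total_extra = 3_560_000      # reported total extra space
total_built = 12_760_000     # total space constructed since 2000

print(f"{sum(extra_by_cause.values()):,}")     # 3,533,000 -- near the reported
                                               # total; components are rounded
print(round(100 * total_extra / total_built))  # 28 (percent of space built)
```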
On the other hand, as shown in figure 3, 6 of the 33 courthouses are smaller than congressionally authorized. Twelve of the 15 courthouses that exceed the congressionally authorized gross square footage by 10 percent or more also had total project costs that exceeded the total project cost estimate provided to congressional authorizing committees. The total project costs for 8 of these 12 courthouses increased by between 1 and 9 percent over the cost estimate provided to congressional authorizing committees at the construction phase, while the total project costs for the other 4 increased by between 10 and 21 percent over that estimate. While there is a statutory requirement that GSA obtain advance approval from the Committees on Appropriations if the expenditures for a project exceed the amount included in an approved prospectus by more than 10 percent, there is no statutory requirement for GSA to notify congressional authorizing or appropriations committees if the size exceeds the congressionally authorized square footage. While GSA sought approval from the appropriations committees for the cost increases incurred for the 4 courthouses whose size and costs increased by about 10 percent or more, GSA did not explain to these committees that the courthouses were larger than authorized and therefore did not attribute any of the cost increase to this difference. For example, the total project cost of the Coyle U.S. Courthouse in Fresno, California (about $133 million), was about $13 million over the estimate provided to congressional authorizing committees before construction (an increase of 11 percent), while the courthouse is about 16 percent larger than its authorized gross square footage. In requesting approval from the appropriations committees for additional funds for the Coyle U.S. Courthouse, GSA stated that, among other things, additional funds were needed for fireproofing and electrical and sewer line revisions—but did not mention that the courthouse was 16 percent larger than authorized. Because the construction costs of a building increase when its gross square footage increases, the cost overruns for this courthouse would have been smaller or might have been eliminated if GSA had built the courthouse to meet the authorized square footage. All seven courthouses we examined as case studies had increases in size made up, at least in part, of increases in building common and other space. Five of the seven courthouses also had increases in tenant space. In all seven of the case study courthouses, the increases in building common and other space were proportionally larger than the increases in tenant space, leading to a lower efficiency than GSA’s target of 67 percent. According to GSA officials, a building’s efficiency is important because, as it declines, less of the building’s space directly contributes to the tenants’ mission-related activities. In addition, for a given amount of tenant space, meeting the efficiency target helps control a courthouse’s gross square footage and therefore its costs. See table 1. GSA lacked sufficient control activities to ensure that the 33 courthouses were constructed within the congressionally authorized gross square footage, initially because it had not established a consistent policy for how to measure gross square footage.
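The efficiency measure just described can be illustrated with a short Python sketch; GSA’s 67 percent target is as stated above, while the tenant-space figure and the 56 percent comparison value are hypothetical.

```python
# Sketch of GSA's building efficiency measure: the share of gross square
# footage (excluding parking) that is tenant space. Figures are hypothetical.

TARGET_EFFICIENCY = 0.67  # GSA's target: 67 percent tenant space

def efficiency(tenant_sqft, gross_sqft_excluding_parking):
    return tenant_sqft / gross_sqft_excluding_parking

def gross_needed(tenant_sqft, eff=TARGET_EFFICIENCY):
    """Gross square footage implied by a tenant-space requirement at a given efficiency."""
    return tenant_sqft / eff

tenant_space = 300_000  # hypothetical tenant requirement, in square feet

at_target = gross_needed(tenant_space)        # ~447,761 sq ft at 67 percent
at_56_pct = gross_needed(tenant_space, 0.56)  # ~535,714 sq ft at 56 percent

# The gap is gross square footage that adds construction, rent, and
# operations costs without adding any tenant space.
print(round(at_56_pct - at_target))  # ~87,953 extra square feet
```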
GSA established a policy for measuring gross square footage by 2000, but has not ensured that this space measurement policy was understood and followed. Moreover, GSA has not demonstrated it is enforcing this policy, because all 6 courthouses completed since 2007 exceed their congressionally authorized size. According to GSA officials, the agency did not focus on ensuring that the authorized gross square footage was met in the design and construction of courthouses until 2007, even though controlling the gross square footage of a building is important to controlling its construction costs. All seven of the courthouses we examined in our case studies had increases in building common and other space—such as mechanical spaces and atriums—as compared with the square footage planned for these spaces within the congressionally authorized gross square footage. The percent increases over the planned space ranged from 19 percent to 102 percent. According to a GSA official, at times, courthouses were designed to meet various design goals without an attempt to limit the size of the building common or other space to the square footage allotted in the plans provided to congressional authorizing committees—and these spaces may have become larger to serve a design goal as a result. For example, the building common and other space in the Eagleton U.S. Courthouse in St. Louis is 77 percent larger than planned, and the courthouse has an efficiency of 56 percent. While we could not determine the cause of all of this additional space, all courtroom floors of the St. Louis courthouse have mechanical rooms near the courtrooms, and in total, the mechanical space in the St. Louis courthouse takes up proportionally more space than it does in the DeConcini U.S. Courthouse in Tucson, Arizona. In addition, the Eagleton U.S. Courthouse in St. Louis has two empty elevator shafts—rising all 33 floors—that were built but are not used. Together, the mechanical space and the elevator shafts bring the efficiency of the Eagleton U.S. Courthouse well below GSA’s target of 67 percent and limit the proportion of the building’s total space that contributes to mission-related activities. Moreover, regional GSA officials stated that they were unaware until we told them that the courthouse was larger and less efficient than authorized. Another element of GSA’s lack of oversight in this area was its failure to ensure that architects followed its policies for how to measure certain commonly included spaces, such as atriums. According to GSA officials, a primary reason why the Limbaugh, Sr., U.S. Courthouse in Cape Girardeau, Missouri, and the Bryant U.S. Courthouse Annex in Washington, D.C., exceeded their congressionally authorized square footage is that the architect did not consider the upper atrium levels as part of the gross square footage of the courthouse—in conflict with GSA’s standards for measuring atrium space. In GSA’s policy for determining a building’s gross square footage, the atrium space is counted on all floors because multifloor atriums increase a building’s volume and gross square footage and thus its costs. However, according to GSA officials, GSA’s practice in the early 2000s—when the Limbaugh, Sr., and Bryant Courthouses were under design—was to rely on the architect to measure and validate the plans for the courthouse, and GSA did not expect its regional or headquarters officials to monitor or check whether the architect was following GSA’s policies.
Although GSA officials emphasized that open space for atriums would not cost as much as space completely built out with floors, these officials also agreed that there are costs associated with constructing and operating atrium space. In fact, the 2007 edition of the Design Guide, which reflects an effort to impose tighter constraints on future space and facilities costs, emphasizes that courthouses should have no more than one atrium. GSA’s lack of focus on meeting authorized square footage also contributed to increases in the size of tenant spaces in five of our seven case study courthouses. For example, the Ferguson, Jr., U.S. Courthouse in Miami has about 46,924 more square feet of tenant space than planned. The district court has about 20,768 more square feet of space in this courthouse than planned. Among other things, the 14 regular district courtrooms built in this courthouse are each about 2,800 square feet—17 percent larger than the Design Guide standard of 2,400 square feet—while the two special proceedings courtrooms on the 13th floor are each about 3,200 square feet, about 7 percent larger than the Design Guide standard of 3,000 square feet. GSA officials stated that courtroom space is among the most expensive of courthouse spaces to construct and that the Design Guide’s criteria are in part meant to help ensure that courthouses are built to be cost-effective as well as functional. In addition, some courthouses encompass more courtroom space than planned because, during the planning stages, neither the judiciary nor GSA took into account the possibility that the design of the courthouse could double the square footage attributable to each courtroom. Courthouses have been designed in various ways to address the height requirement for courtroom ceilings. For example, in a collegial floor plan, courtroom floors alternate with floors for judicial chambers and other spaces that do not need higher ceilings, so that each floor can be built to a height that is suitable for the rooms it contains. However, because federal courthouses have typically been built with judges’ chambers on the same floors as the courtrooms, some courthouses have courtrooms on floors designed to hold rooms with 10-foot ceilings, and the ceiling of each courtroom is cut out so that each courtroom takes up two floors. For example, the Eagleton U.S. Courthouse in St. Louis and the Bryant U.S. Courthouse Annex in Washington, D.C., were constructed with courtrooms that span two floors. According to GSA’s policy, when a courthouse is designed so that a courtroom takes up two floors, the space on the second floor—referred to as a tenant floor cut—is considered part of the gross square footage of the building and—if it would otherwise be usable space—is also considered to be court-occupied space. Therefore, in this type of courthouse, each courtroom is counted as having double the square footage of the courtroom floor. Although the extra square footage in this type of courtroom is multistory space that, like atrium space, costs less to build than fully built-out square footage according to GSA, there are nevertheless costs associated with it. Judiciary officials said that space planning is done well before they know if they will need to incorporate additional space for tenant floor cuts in courtrooms.
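The double counting of two-story courtrooms described above can be illustrated with a minimal Python sketch; the design choice shown is hypothetical, and only the 2,400-square-foot Design Guide standard comes from this testimony.

```python
# Sketch of the tenant floor cut accounting described above (hypothetical
# design). Under GSA policy, a courtroom that spans two floors is counted
# on both floors when measuring a building's gross square footage.

DESIGN_GUIDE_COURTROOM = 2_400  # Design Guide standard district courtroom, sq ft

def courtroom_gross_sqft(floor_area, spans_two_floors):
    # The cut-out second-floor area counts toward gross square footage.
    return floor_area * 2 if spans_two_floors else floor_area

single_story = courtroom_gross_sqft(DESIGN_GUIDE_COURTROOM, False)  # 2,400
two_story = courtroom_gross_sqft(DESIGN_GUIDE_COURTROOM, True)      # 4,800

# A space plan that counts only the courtroom floor understates the gross
# square footage of a two-story design by the difference:
print(two_story - single_story)  # 2,400 sq ft per courtroom
```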
Under the judiciary’s current automated space planning tool, AnyCourt, which the judiciary uses to determine how much court-related space to request for a new courthouse, the Design Guide’s standard of 2,400 square feet is provided for each district courtroom planned for a new courthouse. However, because the gross square footage requirements that GSA identifies in the prospectus to congressional committees are based on AnyCourt’s output for the amount of space needed by the courts, for courthouses designed with district courtrooms that have tenant floor cuts, the AnyCourt program identifies only half of the square footage attributable to each courtroom when calculating the courthouse’s gross square footage following GSA’s standards. If GSA requests court space based on the AnyCourt model, it therefore may not be requesting sufficient space to account for courtrooms that are designed with tenant floor cuts. Recently, GSA has taken some steps to improve its oversight of the courthouse construction process. In May 2009, GSA published a revised space assignment policy to clarify and emphasize its policies on counting the square footage of atriums and tenant floor cuts, among other things. In addition, according to GSA officials, GSA established a collaborative effort in 2008 between its Office of Design and Construction and its Real Estate Portfolio Management to, among other things, use data management software to ensure that GSA’s space guidelines are followed in the early planning phases of courthouse projects. It is not yet clear whether these steps will establish sufficient oversight to ensure that courthouses are planned and constructed within the congressionally authorized square footage. Our analysis of construction plans for the 33 courthouses built since 2000 shows that 28 have reached or passed their 10-year planning period and that 23 of those 28 courthouses have fewer judges than estimated. Overall, the judiciary has 119, or approximately 26 percent, fewer judges than the 461 it estimated it would have. As a result, these 23 courthouses have extra courtrooms, chamber suites, and related support, building common, and other spaces covering approximately 887,000 square feet (see fig. 4). Six of the seven case study courthouses we reviewed have reached the end of their 10-year planning period and were designed for more judges than they actually have. Table 2 compares the estimated and actual numbers of judges for each of these courthouses and the space consequences of overestimating the number of judges. Figure 5 illustrates two unassigned chamber suites in the Coyle Courthouse in Fresno, California. Inaccurate caseload growth projections led the judiciary to estimate a need for more judges and subsequently overestimate the need for space for some courthouse projects. In a 1993 report, we questioned the reliability of the caseload projection process the judiciary used. For this report, we were not able to determine the degree to which inaccurate caseload projections contributed to inaccurate judge estimates because the judiciary did not retain the historic caseload projections used in planning the courthouses. However, judiciary officials at three of our site visit courthouses indicated that the estimates used in planning for these courthouses inadvertently overstated the growth in district case filings and, hence, the need for additional judges. For example, for the Eagleton Courthouse in St.
Louis, judiciary officials said the district estimated that it would need four additional district judges by 2004 to handle a high level of estimated growth in case filings; however, that case filing growth never materialized, and the Eagleton Courthouse has the same number of authorized judges that it had in 1994 when the estimates were made. Specifically, the Eastern District of Missouri, in which the Eagleton Courthouse is located, had 3,182 case filings in 1994 and 3,241 case filings in 2008 (see fig. 6). Limitations of the judiciary’s 10-year judge estimates are also due, in part, to the challenges associated with predicting how many judges will be located in a courthouse in 10 years, which has led the judiciary to overestimate how many judges it would have in courthouses after 10 years or more. Determining how many requested judgeships will be authorized is also challenging for several reasons. First, Congress has authorized fewer positions than the judiciary has requested over the years, and it has been 20 years since Congress passed comprehensive judgeship legislation. Yet the judiciary did not incorporate historic trends into its planning for new courthouses. Instead, it requested new courthouses that could accommodate the number of judges it would have if all of its estimated judgeships were approved, and some of the excess space in new courthouses reflects the judiciary’s receipt of fewer judgeships than it requested. Problems with the reliability of the weighted caseload data—the workload indicator that the judiciary uses to decide when a new judge is needed—can also undermine the credibility of the judiciary’s requests for new judgeships. For example, in a 2009 hearing, a member of Congress cited a lack of reliability in weighted caseload data to question whether all of the requested judgeships were necessary. In a 2008 report, we found that weighted caseload is not reliable because its accuracy for district and appeals courts cannot be tested. A second challenge the judiciary faces in estimating how many judges it will need for specific courthouses is that judgeships are requested, and thus authorized, at the district or circuit level as a whole, rather than for a specific courthouse. Hence, it is hard to predict which courthouses would receive the additional judgeships requested in the Federal Judgeship Act of 2009 if the positions were authorized. However, the judiciary’s estimation process does not take this uncertainty into account. For example, in 2009, the judiciary requested 18 judgeships for districts that contain courthouses built since 2000, but not all of the judges for these requested judgeships, if approved by Congress, would necessarily be placed in those courthouses. Most courthouses constructed since 2000 have enough courtrooms for all of the district and magistrate judges to have their own courtrooms. Using the judiciary’s data, we designed a model for courtroom sharing, described further below, which shows that judges could share courtrooms at a high enough level to reduce the number of courtrooms needed in 27 of the 33 district courthouses built since 2000 by a total of 126 courtrooms—about 40 percent of the total number of district and magistrate courtrooms constructed since 2000. In total, not building these courtrooms and their associated support, building common, and other spaces would have reduced construction by approximately 946,000 square feet (see fig. 7).
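The sketch below is not our model; it is a deliberately simplified Python illustration of the kind of courtroom-sharing capacity calculation involved, in which each judge’s scheduled events consume a fraction of one courtroom’s available time and a share of time is reserved as an unscheduled buffer. The judge count, usage rate, and buffer value are hypothetical.

```python
import math

# Simplified courtroom-sharing capacity sketch (hypothetical parameters).
# Each courtroom supplies one unit of available time; each judge's scheduled
# events consume a fraction of that unit; a buffer of unscheduled time is
# reserved to absorb unpredictability.

def courtrooms_needed(num_judges, avg_use_per_judge, buffer=0.20):
    """avg_use_per_judge: fraction of one courtroom's available time that one
    judge's scheduled events consume; buffer: share of each courtroom's time
    left unscheduled."""
    usable_capacity = 1.0 - buffer
    return math.ceil(num_judges * avg_use_per_judge / usable_capacity)

# 12 district judges whose events (case-related proceedings, non-case
# events, and late cancellations) fill half of a courtroom's time:
print(courtrooms_needed(12, 0.50))  # 8 courtrooms instead of 12
```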
According to the judiciary’s data, courtrooms are used for case-related proceedings only a quarter of the available time or less, on average. Furthermore, no event was scheduled in courtrooms for half the time or more, on average. Figure 8 illustrates the average daily uses of courtrooms assigned to single district, senior district, or magistrate judges. These low levels of courtroom usage are consistent across courthouses regardless of case filings. Specifically, the judiciary’s data showed no correlation between the number of weighted and unweighted cases filed in a courthouse and the amount of time courtrooms are in use. Although the judiciary uses weighted case filings as the measurement criterion for requesting additional judgeships, this representation of higher levels of activity does not translate into higher courtroom usage rates, according to the judiciary’s courtroom use data. According to the data, courthouses located on the nation’s border and those with higher pending caseloads do make greater-than-average use of their courtrooms, but other courthouses in the same districts offset that higher use for district and senior district judges’ courtrooms. Based on the low levels of use indicated by the judiciary’s data, we found that sharing is feasible in 27 of the 33 district courthouses built since 2000 and could have resulted in the construction of 126 fewer courtrooms—40 percent of all district and magistrate courtrooms in those courthouses. The Design Guide in place when these courthouses were built encouraged judicial circuits to adopt courtroom-sharing policies for senior judges. However, most of the courthouses constructed since 2000 provided enough courtrooms for all district and magistrate judges to have their own courtrooms. The 2008 study by the judiciary states that the data collected during the study could be used with computer modeling to determine how levels of use might translate into potential sharing opportunities for judges, but that such a determination was outside the scope of the study. As a result, we applied generally accepted modeling techniques to the judiciary’s data to develop a computer model for sharing courtrooms. The model ensures sufficient courtroom time for (1) all case-related activities; (2) all time allotted to non-case-related activities, such as preparation time, ceremonies, and educational purposes; and (3) all events cancelled or postponed within a week of the event. Under our model, the remainder of time remains unscheduled—approximately 18 percent of the time for district courtrooms and 22 percent of the time for magistrate courtrooms, on average. In this way, our model includes substantial time when the courtroom is not in use for case proceedings. Some non-case-related events could be held outside of normal business hours, and 60 percent of events are cancelled or postponed within 1 week of the event’s original date, according to the judiciary’s data. Not allocating time in the model for these purposes would create even more opportunity for sharing; however, we chose to include these data to keep the model conservative and allow for unpredictability. The judiciary’s report also included a section of case studies based on in-depth interviews with judges at courthouses where judges share courtrooms.
These interviews suggested that courtrooms can be shared in two ways: (1) dedicated sharing, in which judges are assigned to share specific courtrooms, and (2) centralized sharing, in which all courtrooms are available for assignment to any judge based on need. Our model shows the following possibilities for dedicated courtroom sharing, with additional unscheduled time to spare. (See table 3.) Our model shows that centralized sharing further improves efficiency by increasing the number of courtrooms each judge can access, whereas in dedicated sharing judges use only the shared courtroom assigned to them. We used the model to estimate how the courtrooms in one courthouse could be shared both ways. Specifically, to illustrate the increased efficiency of centralized sharing over dedicated sharing, we applied the two types of sharing to the current district and magistrate judges in the Ferguson Courthouse in Miami, Florida. Currently, the Ferguson Courthouse has 26 courtrooms for 26 judges, including 12 district judges, 3 senior district judges, and 11 magistrate judges (two of whom are recalled). Under a dedicated sharing model, the Ferguson Courthouse could accommodate these judges in 15 courtrooms. Under a centralized sharing model, in which all district judges have access to all district judge courtrooms and all magistrate judges have access to all magistrate courtrooms, the number of needed courtrooms is reduced to 14. Table 4 shows the levels of sharing possible and the amount of space that could be eliminated for all seven of our case study courthouses through centralized sharing. We solicited expert views on the challenges related to courtroom sharing through interviews with judges and court administrators on site visits to courts with sharing experience and through the assistance of the National Academy of Sciences in assembling a panel of judicial experts. While some judges remained skeptical that courtroom sharing among district judges could work on a permanent basis, judges with experience in sharing courtrooms said that they overcame the challenges when necessary and that trials were never postponed because of sharing. The primary concern judges cited was the possibility that a courtroom might not be available. They stated that the certainty of having a courtroom available encourages involved parties to resolve cases more quickly. They further noted that courtroom sharing could be a disservice to the public if it meant that an event had to be rescheduled for lack of a courtroom; in that case, defendants, attorneys, families, and witnesses would also have to reschedule, costing the public time and money. To address the concern that a courtroom would not be available when needed, we programmed our model to provide more courtroom time than necessary to conduct court business. Most judges with experience sharing courtrooms agreed that court staff must work harder than in nonsharing arrangements to coordinate with judges and all involved parties to ensure that everyone is in the correct courtroom at the correct time, but that such coordination is possible as long as people remain flexible and the lines of communication remain open. Another concern about sharing courtrooms was how the court would manage when judges have long trials. Judges noted that long trials present logistical challenges requiring substantial coordination and continuity, which could be difficult when sharing courtrooms.
However, when the number of total trials is averaged across the total number of judges, each judge has approximately 15 trials per year, with the median trial lasting 1 or 2 days. Hence, it is highly unlikely that all judges in a courthouse would simultaneously have long trials. Also, a centralized sharing arrangement would allow judges who need a courtroom for multiple days to reserve one. To address panelists' concern about sharing courtrooms between district and magistrate judges, which stems in part from differences in responsibilities that can affect courtroom design and could make formal courtroom sharing inappropriate, our model separated district and magistrate judges for sharing purposes, reducing the potential for sharing that could occur through cross-scheduling in courthouses with both district and magistrate judges. In 2008 and 2009, the Judicial Conference adopted sharing policies for future courthouses under which senior district and magistrate judges will share courtrooms at a rate of two judges per courtroom, plus one additional duty courtroom for courthouses with more than two magistrate judges. Additionally, the conference recognized the greater efficiencies available in courthouses with many courtrooms and recommended that district judges also share in courthouses with more than ten district judges. Our model's application of the judiciary's data shows that more sharing opportunities are available. Specifically, sharing among district judges could reduce courtroom needs by one-third in courthouses of all sizes, not just the largest, by having three district judges share two courtrooms. Sharing between senior district judges could also be increased by having three senior judges, instead of two, share one courtroom. If implemented, these opportunities could further reduce the need for courtrooms, thereby decreasing the size of future courthouses. To date, the Judicial Conference has made no recommendations for bankruptcy judges to share courtrooms. However, the judiciary is conducting a study of bankruptcy courtrooms similar to the 2008 district court study and expects to complete it in 2010. While it is too late to reduce the extra space in the 33 courthouses constructed since 2000, for at least some of the 29 additional courthouse projects underway and for all future courthouse construction projects not yet begun, GSA and the judiciary have an opportunity to align their courthouse planning and construction with the judiciary's real need for space. Such changes would greatly reduce construction, operations and maintenance, and rent costs. We have draft recommendations related to GSA's oversight of courthouse construction projects and the judiciary's planning and sharing of courtrooms that we plan to finalize in our forthcoming report after fully considering agency comments. Madam Chairwoman and members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or the other members of the subcommittee may have. If you or your staff have any questions concerning this testimony, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made major contributions to this testimony include Tammy Conquest (Assistant Director), Keith Cunningham, Bess Eisenstadt, Brandon Haller, William Jenkins, Susan Michal-Smith, Steve Rabinowitz, Alwynne Wilbur, Jade Winfree, and Sarah Wood.
For the 33 federal courthouses completed since 2000, we examined (1) whether the courthouses contain extra space and any costs related to it, (2) how the actual size of the courthouses compares with the congressionally authorized size, (3) how courthouse space based on the judiciary's 10-year estimates of judges compares with the actual number of judges, and (4) whether the level of courtroom sharing supported by data from the judiciary's 2008 study of district courtroom sharing could have changed the amount of space needed in these courthouses. The 33 courthouses in our scope are listed in table 5. To meet all four objectives, for each of the 33 courthouses in our scope, we reviewed the site and design prospectuses, the construction prospectus, and other relevant fact sheets and housing plans provided by the General Services Administration (GSA) to congressional authorizing committees to support the request, as well as the congressional authorizations provided at the construction phase of the project. To understand how much square footage is allocated to different types of courthouse space and the process for determining how much space is requested for a new courthouse, we reviewed the 1997 and 2007 editions of the judiciary's Design Guide and examples of the judiciary's space program model, AnyCourt, for those courthouse projects in our scope for which an AnyCourt model had been developed. We discussed GSA's and the judiciary's processes for planning and constructing courthouses with GSA officials, both verbally and in writing, and we requested and received written responses to questions related to the judiciary's process for determining its space needs. We also reviewed prior GAO work on courthouse construction and rent paid by the judiciary to GSA, and we researched relevant laws. Furthermore, to inform all four objectives, we selected 7 federal courthouses in our scope to analyze more closely as case studies. We chose these 7 case studies because they provided examples of courthouses that are larger than congressionally authorized. In addition, we chose these sites to represent a wide distribution of courthouse sizes, dates of completion, and geographical locations. Our analysis of courthouse size and cost is based on data for all courthouses and major annexes completed from 2000 through March 2010. The information specifically from our site visits cannot be generalized to that population. These case studies included the following courthouses: (1) Bryant U.S. Courthouse Annex in Washington, D.C.; (2) Coyle U.S. Courthouse in Fresno, California; (3) D'Amato U.S. Courthouse in Central Islip, New York; (4) DeConcini U.S. Courthouse in Tucson, Arizona; (5) Eagleton U.S. Courthouse in St. Louis, Missouri; (6) Ferguson, Jr., U.S. Courthouse in Miami, Florida; and (7) Limbaugh, Sr., U.S. Courthouse in Cape Girardeau, Missouri. For these courthouses, we analyzed blueprints labeled with size and tenant allocations for each space, which we requested and received from GSA. For all of these courthouses except the DeConcini Courthouse in Tucson, we visited the courthouse, where we toured the facility and met with court officials, including judges, circuit executives, and others involved in planning for judicial space needs and requesting and using courthouse space, and with GSA officials involved in planning, constructing, and operating the courthouse.
For the DeConcini Courthouse, we reviewed workpapers from a prior GAO engagement that included a December 2005 visit to the Tucson courthouse involving a tour of the courthouse and discussions with court and GSA staff. During our meetings with court officials, we discussed issues pertaining to all four of our objectives, including the process for determining the size needed for the courthouse, the planning and construction of the courthouse, and the current uses of courthouse space, including courtrooms and chambers. We also sought the officials' views on the potential for more than one judge to share a courtroom. In addition to these activities, we performed the following work related to each specific objective: To determine whether the courthouses contain extra space and any costs related to it, we added together any extra square footage due to an increase in the courthouse's gross square footage over the congressional authorization, inaccurate judge estimates, and less sharing than is supported by the judiciary's data, as described below in the methodology for the other objectives. We consider the sum of the extra space, as calculated according to the methods described in our discussion of the following objectives, to be the extra space for each courthouse. We then discussed with construction experts within GAO, at the Construction Institute of America, and at a private-sector firm that specializes in developing cost estimates for the construction of buildings how to calculate an order-of-magnitude estimate for the cost of increasing a courthouse's square footage. Based on these conversations, we estimated the cost per square foot through the following method: To determine the total construction cost of each courthouse, we obtained from GSA the total net obligations, excluding claims, for each of the 33 courthouses through September 11, 2009 (i.e., the total cost of each project as of that date), and determined that these data were sufficiently reliable for our purposes through discussions with GSA officials and by reviewing information related to the reliability of these data from a previous GAO engagement. GSA officials told us that GSA could not break out the construction costs from the total costs of courthouse projects. Therefore, except for most annexes, we then subtracted from the total project costs the estimates GSA had provided for site, design, and management and inspection costs in its construction prospectuses to congressional authorizing committees. We consider the resulting figure to be an estimate of the total construction cost for each courthouse. We then calculated the construction cost per square foot by dividing the construction cost of each courthouse, as calculated above, by the gross square footage of each courthouse, as measured using GSA's measurement program, ESmart, and reported by GSA. For annex projects that involved substantial work on older buildings, we used a different method to determine the construction cost per square foot. GSA officials told us that for those annexes that involved substantial costs both to renovate an older building and to construct a new annex, they could not separate the costs of work done on the annex from the costs of any work done on the older building. Therefore, we used GSA's estimated cost per square foot for constructing the annex, which was reported in the construction prospectus, as our figure for the construction cost per square foot.
We then reduced the construction cost per square foot of each courthouse or annex by 10 percent, based on discussions with construction experts, to account for the economies of scale that cause the construction cost per square foot to decrease slightly in larger buildings. We removed the effect of inflation from the estimates by applying, based on each courthouse's completion date, two sources of information on annual increases in construction costs: the Bureau of Economic Analysis's Office Construction Series for years through 2008 and the Global Insight Projections on Commercial Construction Costs for 2009 to the present. Then, we multiplied the sum of the extra square footage by the construction cost per square foot for each courthouse to estimate the total construction cost implications for each courthouse. To estimate the annual cost to rent or to operate and maintain the extra space, we took the following steps. To the extent practical, we determined whether the cost of the extra space is directly passed on to the judiciary as rent. If the cost of the space is passed on to the judiciary as rent, such as for extra courtrooms, we calculated the annual rental costs of the space to the judiciary. To do so, we obtained information on the rent payments that the judiciary made to GSA for fiscal year 2009, which we determined was reliable for our purposes. Then, we multiplied the annual rent per square foot for each courthouse by any extra square footage. If the costs of the space are not directly passed on to the judiciary as rent (including the costs of all the extra space, if any, due to construction above the congressional authorization, which we did not attempt to allocate among the judiciary, other tenants, and GSA), we calculated the annual operations and maintenance costs of the space. To do so, we obtained from GSA the total operations and maintenance costs for each of the 33 courthouses for fiscal year 2009 and determined that these data were sufficiently reliable for our purposes. For each courthouse, we divided these costs by the actual gross square footage to derive an operations and maintenance cost per square foot. We then multiplied the cost per square foot by any extra square feet. Finally, we summed the extra operations and maintenance costs and the extra rent costs for all 33 courthouses built since 2000. To determine how the actual size of the courthouses compares with the congressionally authorized size, we compared the congressionally authorized gross square footage of each courthouse with the gross square footage of the courthouse as measured by GSA's space measurement program, ESmart. We determined that these data were sufficiently reliable for our purposes through discussions with GSA officials on practices and procedures for entering data into ESmart, including GSA's efforts to ensure the reliability of these data. To determine the extent to which a courthouse that exceeded its authorized size by 10 percent or more also had total project costs that exceeded the total project cost estimate provided to the congressional authorizing committees, we used the same information obtained from GSA on the total net obligations (i.e., total project costs), excluding claims, for each of these courthouses through September 11, 2009, as described above. We compared the total project cost for each courthouse to the total project cost estimate provided to the congressional authorizing committees in the construction prospectus or related fact sheets.
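To make the cost arithmetic described above easier to follow, the sketch below walks through the same steps with invented inputs. It is an illustration only; the figures, function names, and the rent and O&M rates are hypothetical placeholders, not GSA or judiciary data.

```python
# Illustrative sketch of the order-of-magnitude cost method described
# above. All inputs are hypothetical, not actual GSA data.

ECONOMY_OF_SCALE_DISCOUNT = 0.10  # per the construction experts consulted

def construction_cost_per_sq_ft(total_obligations, site_design_mgmt_estimates,
                                gross_sq_ft, inflation_adjustment):
    """Estimate a courthouse's construction cost per gross square foot.

    total_obligations: total net obligations for the project (excluding claims)
    site_design_mgmt_estimates: prospectus estimates for site, design, and
        management and inspection costs, which are subtracted out
    inflation_adjustment: index factor converting completion-year dollars to
        current dollars (from construction cost indexes)
    """
    construction_cost = total_obligations - site_design_mgmt_estimates
    per_sq_ft = construction_cost / gross_sq_ft
    per_sq_ft *= 1 - ECONOMY_OF_SCALE_DISCOUNT  # larger buildings cost slightly less per sq ft
    return per_sq_ft * inflation_adjustment

def annual_cost_of_extra_space(extra_sq_ft, rent_per_sq_ft, om_per_sq_ft,
                               rent_passed_to_judiciary):
    """Annual rent (if passed to the judiciary) or O&M cost of extra space."""
    rate = rent_per_sq_ft if rent_passed_to_judiciary else om_per_sq_ft
    return extra_sq_ft * rate

# Hypothetical courthouse: $150M total obligations, $30M site/design/M&I,
# 500,000 gross sq ft, and 60,000 extra sq ft whose rent the judiciary pays.
cost_psf = construction_cost_per_sq_ft(150e6, 30e6, 500_000, inflation_adjustment=1.10)
extra_construction = 60_000 * cost_psf
extra_annual = annual_cost_of_extra_space(60_000, rent_per_sq_ft=30.0,
                                          om_per_sq_ft=8.0,
                                          rent_passed_to_judiciary=True)
print(f"Extra construction cost: ${extra_construction / 1e6:.1f} million")
print(f"Extra annual cost: ${extra_annual / 1e6:.2f} million")
```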
We also examined GSA's communications to the committees on appropriations for four courthouses that we found exceeded the authorized size and estimated total budget by about 10 percent or more. To increase our understanding of how and why courthouse size exceeds congressionally authorized size, we reviewed GSA's space measurement policy and guidance and discussed these documents with GSA officials. We also discussed the reasons that some courthouses are larger than congressionally authorized with GSA headquarters and regional officials and reviewed written comments on the size and space allocations for some of our case study courthouses. In addition, for two of the case study courthouses, we contracted with an engineer and architect to advise us on analyzing the extra space in these courthouses. To determine how courthouse space based on the judiciary's 10-year estimates of the number of judges compares with the actual number of judges, we used courthouse planning documents to determine how many judges the judiciary estimated it would have in each courthouse in 10 years. We then compared that estimate with the judiciary's data showing how many judges are located there, including authorized vacancies identified for specific courthouses, and we interviewed judiciary officials. We determined that these data were sufficiently reliable for our purposes. To determine the effects of any differences, we calculated how much excess space exists in courthouses that were estimated to have more judges than are currently seated there at least 10 years after the 10-year estimates were made. We also discussed the challenges associated with accurately estimating the number of judges in a courthouse with judicial officials and analyzed judiciary data where available. To determine whether the level of courtroom sharing supported by data from the judiciary's 2008 study of district courtroom sharing could have changed the amount of space needed in these courthouses, we also took the following steps: We created a simulation model to determine the level of courtroom sharing supported by the data. The data used to create the simulation model for courtroom usage were collected by the Federal Judicial Center (FJC)—the research arm of the federal judiciary—for its Report on the Usage of Federal District Court Courtrooms, published in 2008. The data collected by FJC came from a stratified random sample of federal court districts designed to ensure a nationally representative sample of courthouses—that is, FJC sampled from small, medium, and large districts, as well as districts with low, medium, and high weighted filings. Altogether, there were 23 randomly selected districts and 3 case study districts, which included 91 courthouses, 602 courtrooms, and every circuit except that of the District of Columbia. The data were collected in 3-month increments over a 6-month period in 2007, for a total of 63 federal workdays, by trained court staff who recorded all courtroom usage, including scheduled but unused time. These data were then verified against three independently recorded sources of data about courtroom usage. Specifically, the sample data were compared with JS-10 data routinely recorded for courtroom events conducted by district judges, MJSTAR data routinely recorded for courtroom events conducted by magistrate judges, and data collected by independent observers in a randomly selected subset of districts in the sample. We verified that these methods were reliable and empirically sound for use in simulation modeling.
To create the simulation model, we contracted for the services of a firm with expertise in discrete event simulation modeling. This engineering services and technology consulting firm uses advanced computer modeling and visualization, as well as other techniques, to maximize throughput, improve system flow, and reduce capital and operating expenses. Working with the contractor, we discussed the assumptions made for the inputs of the model and verified the output with in-house data experts. We designed this sharing model in conjunction with a specialist in discrete event simulation and the company that designed the simulation software to ensure that the model conformed to generally accepted simulation modeling standards and was reasonable for the federal court system. The model was also verified with the creator of the software to ensure proper use and model specification. Simulation is widely used in modeling any system in which there is competition for scarce resources. The goal of the model was to determine how many courtrooms are required for courtroom utilization rates similar to those recorded by FJC. This determination is based on data for all courtroom use time collected by FJC, including time when the courtroom was scheduled to be used but the event was cancelled within one week of the scheduled date. The completed model allows, for each courthouse, user input of the number and types of judges and courtrooms, and the output states whether the utilization of the courtrooms exceeds the availability of the courtrooms in the long run. When using the model to determine the level of sharing possible at each courthouse, based on scheduled courtroom availability on weekdays from 8 a.m. to 6 p.m., we established a baseline of one courtroom per judge to the extent that this sharing level exists at the 33 courthouses built since 2000. Then we entered the number of judges from each courthouse and determined the smallest number of courtrooms needed to avoid a backlog in court proceedings. To understand judges' views on the potential for, and problems associated with, courtroom sharing, we contracted with the National Academy of Sciences to convene a panel of judicial experts. This panel, which consisted of seven federal judges, three state judges, one judicial officer, one attorney, and one law professor and scholar, discussed the challenges of and limitations to courtroom sharing. Not all invited panelists were able to attend the live panel; those who could not were contacted and interviewed individually. We also conducted structured interviews, either in person or via telephone, with 14 federal judges, 1 court staff member, 1 state judge, 2 D.C. Superior Court judges, 1 lawyer, and 1 academic, during which we discussed issues related to the challenges and opportunities associated with courtroom sharing. Additionally, we used district courtroom scheduling and use data to model courtroom-sharing scenarios. We determined that these courtroom data were sufficiently reliable for our purposes by analyzing the data, reviewing the data collection and validation methods, and interviewing staff who collected and analyzed the data. Besides the 7 courthouses we selected as case studies, we visited 2 district courthouses where courtroom sharing has been used—the Moynihan U.S. Courthouse in Manhattan, New York, and the Byrne U.S. Courthouse in Philadelphia, Pennsylvania. In addition, we visited the Roosevelt U.S. Courthouse Annex in Brooklyn, New York, as an example of a courthouse with a collegial floor plan.
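The contractor's discrete event model itself is not reproduced here, but the basic question it answers, whether a given number of judges can be served by a smaller number of courtrooms without a backlog, can be illustrated with a much-simplified Monte Carlo sketch. The daily demand distributions below are invented stand-ins for the FJC usage data (the actual model works at the level of individual courtroom events), so the output is illustrative only.

```python
# A much-simplified sketch of the sharing question the model answers:
# given some number of judges and courtrooms, does demand for courtroom
# hours exceed courtroom availability in the long run? The demand
# distributions are hypothetical stand-ins for the FJC usage data.

import random

HOURS_PER_DAY = 10  # courtrooms scheduled weekdays, 8 a.m. to 6 p.m.

def daily_demand_hours(rng):
    """One judge's courtroom-hour demand for a day. Like the model, it
    reserves time for case proceedings, non-case uses, and events
    cancelled or postponed within a week. Distributions are invented."""
    case_related = rng.uniform(0.0, 4.0)
    non_case = rng.uniform(0.0, 1.5)       # prep, ceremonies, education
    late_cancellations = rng.uniform(0.0, 1.5)
    return case_related + non_case + late_cancellations

def sharing_feasible(judges, courtrooms, days=10_000, seed=1):
    """True if aggregate demand fits within courtroom capacity on every
    simulated day, i.e., no backlog of proceedings accumulates."""
    rng = random.Random(seed)
    capacity = courtrooms * HOURS_PER_DAY
    return all(
        sum(daily_demand_hours(rng) for _ in range(judges)) <= capacity
        for _ in range(days)
    )

# Smallest number of courtrooms that 12 judges could share with no backlog.
judges = 12
for courtrooms in range(1, judges + 1):
    if sharing_feasible(judges, courtrooms):
        print(f"{judges} judges could share {courtrooms} courtrooms")
        break
```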
We conducted this performance audit from September 2008 to May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal judiciary and the General Services Administration (GSA) are in the midst of a multibillion-dollar courthouse construction initiative, which began in the early 1990s and has since faced rising construction costs. As requested, for 33 federal courthouses completed since 2000, GAO examined (1) whether they contain extra space and any costs related to it, (2) how their actual size compares with the congressionally authorized size, (3) how their space based on the judiciary's 10-year estimates of judges compares with the actual number of judges, and (4) whether the level of courtroom sharing supported by the judiciary's data could have changed the amount of space needed in these courthouses. GAO analyzed courthouse planning and use data, visited courthouses, modeled courtroom sharing scenarios, and interviewed judges, GSA officials, and other experts. The findings in this testimony are preliminary because the federal judiciary and GSA are still in the process of commenting on GAO's draft report and did not provide comments on this testimony. The 33 federal courthouses completed since 2000 include 3.56 million square feet of extra space—28 percent of the total 12.76 million square feet constructed. The extra square footage consists of space that was constructed (1) above the congressionally authorized size, (2) due to overestimating the number of judges the courthouses would have, and (3) without planning for courtroom sharing among judges. Overall, this space represents about 9 average-sized courthouses. The estimated cost to construct this extra space, when adjusted to 2010 dollars, is $835 million, and the annual cost to rent, operate, and maintain it is $51 million. Twenty-seven of the 33 courthouses completed since 2000 exceed their congressionally authorized size by a total of 1.7 million square feet. Fifteen exceed their congressionally authorized size by more than 10 percent, and 12 of these 15 also had total project costs that exceeded the estimates provided to congressional committees—8 by less than 10 percent and 4 by 10 to 21 percent. There is no requirement to notify congressional committees about size overages, as is required for cost overages of more than 10 percent. A lack of oversight by GSA, including a lack of focus on not exceeding the congressionally authorized size, contributed to these size overages. The judiciary overestimated the number of judges that would be located in 23 of 28 courthouses whose space planning occurred at least 10 years ago, causing them to be larger and costlier than necessary. Overall, the judiciary has 119, or approximately 26 percent, fewer judges than the 461 it estimated it would have. This leaves the 23 courthouses with extra courtrooms and chamber suites that, together, total approximately 887,000 square feet. A variety of factors contributed to the judiciary's overestimates, including inaccurate caseload projections and long-standing difficulties in obtaining new authorizations. However, the degree to which inaccurate caseload projections contributed to inaccurate judge estimates cannot be measured because the judiciary did not retain the historic caseload projections used in planning the courthouses. Using the judiciary's data, GAO designed a model for courtroom sharing, which shows that there is enough unscheduled time for substantial courtroom sharing.
Sharing could have reduced the number of courtrooms needed in courthouses built since 2000 by 126 courtrooms—about 40 percent of the total number—covering about 946,000 square feet. Some judges GAO consulted raised potential challenges to courtroom sharing, such as uncertainty about courtroom availability, but others indicated they overcame those challenges when necessary, and no trials were postponed. The judiciary has adopted policies for future sharing for senior and magistrate judges, but GAO's analysis shows that additional sharing opportunities are available. For example, GAO's courtroom sharing model shows that there is sufficient unscheduled time for 3 district judges to share 2 courtrooms and 3 senior judges to share 1 courtroom.
Before Hurricane Katrina, 16 acute care hospitals operated in the greater New Orleans area. These hospitals included public as well as private for-profit and not-for-profit facilities. Because of the hurricane and resulting flooding, 7 hospitals remained closed as of June 2006. (See table 1.) Charity and University hospitals are part of the statewide system of 10 public hospitals. Charity Hospital, which served as a Level I trauma center, was built in 1937. University Hospital was built in 1972. These hospitals served as the primary health care safety net for many local residents. About half of the patients served by these hospitals were uninsured, and about one-third were covered by Medicaid, the federal-state program for financing health care for certain low-income individuals. Charity and University hospitals served as a major state resource through training programs for professionals in medicine, nursing, dentistry, and public health. Charity and University hospitals are eligible for federal aid under the Public Assistance program managed by FEMA to help repair the damage caused by Hurricane Katrina. This program, authorized by the Stafford Act, provides grants to pay up to 90 percent of the costs of restoring a facility to predisaster condition. A facility is considered repairable when the cost of repairing disaster damages does not exceed 50 percent of the cost of replacing the facility and it is feasible to repair the facility so that it can perform the function for which it was being used as well as it did immediately prior to the disaster. Although initial grant obligations are based on FEMA's estimate of the costs of repairs to restore the facility to its predisaster condition, reimbursements are based on actual, documented repair costs, which could be higher than the original estimate. Alternatively, if FEMA's estimated repair costs exceed 50 percent of its estimated replacement costs, FEMA is authorized to grant up to 90 percent of its estimated replacement costs to replace a facility. There is a possibility for additional federal reimbursements under the Public Assistance program for required code upgrades that are triggered by the repairs. Code upgrades, although eligible for reimbursements, are not included in determining whether repair costs exceed 50 percent of replacement costs. In the event that FEMA's estimated repair costs do not exceed 50 percent of its estimated replacement costs and a decision is made to replace rather than repair, funds authorized for repair may be used to build a new hospital, but reimbursements will be limited to 90 percent of FEMA's estimated cost to repair and restore the original facility to its predisaster condition. In addition, projects for hazard mitigation to prevent damage in future flooding events are eligible for Public Assistance funding. HHS is the federal government's principal agency for protecting the health of all Americans and providing essential human services. HHS's Centers for Medicare & Medicaid Services (CMS) administers Medicare, which finances health care for elderly and certain disabled individuals, and Medicaid. In its support role for long-term community recovery and mitigation under the National Response Plan, HHS coordinates federal government health care support to state, regional, local, and tribal governments; nongovernmental organizations; and the private sector to enable community recovery, such as recovery from the long-term consequences of Hurricane Katrina and the subsequent flooding.
In the greater New Orleans area, a sufficient number of staffed hospital inpatient beds existed for all types of care except psychiatric care; there was also a high demand for emergency department services. Based on information we obtained from hospital officials, we determined that as of April 2006 the greater New Orleans area had more staffed beds per 1,000 population than the national average, and over two-thirds of these beds were within 5 miles of Charity and University hospitals. While hospitals were able to maintain a sufficient number of staffed beds, hospital officials also reported that recruiting, hiring, and retaining nurses and support staff, such as nursing aides, housekeepers, and food service workers, to staff the available beds constituted a great challenge. Eight of the nine hospitals that remained open after Hurricane Katrina reported a high demand for services in their emergency departments, not unlike emergency departments in other parts of the country, which are also experiencing high demand. Based on information we obtained from hospital officials, we determined that as of April 2006 the greater New Orleans area had more staffed beds per 1,000 population than the national average. Before Hurricane Katrina, the population of the greater New Orleans area was about 1,002,000, with about 455,000 living within the city boundaries of New Orleans (Orleans Parish). The number of staffed hospital inpatient beds on hand to serve the people of the greater New Orleans area was 3,958, or about 4.0 staffed beds per 1,000 population, compared with the national average of 2.8 staffed beds per 1,000 population reported in 2006. The population of the greater New Orleans area remains in flux and is difficult to estimate, in part because of former residents who live outside the city and return during the day and workers involved in reconstruction activities. PricewaterhouseCoopers estimated the February 2006 population of the four parishes (Orleans, Jefferson, Plaquemines, and St. Bernard) to be 578,000, and the Louisiana Department of Health and Hospitals reported estimates of about 569,000 for January 2006 and 588,000 for April 2006. In April 2006, the hospitals in the greater New Orleans area reported to us that they were able to staff 1,878 of the 2,328 available beds. Based on their reports and the April 2006 population estimate, we calculated that the four parishes had 3.2 staffed beds per 1,000 population and 4.0 available beds per 1,000 population. About 69 percent of the available beds are within 5 miles of Charity and University hospitals, and about 91 percent are within 10 miles. Consequently, patients who live and work within Orleans Parish are close to hospital services. Figure 1 shows the location of all the hospitals in the greater New Orleans area, including the nine open hospitals we surveyed. Furthermore, hospital officials we surveyed told us that they planned to reopen additional staffed beds by the end of the year. For example, LSU plans to reopen 166 beds at University Hospital in late September or early October 2006 and an additional 224 beds by the end of the year, for a total of 390 additional staffed beds. Tulane University Hospital and Clinic plans to reopen an additional 117 staffed beds by the end of 2006. In all, hospitals plan to reopen at least 674 staffed beds by the end of 2006.
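The bed-to-population ratios above, and the population thresholds developed in the next paragraph, rest on simple arithmetic; the short check below reproduces them from the figures as reported. This is our illustration only, not part of the survey methodology.

```python
# Back-of-envelope check of the staffed-bed ratios above and the
# population thresholds discussed in the next paragraph, using the
# figures as reported in the text. Illustrative only.

def beds_per_1000(beds, population):
    """Hospital beds per 1,000 residents."""
    return beds / population * 1000

# Pre-Katrina: 3,958 staffed beds for about 1,002,000 residents.
print(f"Pre-Katrina: {beds_per_1000(3958, 1_002_000):.1f} staffed beds per 1,000")

# April 2006: 1,878 staffed (of 2,328 available) beds; population ~588,000.
print(f"April 2006: {beds_per_1000(1878, 588_000):.1f} staffed beds per 1,000")
print(f"April 2006: {beds_per_1000(2328, 588_000):.1f} available beds per 1,000")

# End of 2006: at least 674 more staffed beds planned, 2,552 in total.
staffed_dec_2006 = 1878 + 674
pop_up_30_pct = 588_000 * 1.3  # about 764,000
print(f"Dec. 2006, population up 30%: "
      f"{beds_per_1000(staffed_dec_2006, pop_up_30_pct):.1f} per 1,000")

# Population at which the ratio would fall to the 2.8 national average
# (about 911,000; consistent with the roughly 913,000 cited in the text).
print(f"Break-even population: {staffed_dec_2006 / 2.8 * 1000:,.0f}")
```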
Given these plans, even if the population of the greater New Orleans area rises 30 percent by the end of 2006 over the estimated population as of April 2006, there would be about 3.3 staffed beds per 1,000 population. This estimate assumes that the estimated population of 588,000 in April 2006 would increase to 764,000 by December 2006. Furthermore, the population of the greater New Orleans area would have to increase by 325,000, or about 55 percent, to 913,000 by December 2006 before staffed beds per 1,000 population dropped to the national average of 2.8. Consistent with nationwide data on occupancy rates (occupied beds as a percentage of staffed beds), the information we received on estimated occupancy rates from hospitals in the greater New Orleans area demonstrated wide month-to-month fluctuations. Nevertheless, these hospitals were able to meet the demand for inpatient care, with the exception, in many cases, of psychiatric care. Post-Hurricane Katrina hospital occupancy rates in the greater New Orleans area are higher than they were before the hurricane. For all types of care, eight of the nine hospitals we contacted provided us with an estimated overall occupancy rate for the 9-month period following the hurricane (through April 2006) and for the 12-month period before the hurricane. The hospitals' occupancy rates for the 9-month period after the hurricane ranged from 45 percent to 100 percent, with an average of 77 percent, compared with a range of 33 percent to 85 percent, with an average of 70 percent, for the 12-month period before the hurricane. The American Hospital Association reported that the average monthly hospital occupancy rate nationwide was 67 percent in 2004, the most recent year for which nationwide data are available. We also obtained actual occupancy rate information from the nine greater New Orleans area hospitals for one day—April 25, 2006—and five of them provided actual daily occupancy rate information for the entire month of April 2006. The five hospitals reported actual occupancy rates that ranged from 70 percent to 89 percent (70, 75, 85, 86, and 89 percent). According to hospital officials, the greatest need was for medical/surgical care, adult critical care, and psychiatric care beds. For example, on April 25, 2006, the occupancy rate was 95 percent for medical/surgical care, 96 percent for adult critical care, and 100 percent for psychiatric care, compared with rates of 68 percent and 71 percent for obstetrics care and pediatrics care, respectively. (See table 2.) Hospital officials also told us that inpatient psychiatric care beds were frequently not available in the greater New Orleans area and that psychiatric patients were the only type of patients who had to be transferred out of the greater New Orleans area because of a lack of beds. For example, an official at one hospital reported that since Hurricane Katrina the demand for psychiatric services has overwhelmed that hospital's 15-bed psychiatric unit, and the hospital has had to house up to eight psychiatric patients in the emergency department at one time until psychiatric beds could be found in other facilities. An official at another hospital reported that psychiatric patients have sometimes stayed in the emergency department for several days until an inpatient psychiatric bed could be found for them somewhere else in Louisiana.
An official at a third facility stated that the facility's caseworkers frequently spent all day calling other facilities in the state looking for an inpatient psychiatric bed. In one case, workers made 39 telephone calls before locating a facility that would accept the patient. Occupancy rates increased following Hurricane Katrina not only because of the loss of staffed beds but also because patients on average have been staying in the hospital longer. According to hospital officials, the average length of stay has increased by about one-half day because there is a shortage of facilities to which patients can be discharged, such as skilled nursing facilities and long-term care facilities. In addition, because of the extensive destruction of housing, many patients may not have appropriate housing to which they can return. According to a recent report prepared for the Louisiana Recovery Authority Support Foundation, a single-day increase in the average length of stay drives occupancy rates up about 15 percent. Hospital officials reported that recruiting, hiring, and retaining nurses and support staff, such as nursing aides, housekeepers, and food service workers, to staff the available beds constituted a great challenge. The officials told us that the demand for nurses was greater than the supply because (1) many nurses left the greater New Orleans area during and after the storm, (2) there was an insufficient supply of suitable housing for nurses, and (3) local nurses were being recruited by facilities outside the greater New Orleans area. According to officials, the hospitals have been able to reopen beds and keep them open by having employees work overtime and by paying higher salaries for permanent and temporary contract staff. However, a shortage of skilled workers remains. For example, an official at one hospital reported that the hospital had to temporarily suspend its open heart surgery program because of its inability to hire operating room nurses and technicians with experience in open heart surgery, even after offering a salary increase of over 30 percent. Officials also stated that competition from nonhospital employers for unskilled workers made it difficult for the hospitals to hire and retain such workers. For example, whereas the average hourly rate for food service workers was about $7 per hour before Hurricane Katrina, fast food restaurants are currently offering about $12 per hour, with one restaurant chain, for example, offering a signing bonus of about $6,000. The hospitals that remained open after Hurricane Katrina have reported a high demand for services in their emergency departments. Data reported by some of the hospitals showed that wait times for emergency medical service vehicles to move stable patients from the vehicle into the emergency department varied from no wait at one hospital to almost 40 minutes at another for the 30 days between March 28 and April 26, 2006. During the same 30-day period, four of these hospitals reported that their emergency departments were occasionally at capacity and therefore temporarily diverted patients to other facilities. The four emergency departments temporarily diverted patients 8 to 26 times; three of the departments reported being in diversionary status from 5 to 48 hours.
Over this same period, officials from six of the nine hospitals also reported that an average of 7 patients per day had to be housed in the emergency department until a hospital bed became available after a decision had been made to admit them to the hospital. This figure ranged from 1 patient per day at one hospital to 18 patients per day at another. By comparison, demand for emergency medical services in other parts of the country is also high. For example, the Institute of Medicine reported in June 2006 that emergency department crowding was a nationwide problem, with the number of visits having grown by 26 percent from 1993 to 2003. The Institute of Medicine also reported that patients are often boarded in the emergency department for 48 hours or more until an inpatient bed becomes available. Furthermore, an April 2002 study conducted for the American Hospital Association found that officials at many hospitals in urban areas described their emergency departments as operating at or above capacity. In addition, we reported in March 2003 that because of a lack of inpatient beds, about 2 in 10 of the 1,489 hospitals we surveyed temporarily diverted patients from their emergency departments more than 10 percent of the time—or about 2.4 hours or more per day—and nearly 1 in 10 hospitals temporarily diverted patients from their emergency departments more than 20 percent of the time—or about 5 hours per day. In our March 2003 report, hospital officials cited economic reasons for the lack of inpatient beds, including financial pressures and the inability to staff the available beds because of difficulty in recruiting nurses or the increased cost of hiring contract nurses. We also reported that for about 1 in 5 hospitals, the average time that patients remained in the emergency department after a decision was made to admit them as inpatients or transfer them to other facilities was 8 hours or more. FEMA and LSU have prepared damage assessments and cost estimates for University and Charity hospitals. FEMA's cost estimates for repairs at Charity and University hospitals are considerably lower than LSU's estimates. While repairs are under way to reopen portions of University Hospital beginning this fall, as of July 2006, LSU had no plans to reopen Charity Hospital. Rather, LSU intends to pursue the possibility of building a new facility, in collaboration with VA. Meanwhile, LSU has established temporary facilities to provide some of the hospital functions previously provided by the two hospitals. For example, LSU established the MCLNO Emergency Services Unit, which is located in a former department store, and opened a trauma center at the Elmwood Medical Center. LSU's cost estimates for repairing Charity and University hospitals are considerably higher than FEMA's estimates. Shortly after Hurricane Katrina struck the greater New Orleans area, LSU hired ADAMS Management Services Corporation (ADAMS) to assess the condition of the two hospitals. In addition to identifying safety and health issues with respect to physical construction and deficiencies, ADAMS was tasked with recommending specific corrective measures, including cost estimates, to make it feasible to restore the hospitals to a usable condition. ADAMS completed its assessment in November 2005. According to the ADAMS assessment, Charity and University hospitals' structural systems, such as columns, beams, and flooring, were in functional condition, although further testing would be required to verify this condition.
However, the mechanical, electrical, and plumbing systems were beyond repair, and there were significant environmental safety problems. ADAMS estimated the repair costs at $257.7 million for Charity Hospital and $117.4 million for University Hospital. ADAMS also estimated replacement costs at $395.4 million for Charity Hospital and $171.7 million for University Hospital. On the basis of these estimates, ADAMS determined that repair costs exceeded 50 percent of the replacement costs for the two hospitals. As a result, LSU officials told us they believed that the hospitals met the Public Assistance program criteria for replacement funding and that LSU could obtain 90 percent of the estimated cost to replace Charity and University hospitals through the Public Assistance program. FEMA’s cost estimates for repairing the two hospitals, however, are considerably lower than LSU’s estimates. FEMA completed its initial damage assessment in December 2005. However, FEMA’s initial assessment did not include elevator repairs because the elevators were not accessible at that time. FEMA completed its assessment of the elevators in April 2006. Like the assessment ADAMS did for LSU, FEMA’s initial assessment found mechanical, electrical, and plumbing damage, among other things. FEMA estimated the repair costs, including the elevator repair costs, at $27 million for Charity Hospital and $13.4 million for University Hospital. FEMA also estimated replacement costs at $147.7 million to $267.3 million for Charity Hospital and $57.4 million to $103.9 million for University Hospital. From these estimates, FEMA determined that the repair costs did not exceed 50 percent of the replacement costs for the two hospitals. (See table 3 for a comparison of LSU’s and FEMA’s repair and replacement estimates.) Two significant factors contribute to the differences between LSU’s and FEMA’s cost estimates. First, LSU’s cost estimates cover whole building repair, meaning that they include costs for damage from Hurricane Katrina and many deficiencies that had been identified before the hurricane. For example, LSU’s estimates include costs for installing fire-rated doors and frames in all exit corridors throughout University Hospital, the lack of which was identified in 2003 as a problem that needed to be addressed. In contrast, FEMA’s estimates for Charity and University hospitals cover the repair costs for damage from flooding and wind only, since these are the only repair costs eligible for federal reimbursement under the Public Assistance program. Prior deficiencies are generally not eligible for reimbursement. Second, LSU’s estimates also included a 66 percent cost escalation over a commonly used index of labor and material for New Orleans. The cost escalation was meant to anticipate material and labor shortages over the next 3 to 6 years as a result of the hurricane. FEMA’s estimates, in contrast, did not include a cost escalation for labor and material. According to FEMA, three of the five bids for a recently awarded contract for the New Orleans Arena were below the federal government estimate. Based on those bids, FEMA concluded that a cost escalation for labor and material inflation was not justified. State officials disputed FEMA’s cost estimates of the hurricane damage to Charity and University hospitals. LSU maintained that these hospitals are not repairable, as defined by federal regulation. 
Specifically, LSU maintained that the cost of repairing the hospitals to their predisaster condition exceeded 50 percent of the cost of replacing the hospitals and that it was not feasible to repair the hospitals so that they could perform the functions for which they were being used immediately prior to the disaster. In a November 2005 letter to Vice Admiral Thad Allen, LSU noted that "It is not feasible to repair these facilities to restore the design, function, and capacity, as well as all required code and standard upgrades, at a reasonable cost." LSU further suggested in the letter that FEMA's estimated costs were too low, noting that FEMA's estimates did not include all eligible expenses that might be incurred in completing the repairs, such as those associated with compliance with the Americans with Disabilities Act (ADA). For example, the ADAMS assessment includes accessibility upgrades to bring Charity and University hospitals into compliance with current ADA requirements, including upgrades to the restrooms, telephones, and drinking fountains. Officials from OFPC, which administers the design and construction of all Louisiana state-owned facilities damaged in Hurricane Katrina, also told us that FEMA's estimates for the two hospitals were too low and did not reflect the current market conditions (i.e., the shortage of labor and material). Officials from both LSU and OFPC provided several examples of FEMA's underestimating the costs of repairs for facilities in the greater New Orleans area. For example, FEMA estimated the costs for repair to the engineering building on the University of New Orleans campus at about $286,000; the contract was awarded for about $689,000. However, FEMA officials cautioned against using differences in estimated and actual repair costs for other facilities as benchmarks for comparing or adjusting the estimates for Charity and University hospitals, noting that each facility and its associated estimate are unique. To help reconcile FEMA's and LSU's cost estimates, FEMA officials suggested that LSU select a few projects at Charity Hospital and put them out for bid. According to FEMA officials, this process would provide actual repair costs and could serve as a baseline for adjusting LSU's or FEMA's estimates as needed. FEMA officials noted that some repair projects at Charity Hospital would be necessary even if LSU opted to replace, not repair, the facility. Officials from LSU and OFPC questioned whether this would be the best use of time and resources, however, especially because they did not believe that restoring Charity Hospital to its predisaster condition would adequately meet the community's health care needs. Nevertheless, a senior OFPC official told us that OFPC would evaluate whether some repairs were necessary to prevent further deterioration of the facility. FEMA has begun the process of obligating funds based on its assessments. As of June 16, 2006, FEMA had obligated about $21.5 million for repairs to Charity Hospital and $14.3 million for repairs to University Hospital. The funds are allocated to Louisiana's Office of Homeland Security and Emergency Preparedness (i.e., the grantee), which then distributes the funds to LSU (i.e., the applicant) for reimbursement for the costs of repairing Charity and University hospitals.
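The dispute described above turns on a single ratio test under the Public Assistance program. The sketch below applies the 50-percent rule to the two sets of estimates as reported; it is our illustration, not FEMA's or LSU's analysis, and FEMA's replacement ranges are taken at their low end, which yields the ratio most favorable to a replacement determination.

```python
# Applying the Public Assistance program's 50-percent rule to the two
# sets of estimates reported above (all figures in millions of dollars).
# FEMA's replacement estimates are ranges; the low end is used here,
# which produces the highest possible ratio for those estimates.

def repair_ratio(repair, replacement):
    """Repair cost as a share of replacement cost. Replacement funding
    may be available when this ratio exceeds 50 percent."""
    return repair / replacement

estimates = {
    "ADAMS/LSU, Charity Hospital":    (257.7, 395.4),
    "ADAMS/LSU, University Hospital": (117.4, 171.7),
    "FEMA, Charity Hospital":         (27.0, 147.7),
    "FEMA, University Hospital":      (13.4, 57.4),
}

for label, (repair, replacement) in estimates.items():
    ratio = repair_ratio(repair, replacement)
    verdict = "exceeds" if ratio > 0.5 else "does not exceed"
    print(f"{label}: {ratio:.0%} {verdict} the 50 percent threshold")
```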
At the time of our visit in May 2006, repairs to University Hospital were under way, and portions of the facility were expected to reopen by late September or early October 2006, with the remainder of the facility expected to open by the end of the year. Initially, LSU officials had hoped to reopen a portion of the facility by the end of June 2006. However, according to LSU officials, estimates for reopening a portion of the facility in June—which assumed a 75-day construction schedule—were optimistic given the amount of repair work needed. An official from OFPC told us that several contractors estimated it would take 180 days to complete the work, which was more than 3 months longer than LSU requested. LSU and the winning contractor ultimately negotiated a 120-day construction schedule. According to this new schedule, LSU plans to reopen portions of University Hospital, including inpatient beds, a pharmacy, and a blood bank, in fall 2006. In addition, LSU plans to convert space on the first floor of the hospital for a Level I trauma center. This work is scheduled to be completed by the end of 2006. However, officials from LSU and OFPC stated that the schedule is subject to change, depending on the availability of resources and the ability of the contractor to complete the repair work on time. In addition, although LSU plans for University Hospital to be fully operational by the end of the year, a senior LSU official told us that LSU is pursuing the possibility of a new hospital that would allow it to close University Hospital in the future. According to this official, the building is near the end of its useful life. While repairs to University Hospital are under way, LSU currently has no plans to reopen Charity Hospital. Charity Hospital sustained significant damage as a result of Hurricane Katrina, in large part because of the flooding that occurred in the basement. In addition, according to officials from LSU and OFPC, the facility was antiquated prior to Hurricane Katrina and was not well suited for a modern acute care medical facility. As a result, LSU does not want to invest significant resources in repairing the facility and would prefer to invest available funding in constructing a replacement facility. If LSU decides to replace Charity Hospital, LSU is authorized under the Public Assistance program to use funds approved for repair, including the $21.5 million already obligated, on a replacement facility. However, the amount eligible for reimbursement cannot be greater than 90 percent of FEMA’s initial cost estimates for repairs. Prior to Hurricane Katrina, LSU had decided to support the construction of a new facility to replace both University and Charity hospitals, and it was seeking funding for the project when the storm occurred. LSU continues to support this option and has taken some initial steps, in collaboration with VA, to plan for a new facility. Like LSU’s Charity and University hospitals, VA’s New Orleans Medical Center sustained extensive damage as a result of Hurricane Katrina, and VA has determined that the existing facility is no longer suited for providing patient care. As a result, VA is also proposing to construct a new facility. LSU and VA formed the Collaborative Opportunities Study Group (COSG) to study options for constructing a new joint hospital facility. In its June 2006 report, COSG recommended a “collaborative complex”—that is, separate VA and LSU bed towers connected by a corridor that houses facilities and services used by both entities. 
According to the June report, a collaborative complex would be more cost-effective than LSU and VA operating stand-alone facilities. Following Hurricane Katrina, LSU established several temporary facilities to continue meeting the health care needs of the population in the greater New Orleans area and to fulfill its mission of providing care to the uninsured. Two key temporary facilities are the MCLNO Emergency Services Unit and the trauma center at the Elmwood Medical Center. The MCLNO Emergency Services Unit is located in a former department store in downtown New Orleans. It was originally established in the parking lot of University Hospital in October 2005, moved to the Ernest N. Morial Convention Center in November 2005, and moved again to its current location in March 2006. According to LSU officials, the MCLNO Emergency Services Unit provides a variety of outpatient services, including minor emergency services, dental care, radiology services, and services for victims of sexual assault, among others. According to LSU officials, the facility is not equipped to provide major emergency services. In order to accommodate the services being provided, LSU set up cubicles and tents to serve as treatment rooms, storage, conference rooms, and offices. LSU plans to close the MCLNO Emergency Services Unit in October 2006, when University Hospital is reopened. LSU is also leasing space for a trauma center from the Ochsner Clinic Foundation at its Elmwood Medical Center. LSU opened the facility on April 24, 2006, to provide the trauma services previously offered at Charity Hospital, which had served as the only Level I trauma center in the region. According to LSU officials, the trauma center at Elmwood Medical Center houses a blood bank, laboratory, pharmacy, and treatment rooms, among other things. In addition, computed tomography and magnetic resonance imaging services are provided in mobile trailers on the grounds of the facility. LSU's lease for this space expires at the end of 2006. HHS officials said that the agency's efforts to restore hospitals' health care infrastructure following Hurricane Katrina included financial assistance, technical assistance, and waivers that allow exceptions to some program requirements. HHS financial assistance included two opportunities for hospitals to receive additional funds for infrastructure repair: SSBG funds, which may be used to repair or rebuild health care facilities, and a Medicare extraordinary circumstances exception, which allows damaged hospitals to receive payment for capital costs. SSBG funds generally cannot be used for construction; however, the Department of Defense, Emergency Supplemental Appropriations to Address Hurricanes in the Gulf of Mexico, and Pandemic Influenza Act, 2006, enacted December 30, 2005, specifically authorized the use of SSBG funds appropriated by that act for the repair, renovation, and construction of health facilities. The act appropriated an additional $550 million to the SSBG program, from which HHS designated about $221 million for Louisiana in February 2006.
In addition, four applications were submitted to CMS for assistance to hospitals in the greater New Orleans area under the Medicare extraordinary circumstances exception, which provides additional payments for unanticipated capital expenditures that exceed $5 million (after taking into account proceeds from other sources, such as insurance or FEMA aid) and result from extraordinary circumstances, such as hurricanes. The provision does not provide a lump sum payment up front; instead, it allows eligible hospitals that serve Medicare patients to depreciate the cost of the unanticipated capital expenditures over the life of the asset, once repairs have been made. Charity and University hospitals (submitting a joint application), East Jefferson General Hospital, Tulane University Hospital and Clinic, and Ochsner Medical Center have applied for this funding. As part of the approval process, HHS requested that each hospital provide a plan and a schedule for submission of documents to support its exception request. As of June 8, 2006, only Charity and University hospitals had provided estimates of their expected capital expenditures, which they set at approximately $900 million, an HHS official said. HHS technical assistance to Louisiana related to restoring the health care infrastructure includes both ongoing and planned efforts. Since Hurricane Katrina, HHS has assigned staff members to assist hospitals and other state and local entities in Louisiana in evaluating health care challenges and identifying available resources. For example, HHS staff members did the following:
• Provided consultation services at Orleans Parish health planning committee meetings that addressed shortages of staff, hospital beds, and funding. As a result, an immediate need for registered nurses was identified, and HHS, in coordination with VA, arranged for 12 to 20 VA registered nurses, on 2- to 4-week rotations through mid-April 2006, to provide emergency room, medical-surgical, and intensive care unit services at Tulane University Hospital and Clinic.
• Conducted joint weekly teleconferences beginning in January 2006 with the Joint Commission on Accreditation of Healthcare Organizations, state survey agencies, and hospital and other health care providers to coordinate the application of accreditation standards for hospitals that were providing care in temporary facilities or in facilities damaged by the hurricanes.
• Facilitated meetings between St. Bernard Parish and a nonprofit medical center that led to the opening of a new primary and urgent care facility in April 2006 after the parish lost all its health care facilities during Hurricane Katrina.
Additionally, since Hurricane Katrina, HHS officials have chaired two federal interagency working groups, the President's Health Care: Chronic Care and Facilities Restoration Workgroup and HHS's Gulf Coast Recovery Working Group. The President's Health Care: Chronic Care and Facilities Restoration Workgroup produced two major working papers in 2006: a summary of the federal payments available for providing health care services and rebuilding health care infrastructure after Hurricane Katrina, and a document that sets out guiding principles for the federal government in the rebuilding process.
The federal payments summary served as the basis for two all-day interagency workshops, sponsored by HHS and Louisiana and held in New Orleans on January 10, 2006, and February 9, 2006, at which local and regional health care providers and elected officials could identify available federal resources and receive technical assistance in accessing them. While the President's Health Care: Chronic Care and Facilities Restoration Workgroup has disbanded, many of its members have been included in meetings of the Gulf Coast Recovery Working Group. The Gulf Coast Recovery Working Group is an HHS staff-level group that meets regularly to resolve issues and offer advice on how to improve HHS programs supporting the recovery efforts. The Gulf Coast Recovery Working Group also began working with the Department of Homeland Security's Office of the Federal Coordinator for Gulf Coast Rebuilding shortly after the office was established on November 1, 2005, by Executive Order 13390 to lead the federal response. The Gulf Coast Recovery Working Group reports to the HHS Secretary and provides input to, and coordinates on a policy level with, the Federal Coordinator. Planned technical assistance is part of a broader effort to redesign the entire continuum of Louisiana's health care delivery system, from primary care clinics to the restoration of hospital inpatient care and emergency department services in the greater New Orleans area, HHS officials said. HHS plans to provide technical assistance to the Louisiana Healthcare Redesign Collaborative (Collaborative), a state and locally led effort to redesign the health care delivery system in Louisiana, including the existing hospital system. HHS's Office of the Secretary expects to provide technical staff, guidance, and funds to support the redesign effort. In an address before the Louisiana state legislature on April 25, 2006, the Secretary of HHS committed to participating in the redesign effort but emphasized that it must be locally led and governed according to guiding principles endorsed by all participants. A charter, signed July 17, 2006, places the Collaborative under the authority of the Louisiana Department of Health and Hospitals and includes guiding principles. To help coordinate its technical assistance to the Collaborative, HHS has hired a full-time senior advisor to the Secretary and plans to provide part-time staff from across HHS agencies. HHS officials said that the agency expected to work with the Collaborative to develop a health care system recovery proposal that could include requests for Medicare demonstrations and Medicaid waivers. They also said that they expected the redesign effort to produce a more efficient and effective health care delivery system in Louisiana. HHS officials noted that prior to Hurricane Katrina, Louisiana had one of the most expensive health care systems in the United States, but that it generally ranked close to the bottom among states in terms of health care quality indicators. The Secretary of HHS has waived or modified various statutory and regulatory requirements to assist hospitals and other health care providers in states in which he had declared a public health emergency. For example, certain Medicare billing and other requirements were waived or modified to accelerate Medicare payments in the hurricane-affected states, including Louisiana.
Under the waivers, HHS has
• paid hospitals the inpatient acute care rate for Medicare patients who remained in a hospital but no longer required acute-level care, until they could be discharged to an appropriate facility;
• relaxed the data requirements to substantiate payment to the provider when a facility's records were destroyed;
• allowed hospitals to have a responsible physician (e.g., the chief of medical staff or department head) sign an attestation of services provided when the attending physician could not be located; and
• instructed its payment processing contractors to immediately process requests for accelerated payments for health care providers, including hospitals, affected by the hurricane.
In addition, after HHS received inquiries concerning whether hospitals could provide free office space, low-interest or no-interest loans, or other arrangements to assist physicians displaced by Hurricane Katrina, the Secretary permitted CMS to waive sanctions for violations of the physician self-referral prohibition, known as the Stark Law, through January 31, 2006. This time-limited relief concerns statutory prohibitions against a physician referring Medicare patients to an entity with which the physician or a member of the physician's immediate family has a financial relationship. HHS officials said that a waiver had been approved for one hospital in the greater New Orleans area for one physician. HHS officials said that few HHS programs or activities are designed to help address the restoration of hospital inpatient care and emergency department services in the greater New Orleans area. The department does not have broad authority to respond to the needs of hospitals affected by a disaster, HHS officials said. They cited several issues that limit the agency's ability to provide this type of assistance. First, agency officials emphasized that HHS's role in financing health care services does not easily translate into providing restoration assistance after a disaster. Second, HHS must consider whether proposed responses to problems identified in the greater New Orleans area could adversely affect other areas of the country. For example, Louisiana has requested that HHS adjust the wage index used in determining Medicare prospective payments to hospitals to account for the higher wages that must be paid to attract or maintain health care workers, including nurses and physicians, in the greater New Orleans area. However, HHS officials said that by law, changes to the wage index must be "budget neutral"; practically, this means that if the wage index is increased for the greater New Orleans area, it must be decreased for another area. We sent a draft of this report for comment to DHS, HHS, VA, and the State of Louisiana. Excerpts from it were also sent to LSU for comment. HHS agreed with the draft report, and its comments are included as appendix II. VA informed us by e-mail that it agreed with the draft report. DHS also responded by e-mail and informed us that it had no formal comments on the draft report. DHS, HHS, and VA also provided technical comments, as did Louisiana's Department of Health and Hospitals through an e-mail response. We considered all technical comments and incorporated those that were appropriate. LSU did not provide comments. We are sending copies of this report to the Secretaries of Homeland Security, Health and Human Services, and Veterans Affairs and other interested parties.
We will also make copies available to others on request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staffs have any questions about this report, please contact Cynthia Bascetta at (202) 512-7101 or bascettac@gao.gov for issues related to health services. Please contact Terrell G. Dorn at (202) 512-6923 or dornt@gao.gov for issues related to medical facilities and FEMA. GAO staff members who made significant contributions to this report are listed in appendix III. To examine the availability of hospital inpatient care and the demand for emergency department services, we contacted nine operating public and private hospitals in the greater New Orleans area. We randomly selected one day—April 25, 2006—and asked hospital officials to provide information on the number of available, staffed, and occupied beds for that day, by type of patients served, such as critical care, medical and surgical, and pediatrics. We later asked for the number of available, staffed, and occupied beds for the entire month of April; however, only five hospitals responded to this request. From the hospital officials we also obtained estimates of the occupancy rates for the 12-month period prior to, and the 9-month period following, Hurricane Katrina for eight of the nine open hospitals. We weighted the estimated hospital occupancy rates by the number of staffed beds to obtain a weighted average. Further, we asked about plans to open more beds and about emergency department services provided for the 30-day period from March 28, 2006, through April 26, 2006. We conducted telephone interviews with senior officials from seven of the nine hospitals to clarify information provided in their written responses to our survey. We did not independently verify the data the hospitals provided on bed availability and the amount of emergency care provided. To determine the April 2006 population of the four parishes in the greater New Orleans area, we used estimates from the Louisiana Department of Health and Hospitals Bureau of Primary Care and Rural Health, which used two methodologies to estimate the population in each of the parishes. It used school enrollment data for Jefferson, St. Bernard, and Plaquemines parishes, and for Orleans Parish it used a survey of persons occupying residential structures. The survey had been conducted by the New Orleans Health Department in consultation with the Centers for Disease Control and Prevention. We limited our work to examining the status of hospital inpatient and emergency departments in the greater New Orleans area and did not examine other aspects of hospital services, such as outpatient services or the financial condition of the hospitals. We also did not address other issues related to the health care system, such as the status of primary care, medical research, or graduate medical education. To examine the Federal Emergency Management Agency (FEMA) and Louisiana State University (LSU) efforts to reopen Charity and University hospitals, we reviewed LSU and FEMA damage assessments and cost estimates for the facilities, FEMA regulations and guidance, and the Department of Veterans Affairs' (VA) damage assessment of its medical center in New Orleans. We toured Charity and University hospitals and the temporary facilities LSU has established to provide hospital outpatient care and emergency department services.
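The bed-weighted averaging step described above can be made concrete with a short sketch. All figures in it are invented for illustration and are not data from the hospitals we surveyed.

```python
# Bed-weighted average occupancy: each hospital's occupancy rate counts in
# proportion to its number of staffed beds, so larger hospitals weigh more.
# All figures below are hypothetical, for illustration only.
hospitals = [
    {"staffed_beds": 400, "occupancy_rate": 0.85},
    {"staffed_beds": 150, "occupancy_rate": 0.70},
    {"staffed_beds": 250, "occupancy_rate": 0.92},
]

total_beds = sum(h["staffed_beds"] for h in hospitals)
weighted_average = (
    sum(h["staffed_beds"] * h["occupancy_rate"] for h in hospitals) / total_beds
)
print(f"{weighted_average:.1%}")  # 84.4% for these hypothetical inputs
```

Weighting by staffed beds keeps a small hospital's unusually high or low rate from skewing the area-wide figure.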
We interviewed officials from FEMA; LSU (including LSU's Health Care Services Division, which manages the public hospitals in the greater New Orleans area); VA, because it is considering building a joint hospital complex with LSU in New Orleans; the Louisiana Recovery Authority, because it is the coordinating body created by the Governor of Louisiana in the aftermath of Hurricane Katrina to plan recovery and rebuilding efforts; and Louisiana's Office of Facility Planning and Control, because it is administering the design and construction of all Louisiana state-owned facilities damaged by Hurricane Katrina. We did not independently verify the damage assessments prepared by FEMA and LSU. We limited our review to the efforts to restore state-owned public hospital facilities. To determine the activities that the Department of Health and Human Services (HHS) has undertaken to help hospitals recover in the greater New Orleans area, we interviewed officials in various HHS agencies, including officials in the Centers for Medicare & Medicaid Services headquarters and Dallas and Atlanta regional offices, the Health Resources and Services Administration, the Administration for Children and Families, and the Office of Public Health Emergency Preparedness. Additionally, we reviewed documents and summaries outlining HHS programs and activities related to helping restore hospital inpatient care and emergency department services after a disaster. Finally, we reviewed applicable federal law and regulations. We conducted our work from April 2006 through September 2006 in accordance with generally accepted government auditing standards. In addition to the contacts named above, key contributors to this report were Michael T. Blair, Jr., Assistant Director; Nikki Clowers, Assistant Director; Karen Doran, Assistant Director; Jonathan Ban; Michaela Brown; Nancy Lueke; Roseanne Price; and Cherie Starck.
In the aftermath of Hurricane Katrina, questions remain concerning the availability of hospital inpatient care and emergency department services in the greater New Orleans area—which consists of Jefferson, Orleans, Plaquemines, and St. Bernard parishes. Because of broad-based congressional interest, GAO, under the Comptroller General's statutory authority to conduct evaluations, assessed efforts to restore the area's hospitals by the Department of Homeland Security's (DHS) Federal Emergency Management Agency (FEMA); the Department of Health and Human Services (HHS); and the Louisiana State University (LSU) public hospital system, which operated Charity and University hospitals in New Orleans. GAO examined (1) the availability of hospital inpatient care and the demand for emergency department services, (2) steps taken to reopen Charity and University hospitals, and (3) the activities that HHS has undertaken to help hospitals recover. To fulfill these objectives, GAO reviewed documents and interviewed federal officials and hospital, state, and local officials in the greater New Orleans area. GAO also obtained information on the number of inpatient beds for April 2006, which were the most recent data available when GAO did its work. GAO's work did not include other issues related to hospitals, such as outpatient services or financial condition. While New Orleans continues to face a range of health care challenges, hospital officials in the greater New Orleans area reported in April 2006 that a sufficient number of staffed inpatient beds existed for all services except for psychiatric care—some psychiatric patients had to be transferred out of the area because of a lack of beds. Overall, GAO determined that the area had about 3.2 staffed beds per 1,000 population, compared with a national average of 2.8 staffed beds per 1,000 population. Hospital officials told GAO they planned to open an additional 674 staffed beds by the end of 2006, although they reported that recruiting, hiring, and retaining nurses and support staff was a great challenge. With these additional beds, the population would have to increase from 588,000 in April 2006 to 913,000 by December 2006 before staffed beds would drop to the national average. Hospitals also reported a high demand for emergency services, consistent with a June 2006 Institute of Medicine report, which found that emergency department crowding is a nationwide problem. Steps have been taken to reopen University Hospital, but as of July 2006, LSU had no plans to reopen Charity Hospital. LSU plans to open portions of University Hospital in fall 2006 and would like to replace both hospitals with a new one. LSU and FEMA have prepared cost estimates to repair these hospitals. For Charity Hospital, FEMA's estimate of $27 million is much lower than LSU's estimate of $258 million, which covers, for example, repairing hurricane damage and correcting many prestorm deficiencies. In contrast, FEMA's estimate covers repairs for hurricane damage only—the only repair costs eligible for federal reimbursement. HHS provided financial assistance and waived certain program requirements to help hospitals recover in the area. For example, HHS designated $221 million in hurricane relief funds for Louisiana through Social Services Block Grants, which may be used in part to reconstruct health care facilities.
HHS also waived certain Medicare billing and other requirements and accelerated Medicare payments to providers, including hospitals, in the hurricane-affected states. Rebuilding the health care infrastructure of the greater New Orleans area will depend on many factors, including the health care needs of the population that returns to the city and the state's vision for its future health care system. In light of the current sufficiency of hospital beds for most inpatient services, GAO believes a major challenge facing the greater New Orleans area is attracting and retaining enough nurses and support staff. HHS and the Department of Veterans Affairs (VA) agreed with the draft report. DHS said it had no formal comments on the draft. HHS, VA, DHS, and Louisiana's Department of Health and Hospitals provided technical comments, which GAO incorporated where appropriate. LSU did not provide comments.
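The staffed-bed arithmetic in the summary above can be verified with a quick calculation. In the sketch below, the per-1,000 ratios, the April 2006 population, and the 674 planned beds come from the report; the intermediate bed counts are derived, rounded values rather than reported figures.

```python
# Rough check of the staffed-bed figures cited in the summary.
april_population = 588_000          # April 2006 estimate from the report
beds_per_1000_april = 3.2           # greater New Orleans area, April 2006
national_average_per_1000 = 2.8
planned_additional_beds = 674

current_beds = beds_per_1000_april / 1000 * april_population  # ~1,882 beds
future_beds = current_beds + planned_additional_beds          # ~2,556 beds

# Population at which the expanded bed count would equal the national average:
breakeven_population = future_beds / national_average_per_1000 * 1000
print(f"{breakeven_population:,.0f}")  # 912,714, i.e., roughly the 913,000 in the text
```

The roughly 913,000 figure is simply the population at which the expanded supply of staffed beds would match the 2.8-per-1,000 national average.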
ERISA and the IRC require administrators of pension and welfare benefit plans (collectively referred to as employee benefit plans) to file annual reports concerning, among other things, the financial condition and operation of plans. Labor, IRS, and PBGC jointly developed the Form 5500 so that plan administrators can satisfy this annual reporting requirement. The requirements for completing the form vary according to the type of plan. If a company sponsors more than one plan, it must file a Form 5500 for each plan. Additionally, ERISA and the IRC provide for the assessment or imposition of penalties by Labor and IRS on plan sponsors that do not submit the required information when due. Form 5500 Reports are shared among Labor's Employee Benefits Security Administration (EBSA), IRS, PBGC, and the Social Security Administration (SSA), and each agency uses the Form 5500 to meet its statutory obligations. EBSA is responsible for the administration and enforcement of ERISA, and its primary purpose is to protect the pension, health, and other benefits of participants in private sector employee benefit plans. IRS oversees the tax code provisions of the law. PBGC is a federal government corporation that guarantees the payments of pension plan benefits to participants in the event that covered defined benefit pension plans terminate while underfunded. SSA is responsible for notifying each new Social Security or Medicare claimant for whom it has pension benefit information that the claimant may be entitled to benefits from a private pension plan. Form 5500 Reports are also made available to other federal agencies and researchers through Labor. Once the forms for a given plan year are processed by EFAST, they are available for enforcement and public disclosure purposes. In addition, after the forms are edited by Labor, the information is compiled into a database of usable computerized Form 5500 information, known as the research file, which includes information from all plans with over 100 participants and a 5 percent sample of the smaller plans. The research file is used by various federal agencies and pension researchers for conducting policy research and developing government statistics. Beginning with plan year 1999, EBSA assumed the administrative responsibility for accepting all Form 5500 filings, electronic and otherwise, which had previously been filed with IRS. As part of the switch, Labor, IRS, and PBGC adopted EFAST, which was designed to expedite the receipt and processing of Form 5500 filings by relying on paper forms and electronic filing technologies. Collectively, all three agencies have authority to mandate the electronic filing of the Form 5500. There are various types of Form 5500 filers. Filers are classified as single-employer plans, multiemployer plans, multiple-employer plans, or direct filing entities (DFEs). In general, a separate Form 5500 must be filed for each plan or DFE. Single-employer plans are plans that are maintained by one employer or employee organization. Multiemployer plans are established pursuant to collectively bargained pension agreements negotiated between labor unions representing employees and two or more employers, and are generally jointly administered by trustees from both labor and management. Multiple-employer plans are plans maintained by more than one employer and are typically established without collective bargaining agreements. DFEs are trusts, accounts, and other investment or insurance arrangements that plans participate in and that are required or allowed to file the Form 5500.
Filers have a normal deadline of 210 days after the end of the plan year to submit their Form 5500 Reports. For example, under the filing deadlines for plan year 2001, a calendar year filer must file its Form 5500 by July 31, 2002, whereas a non-calendar year plan whose plan year runs from October 1, 2001, through September 30, 2002, would have until April 30, 2003, to file its Form 5500. When the Form 5500 was first developed, nearly 30 years ago, more participants were covered by defined benefit plans than by defined contribution plans. As shown in figure 1, in 2000, defined contribution plans had about 62 million participants, while defined benefit plans had about 41 million participants. As shown in figure 2, as of 1997, assets held by defined contribution plans have exceeded those held by defined benefit plans. As shown in figure 3, as of 2000, employers sponsored over 687,000 defined contribution plans compared with about 49,000 defined benefit plans. Unlike defined contribution plans, where benefits are based on investment returns on individual accounts, benefits provided by defined benefit pension plans are partially insured by PBGC. In the case of a single-employer defined benefit plan, PBGC guarantees benefits when an underfunded plan terminates. For multiemployer defined benefit pension plans, the agency guarantees benefits when a plan becomes insolvent, that is, when a plan's available resources are not sufficient to pay benefits at PBGC's guaranteed level. PBGC's insurance programs and its operations are financed through premiums paid annually by plan sponsors, investment returns on PBGC assets, assets acquired from terminated single-employer plans, and recoveries from employers responsible for underfunded terminated single-employer plans. Premium revenue totaled about $1.485 billion in 2004, of which $1.458 billion was paid into the single-employer insurance program and $27 million into the multiemployer insurance program. This is the highest premium revenue PBGC has ever received. In contrast, in 2004 PBGC paid $3.007 billion in benefit payments and provided over $10 million in financial assistance to insolvent multiemployer pension plans. The termination of several large, underfunded defined benefit pension plans of bankrupt firms in troubled industries has worsened the single-employer program's net financial position. After fluctuating over the last decade, the single-employer insurance program now has a large and growing accumulated deficit, having moved from a $9.7 billion accumulated surplus in 2000 to a $23.3 billion accumulated deficit in 2004. Additionally, the agency's multiemployer insurance program has a current deficit of $236 million. Because of the decline in the financial condition of the single-employer program, GAO placed it on its high-risk list of programs with significant vulnerabilities to the federal government. Detailed information on private pension plans is reported on the Form 5500 and is used by Labor, IRS, and PBGC for compliance, research, and public disclosure purposes. Each agency uses data from Form 5500 Reports primarily as a means to identify actual and potential violations of ERISA and the IRC, as well as for research and policy formulation. Other federal agencies, private sector entities, and researchers also use Form 5500 data in assessing employee benefit, tax, and economic trends and policies. The Form 5500 is also made widely available to the general public.
The Form 5500 is used to collect important information about the financial health and operation of private pension plans. Similar to the structure of an income tax form, the Form 5500 has multiple parts. As shown in table 1, the Form 5500 consists of the main form and 12 schedules. The main part of the form provides basic information to identify the plan and the type of plan. The form's schedules provide more specific information about the plan, such as financial information, actuarial information, and insurance information. Form 5500 schedules are used to collect more in-depth information, including data on assets, liabilities, insurance, and financial transactions. These schedules can be separated into two distinct groups: those that contain financial information on the plan and those that contain information on the benefits that the plan expects to pay out. For example, Schedule H is a key financial schedule and includes an accountant's report along with an audited financial statement of the plan's operations. Information from the financial schedules helps to provide a picture of a plan's financial condition, while the benefit schedules collect information on the contributions to and distributions made from the plan in the current and future years. Information collected on the benefit schedules helps to provide a picture of the pension plan's benefits and benefit promises. Different sizes and types of plans must meet different requirements. (See table 2.) For example, small defined benefit and defined contribution plans must file a Schedule I rather than the more detailed Schedule H required for a large pension plan. Additionally, unlike defined contribution plans, defined benefit plans are required to file Schedule B, including the signature of an Enrolled Actuary attesting to the completeness, accuracy, and reasonableness of the actuarial calculations, along with an attachment of any clarifying material not reported on the schedule itself. Labor, IRS, and PBGC use Form 5500 Reports as a compliance tool to identify actual and potential violations of ERISA and the IRC. Each agency has a unique statutory responsibility and uses the information on the form for monitoring and enforcement purposes. Agency officials said that each agency has developed computerized systems that analyze the reported information to help them ensure that plans are in compliance with applicable laws. Although Labor officials said that the most effective source of leads on violations of ERISA, such as delinquent participant contributions, was complaints from plan participants, computer searches and targeting of Form 5500 information on specific types of plans account for approximately 25 percent of case openings. Labor is currently using plan year 2002 and 2003 Form 5500 information for computer targeting. Labor officials told us that they open about 4,000 investigations into actual and potential ERISA violations annually. Labor officials said an early step when opening an investigation is to review the available Form 5500 Reports to identify names and contact information for the plan, its corporate sponsor, and its plan administrator. Labor officials said they use Form 5500 data to enforce ERISA reporting and disclosure provisions and fiduciary standards. IRS officials told us that they use Form 5500 data to examine plan financial transactions and to target plans for examination. Pension law provides significant tax benefits for sponsors of certain retirement plans and the employees who participate in them.
IRS enforces certain minimum funding requirements of ERISA and the IRC. IRS officials said the purpose of IRS examinations is to ensure that plan sponsors are making contributions to the plan as required, that the assets truly exist to satisfy the liabilities and are classified properly, and that plans are operating in accordance with plan design. IRS can levy penalties, taxes, and interest charges as well as completely disqualify a plan from tax-exempt status if major violations are found. In fiscal year 2004, IRS examined more than 10,700 plans; 91 percent of these examinations were based solely on Form 5500 information, and an additional 5 percent were based in part on Form 5500 information. IRS uses its Returns and Inventory Classification System (RICS) to select plans for review based on Form 5500 Reports. For example, after IRS had determined that a pension practitioner was involved with tax-abusive schemes, it used available contact information listed on the Form 5500 to create a list of over 400 sponsors who had filed their Form 5500 Reports using the address of the practitioner who allegedly designed these schemes. PBGC uses Form 5500 information to monitor both single-employer and multiemployer defined benefit pension plan activities, focusing on assets, liabilities, number of participants, and funding levels. Form 5500 information is also used to forecast PBGC's potential liabilities. PBGC's data on multiemployer plans currently come only from Form 5500 Reports, while single-employer plan data are supplemented with information obtained from other filings and submissions with the government and from corporate annual returns. PBGC officials said the agency is particularly interested in single-employer and multiemployer plans with financial problems. For both types of plans, PBGC officials said they maintain a database of financial information about such plans drawn from Form 5500 Reports, premium filings, and other data in order to determine which plans may be at risk of requiring PBGC financial assistance. PBGC officials also said they use the Form 5500 for participant notice and PBGC insurance premium compliance. For example, PBGC reviews Form 5500 filings to ensure that plan sponsors of underfunded plans report sending participants the required notices of the plan's funding status and the limits of PBGC's benefit guarantee. If a participant notice is not issued as required, agency officials said they may assess penalties. PBGC officials also said they use Form 5500 information on plan type and level of underfunding to help ensure that plans are making the appropriate premium payments (which vary by type of plan and the extent of underfunding). They also said the agency has an "intercept program" arrangement with the EFAST processor. Through this program, PBGC has identified over 2,000 plans it is most interested in and has made arrangements for copies of these Form 5500 filings to be mailed to PBGC before they are processed. Form 5500 information is also used for research and statistical purposes. Labor and PBGC officials told us that Form 5500 information is an integral part of their policy research. Labor officials said that EBSA's Office of Policy and Research (OPR) uses Form 5500 information to assist in developing regulations and to prepare its Private Pension Plan Bulletin. OPR also uses Form 5500 information to develop aggregate pension statistics and conduct economic research on relevant topics.
OPR officials said they plan and administer an employee benefits research and economic analysis program to support EBSA policy and program priorities, respond to requests for data and findings, and provide technical assistance to EBSA offices, other Labor agencies, and external groups. Officials from PBGC's Policy, Research, and Analysis Department said they use Form 5500 data to develop policies for PBGC's insurance programs and conduct related research and modeling. SSA is also a direct recipient of Form 5500 Reports. SSA officials said they receive from pension plans information on name and address changes for plan administrators and on mergers. Plans file the Schedule SSA if they have vested participants who separated from the plan during the prior reporting period. SSA officials said they use the data to notify those participants or their survivors who apply for Social Security that they may have benefits from one or more private pension plans. The Form 5500 is also a source of information that is used by other federal agencies. In our discussions with federal agency officials, we found they use Form 5500 information for government research and for preparing government statistics. For example, some federal agencies use the information in assessing employee benefits and taxes, determining economic trends, and evaluating policies. As shown in table 3, different federal agencies use Form 5500 information for different purposes. Finally, others outside of the federal government use Form 5500 information. Pension researchers told us they use Form 5500 information to determine employer contributions to defined benefit plans, employer pension costs for defined contribution plans, and data on the relationship between collective bargaining and pensions. Additionally, researchers said they have used information from the Form 5500 to determine the extent of cash balance defined benefit plans. Benefit consulting firms also use Form 5500 information. Consultants from one firm told us they use Form 5500 information for a variety of client-sponsored projects, such as studying the time it takes an active participant to become vested and comparing single-employer with multiemployer pension plans. Others said that they repackage and sell information from Labor's Form 5500 data after editing it and verifying contact information for large plans. The Form 5500 is also an important public disclosure document. The public disclosure of the form is a Labor function required by ERISA. According to Labor officials, the form is the only source of detailed financial information available to plan participants and beneficiaries, who upon written request must be furnished a copy of the plan's latest Form 5500 by the plan administrator. Moreover, the form serves as a basis for the Summary Annual Report (SAR), which plan administrators are generally required to furnish to each participant and beneficiary annually. Labor also maintains a public disclosure room so that Form 5500 Reports and related plan information are available to public agencies, private organizations, and individuals for review. Labor officials said that in fiscal year 2004, EBSA's Public Disclosure Office received about 1,800 requests for Form 5500 Reports and provided about 5,200 documents in response to these requests. Labor officials said making the form publicly available is intended to serve as a deterrent to noncompliance with the statutory duties imposed on plan fiduciaries.
EBSA also makes its Form 5500 research file available in electronic format to individuals and groups for research purposes. In addition, separate from the research file, an electronic database of all available publicly disclosable filings is made available in response to Freedom of Information Act requests. Information from Form 5500 Reports is also made available through private parties. For example, electronic facsimiles of publicly available Form 5500 filings can be obtained free of charge at FreeErisa.com. Statutory reporting requirements, EFAST processing issues, and current Labor practices delay the release of Form 5500 information for up to 3 years in some cases. Current statutory requirements allow plan sponsors up to 285 days following the end of their plan year to file their Form 5500 Reports. Once the reports are filed, processing of the reports is slowed by some of EFAST's procedures. Labor's practice of not releasing the research file—Form 5500 information in its most practical form—until it has processed all forms from a plan year results in further delays. Agency officials told us that the timeliness of Form 5500 Reports affects their use of the information. The length of time plan sponsors have to file their Form 5500 Reports is determined by the statutory reporting requirements. Under ERISA, plan sponsors have a normal deadline of 210 days after the end of the plan year to file and may then apply to IRS for an annual automatic one-time 2 ½-month extension. Thus, plan sponsors can take up to 285 days from the end of the plan year to file their Form 5500 Reports. For example, as shown in figure 4, for a calendar year plan that ends on December 31, a plan sponsor has until July 31 to file the Form 5500. If the plan sponsor requests an extension, the new deadline would be October 15. An additional 45-day extension from the normal statutory deadline is also automatically given to corporations receiving an extension on their federal income taxes. However, an extension granted by using this automatic extension procedure cannot be extended further by using the one-time 2 ½-month extension. Labor, IRS, and PBGC may also grant special extensions of time, beyond the 285-day extended deadline, for events such as presidentially declared disasters or for service in, or in support of, the armed forces of the United States in a combat zone. The current statutory filing requirements are also intertwined with other statutory deadlines relating to private pension plans (see fig. 5). For example, under ERISA, certain providers, such as insurance companies and financial institutions, have 120 days after the plan year to provide information to the plan administrator. Then, under the IRC and ERISA, defined benefit plans have up to 8 ½ months after the plan year end to make contributions for minimum funding purposes. Finally, under Treasury regulations, a plan has up to 9 ½ months after the end of the plan year to correct any coverage or nondiscrimination violations, which enables any corrections to be made and timely reflected on Schedule T of Form 5500. Service providers and plan sponsor representatives said that the 210-day time frame with extensions is necessary, given the amount of coordination with other parties that is needed to prepare the form and the obstacles involved. Plan sponsors are ultimately responsible for filing the Form 5500. However, industry association representatives told us that many plan sponsors rely heavily on service providers to help them prepare the form.
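The deadline arithmetic described above (a 210-day base deadline plus a 2 ½-month extension, or 285 days in total) can be sketched in code. The examples in the text suggest that the base deadline falls at the end of the seventh month after the plan year closes; treating it that way, as the function below does, is an assumption adopted to match those examples, not statutory language.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

def form_5500_deadline(plan_year_end: date, extended: bool = False) -> date:
    """Approximate the Form 5500 due date as described in the report: the end
    of the seventh month after the plan year ends (the 210-day deadline),
    plus an optional one-time 2 1/2-month extension (285 days in total)."""
    deadline = plan_year_end + relativedelta(months=7)
    if extended:
        deadline += relativedelta(months=2, days=15)  # the 2 1/2-month extension
    return deadline

print(form_5500_deadline(date(2001, 12, 31)))                 # 2002-07-31
print(form_5500_deadline(date(2002, 9, 30)))                  # 2003-04-30
print(form_5500_deadline(date(2001, 12, 31), extended=True))  # 2002-10-15
```

Run against the plan year 2001 examples above, the function reproduces the July 31, 2002, April 30, 2003, and October 15 dates cited in the text. This 285-day window also frames the preparation timeline discussed next.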
In addition, there are numerous parties that must provide information to the plan sponsor or service provider in order to complete the Form 5500. For example:
• Financial institutions provide information on plan assets held in their custody.
• Insurance companies provide information about any benefits provided through or investments made with them, including commissions and fees paid by the plan sponsor for the year.
• Actuaries are responsible for preparing the Schedule B and attesting that the information and any assumptions being presented are both reasonable and represent the best estimates of anticipated experience under the plan.
• Auditors are required to review plan financial statements as well as any books or records of the plan that they deem necessary. This review enables them to form an opinion as to whether the financial statements and the schedules provided as part of the Form 5500 are presented fairly according to generally accepted accounting principles. They also provide an opinion as to whether the schedules present the information about the plan fairly when examined in conjunction with the financial statements as a whole.
Figure 6 shows an example of the coordination and information flow that must occur for service providers or plan sponsors to obtain the information necessary to complete the Form 5500. Parties involved in filling out the Form 5500 told us they face obstacles that limit the timeliness of form preparation. For example, some officials said that plan sponsors are busy preparing their corporate taxes, closing their books for the year end, and preparing appropriate SEC filings during the first quarter of the year and that, as a result, they are unable to provide information for Form 5500 preparation until March or April at the earliest. Service providers, who often prepare the Form 5500 on behalf of plan sponsors, told us that gathering information from many different parties creates numerous obstacles that can delay preparation. Service providers said that it can be difficult to receive timely information from insurance companies, which is needed to complete the Schedule A. Service providers also said that receiving complete census data from plan sponsors can be difficult and often leads to delays in form preparation because of such problems as merging information from different databases, dealing with non-computerized retiree data, and identifying vested participants who have left the company. Collecting and analyzing census data are further complicated when companies go through mergers, acquisitions, or divestitures, which can cause additional delays. In addition, service providers said that many plan sponsors have outsourced their payroll function, meaning that data must be obtained from yet another party, which adds time. Actuaries said they face certain obstacles that can affect the timeliness of Schedule B preparation for defined benefit plans. These officials said the biggest delay is due to funding rules that allow plan sponsors to make contributions up to 8 ½ months following the close of a plan year. Actuaries said they need to know all of the contributions that have been made in order to certify Schedule B of the form. In addition, actuaries said they must wait for plan sponsors to give them information such as asset valuations, which can take a long time to prepare; as a result, they are generally unable to begin preparing the Schedule B until May or June at the earliest for calendar year plans.
In general, actuaries said that once they have all the information they need, it typically takes them up to 2 months to complete the Schedule B. Audits are typically the last step in the preparation of the Form 5500 and can hold up submission of the form in many cases. Auditors said that pension plan audits are often delayed because auditors are busy performing corporate year-end audit work and preparing corporate tax filings, and therefore they lack the time and resources to begin auditing pension plans until after April at the earliest. Officials from the larger auditing firms said that once they start working on pension plan audits, corporate work still takes precedence, and if issues relating to a corporate audit arise, the pension plan audit is put on hold. In addition, depending on any issues uncovered during the pension plan audit, auditors said they may need to go back to the plan sponsors, service providers, actuaries, or even the insurance companies and financial institutions to seek clarification or additional information. Auditors also said that this back-and-forth can be very time-consuming, and sometimes small issues can hold up an entire audit. Once the audit is completed, it is typically sent back to the service provider, and then the completed Form 5500 Report is signed by the plan sponsor and submitted to EFAST for processing. Figure 7 shows an example of the preparation timeline for all the parties involved in providing information to the service providers in order to prepare Form 5500 Reports, as well as the other requirements that the various parties must meet during this time frame. All of the service providers, actuaries, and auditors we talked to said that given all the various commitments of the parties involved in preparing the Form 5500, it would be very difficult to shorten the Form 5500 filing deadline. Even given the current time frame, filings can get held up past the deadline, and sponsors may be forced to file late. For example, if the actuarial report is not prepared in time to finish the plan audit by the October 15 deadline, a plan will have no choice but to file late or submit an incomplete filing. According to statistics provided by Labor, 11 percent of all filers in 2001 filed late. The submission of numerous paper filings and certain EFAST processes limit the timeliness of Form 5500 Report processing. Labor officials reported that the EFAST system processes approximately 25 million paper pages annually and that 98 percent of filers used paper forms in 2001, the most recent year for which data are available; this figure is consistent with prior years. EFAST officials said that under the current system, all filings are sent by the filers to a central processing facility in Lawrence, Kansas, operated by an outside contractor. Paper filings, once received and properly sorted, are scanned using advanced data capture software, and in some cases the data must be entered manually if the software is unable to process a form. After the forms are scanned and processed, they are run through edit checks, and any errors are corrected. When the processing of the form is considered final, meaning any necessary corrections have been made, the information from the form is posted to the EFAST database. From there, the information is distributed to Labor, IRS, and PBGC on digital media. According to Labor officials, paper filings take more than three times as long as electronic filings to process and have nearly twice as many errors.
As shown in figure 8, the abundance of paper filings results in long processing times, which delay the availability of the forms to the agencies. According to Labor officials and the ERISA Advisory Council's working paper on electronic reporting, the electronic filing option of the current EFAST system has been underutilized by plan sponsors largely because electronic filing is entirely voluntary. In addition, service providers told us there are some obstacles to electronic filing. First, they said the current process of obtaining an electronic signature and personal identification number (PIN) is burdensome and time-consuming. For example, to receive a PIN, a plan sponsor must file a paper application with Labor, a process that takes 3 to 4 weeks. Second, plan sponsors reported that there is currently little economic benefit to filing electronically because purchasing the software needed for electronic filing can cost more than generating paper filings, with no corresponding benefit. Third, parties such as actuaries and accountants must sign certain portions of the Form 5500 filings, which complicates the electronic filing process. These officials said they want to ensure that any information developed by them and attributed to them is not altered after it leaves their control. Labor and others have made attempts to address these issues. In 2002, the ERISA Advisory Council issued a report recommending the use of Web-based technologies and a requirement that Form 5500 Reports be filed electronically. Resolving errors on Form 5500 filings, another paper-based process, can add up to 120 days to the processing of a form. EFAST officials said that whether a form is submitted in a paper format or electronically, the process for resolving errors or problems is paper-based. We found that the EFAST system locates errors only after a form has been processed and seeks to resolve them by mailing letters to plan sponsors. Labor will send up to two letters to receive clarification, providing plan sponsors up to 30 days to respond to each letter. In addition, Labor officials estimate that it takes roughly 30 days for mailing and processing, adding up to 60 days in total per letter to the overall processing time of Form 5500 Reports. Once two letters are sent, a filing is marked complete whether or not a resolution was achieved. As shown in figure 9, a Form 5500 Report can be initially processed by January 15, but if there are errors it may not be completed until May 14. Labor officials said they initiated EFAST with the hope of achieving certain advantages provided by an electronic system, including better dissemination of information to the public, better access to data for regulatory agencies, and availability of more current data for participants and beneficiaries. Currently, Labor is looking into a new system to replace EFAST when its contract ends. The new system would build on the gains achieved through EFAST, utilizing Web-based technologies and mandatory electronic filing, as recommended by the 2002 ERISA Advisory Council Working Group report on electronic filing. We found that Labor currently waits until EFAST has processed all filings for a plan year before finalizing work on the Form 5500 research file—Form 5500 information in its most practical form for producing aggregate statistics and conducting policy research.
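The worst-case arithmetic behind the correction cycle described above is straightforward, as the sketch below shows. The 30-day figures and the two-letter limit come from the discussion above; the year is arbitrary, and the one-day difference from the May 14 date in figure 9 depends only on whether the start date itself is counted.

```python
from datetime import date, timedelta

RESPONSE_WINDOW_DAYS = 30    # time a plan sponsor has to answer each letter
MAIL_AND_PROCESS_DAYS = 30   # Labor's estimate for mailing and processing
MAX_LETTERS = 2              # Labor sends at most two letters per filing

worst_case_delay = MAX_LETTERS * (RESPONSE_WINDOW_DAYS + MAIL_AND_PROCESS_DAYS)
print(worst_case_delay)  # 120 days

initially_processed = date(2003, 1, 15)  # arbitrary year, for illustration
print(initially_processed + timedelta(days=worst_case_delay))  # 2003-05-15
```

Because a filing is not considered complete until this cycle ends, these correction delays also push back the research file work described next.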
Labor officials said that waiting for all processing to be completed allows Labor to be more accurate and avoids forcing it to use estimates for information in the research file. Under EFAST, the processing cycle for a plan year lasts 2 years to account for all types of filings, including non-calendar year plan filings. Since non-calendar year plan filings can be due up to a year later than calendar year plan filings, the research file is often not available to end users until about 3 years after the end of the plan year. For example, in plan year 2001, 74 percent of all filings were calendar year plans, and for those plans that were filed on time, processing under the EFAST contract standards was to be completed by May 13, 2003. Labor began work on the 2001 research file in mid-2004. The long delay in releasing the research file results in a lack of timely information on the current state of pension plans for policy makers and researchers. The need for adjustments to the EFAST system and the switch to an outside contractor, Actuarial Research Corporation (ARC), to prepare the research file have also delayed the release of the research file. Officials from ARC told us that part of the reason for the recent delays in releasing the 1999 Form 5500 research file is that the switch to EFAST in 1998 resulted in changes to the way the data are collected, and therefore new processes were required to develop the research file. In addition, Labor has included new variables that are not in the raw dataset, adding more time. Plan year 2000 marks the first year that the research file will be produced by ARC; previously the file was produced within Labor. Because of the long delay in releasing the 1999 research file, ARC got off to a delayed start on subsequent years. ARC officials said that they are currently working on the 2000, 2001, and 2002 research files. ARC officials also said that there is a significant learning curve associated with preparing the research file, and therefore they expect the time frame needed to prepare it to be shorter in the future. They estimated that once the processes for developing the file are in place, it should take roughly 4 months to produce a preliminary version. As shown in figure 10, they began work on the 2000 research file in early 2004 and expect to release it in the summer of 2005. Although Labor, IRS, and PBGC have access to Form 5500 information sooner than other federal agencies and the general public, the agencies are affected by the long processing times for paper filings and EFAST's paper-based correction process. Each agency receives Form 5500 information on individual filings on a regular basis once a form is completely processed, meaning that any necessary corrections have been made. As stipulated in the EFAST contract, IRS and PBGC receive weekly updates of processed Form 5500 information, while Labor and SSA receive updated information on a monthly basis. These agencies are also able to view images of the forms immediately after they are scanned by EFAST. However, agency officials told us that as with the release of the Form 5500 research file, they still have to wait for a sufficiently complete universe of plan filings from any given plan year to be processed in order to begin their compliance targeting programs. Federal agency officials said that old Form 5500 information may paint a distorted picture of the current financial condition of defined benefit pension plans.
The value of plan assets can change significantly over a period of time, and the value of plan liabilities can also change because of changes in interest rates, plan amendments, layoffs, early retirements, and other factors. For plans that experience a rapid deterioration in their financial condition, the funding measures may not reveal the true extent of a plan's financial distress to relevant federal agencies and plan participants. Federal agency officials also said that it would be useful to have certain Form 5500 information reported prior to the lengthy Form 5500 filing deadline. For example, Labor, IRS, and PBGC officials told us that Form 5500 Schedule B information, including information about a defined benefit pension plan's funding status, is outdated by the time it is filed. As a result, these agencies are not notified of a plan's funding status until almost 2 years after the actual valuation date. These officials said this makes the Form 5500 an unreliable tool for determining a plan's current funding status. They also told us other information could be reported earlier than the filing deadline, including Schedule H and I information, which would provide them with more timely plan financial information, including plan assets and liabilities. Labor, IRS, and PBGC officials told us that the lack of timely information hampers their ability to carry out various statutory responsibilities. Labor officials said that, in some cases, untimely Form 5500 Reports affect their ability to identify financially troubled plans whose sponsors may be on the verge of going out of business and abandoning their pension plans, because these plans may no longer exist by the time that Labor receives the processed filing or is able to determine that no Form 5500 was filed by those sponsors. IRS officials said the timeliness of Form 5500 Reports also affects their enforcement efforts, because the IRS has a 3-year statute of limitations. These officials said that working with older Form 5500 information increases the time and cost required to complete an investigation because retrieving the required information becomes more difficult with each passing year. Finally, the timeliness of Form 5500 reporting affects PBGC's ability to monitor multiemployer plans. PBGC officials said that it is a challenge to get current information on the stability of defined benefit pension plans, especially multiemployer plans, because of the unavailability of current Form 5500 data. Multiemployer plan data come only from Form 5500 Reports and are much less current and complete than single-employer plan data; multiemployer data are generally 2 to 3 years older. According to PBGC officials, a major reason for this is that PBGC can identify the corporate sponsor of a single-employer plan from the Form 5500 and is often able to obtain financial information from the sponsor's corporate 10-K filing. They said obtaining such data is not possible for multiemployer plans because participating corporate employers cannot be identified from Form 5500 information. Officials from other federal agencies that use Form 5500 information also told us that the information is not current, a fact that affects their ability to use the information to conduct program activities, inform policy makers, and evaluate the condition of the private pension plan universe.
Some federal agency officials told us that they would develop modeling programs to explore more uses of Form 5500 information if it were available in a timelier manner. Labor, IRS, and PBGC have taken steps to improve the content of the Form 5500, including reviewing the form annually and revising the content as needed to ensure that the form is collecting all required information while not overburdening plan sponsors. Despite the content changes that have been made, the Form 5500, in its current form, lacks key information that could better assist Labor, IRS, and PBGC in tracking and identifying plans from year to year and monitoring multiemployer plans. In addition, federal agency officials and researchers who use Form 5500 information said the form has not kept pace with changes in the private pension universe. Although federal agency officials and others said the form lacks certain information, pension practitioners and service providers told us that it could be further streamlined by removing certain items and consolidating schedules. Labor, IRS, and PBGC annually review and revise Form 5500 content as needed to ensure that the form is collecting all information required under ERISA. These agencies conduct a review of the Form 5500 as part of the process by which they publish updated versions of the form and its instructions on an annual basis. The agencies receive public input throughout the course of the year from interested parties, such as plan sponsors and service providers, either asking questions about the form or suggesting areas where the instructions can be improved. Federal agency officials told us that these questions and comments are taken into account as part of the annual process of reviewing the Form 5500. Agency officials said the process of revising the form, which can include adding or removing items, can be triggered by a number of events, such as a statutory requirement to change the form or a requirement for agencies to collect certain information. Revisions to the form can also result from recommendations from entities such as the ERISA Advisory Council. After the triggering event, if the respective agency deems that a change is appropriate, it starts the process of developing the proposed change. The proposed change then goes through an approval, public comment, and clearance process at the agency level and the Office of Management and Budget (OMB). The process provides the general public and federal agencies an opportunity to comment on the proposed changes and helps to ensure, among other things, that any additional information can be reported in a way that minimizes respondent reporting burden (time and financial resources). The process to change the Form 5500 can take anywhere from 1 to 2 years, depending on the nature of the revisions. Efforts to minimize plan sponsors' reporting burden may limit the collection of Form 5500 information. Legislation requires OMB to review forms before they are used to collect data. The Paperwork Reduction Act of 1995 (Pub. L. No. 104-13) and similar previous legislation are designed to minimize the paperwork burden on the public while at the same time recognizing the importance of information to the successful completion of agency missions. The act requires OMB to approve all existing and new collections of information by federal agencies. In approving agency collection efforts, OMB must weigh the burden to the public against the practical utility of the information to the agency.
Revisions to the Form 5500 can also include eliminating duplicate or obsolete items. Agency officials said that they were reluctant to propose additional Form 5500 data collection unless they could clearly establish that the benefit outweighed the perceived burden. They also said that efforts to reduce existing data collection requirements sometimes result in a loss of information. Over the years Labor, IRS, and PBGC have made revisions to the Form 5500. The last major revision occurred in 1999, as part of a multiyear project, and followed Labor, IRS, and PBGC's evaluation of public comments on their 1997 proposal from employer groups, employee representatives, financial institutions, service providers, and others. As a result, the three agencies, in an effort to streamline the form, replaced the Form 5500, Form 5500-C, and Form 5500-R with one Form 5500 (the current form) to be used by all filers, along with more detailed schedules customized to each filer's type of plan. In addition, the revisions eliminated duplicate or obsolete items. Since 1999, other annual revisions have included clarifying the Form 5500, its schedules, and instructions; adding items on Employee Stock Ownership Plans, frozen plans, and floor offset plans; removing items concerning delinquent participant contributions and fringe benefit plans; and changing the small plan audit requirements. Other changes have been proposed that relate to information associated with the Form 5500. In January 2005, the Secretary of Labor announced the Administration's proposal to improve retirement security. The proposal presented three areas of change, one of them to increase the disclosure of information about private, single-employer defined benefit pension plans to workers, investors, and regulators. The proposal would increase disclosure in four ways: (1) reporting ongoing and at-risk liability on the Form 5500, (2) shortening the deadline for the Schedule B report of the actuarial statement, (3) publicly disclosing Section 4010 information, and (4) expanding the information reported on the summary annual report (SAR). The Form 5500 lacks key information that could better assist Labor, IRS, and PBGC in monitoring plans and ensuring that they are in compliance with the law. Federal agency officials and pension researchers acknowledge that the form does not collect certain information, such as information that could help them to better track plans from year to year, and certain information on multiemployer plans and defined contribution plans. Labor, IRS, and PBGC officials said that they have experienced difficulties when relying on Form 5500 information to identify and track all plans across years. Although these agencies have a process in place to identify and track plans filing a Form 5500 from year to year, problems still arise when plans change employer identification numbers (EINs) and/or plan numbers. Currently, Labor, IRS, and PBGC use the EINs and plan numbers listed on the form to identify and track individual plans from one year to the next. However, officials from these agencies reported they are having problems using EINs and plan numbers to consistently and accurately track all plans because many employers have numerous plans and each plan files Form 5500 Reports using the same EIN. As a result, only the three-digit plan number assigned by the plan administrator uniquely identifies plan filings that have identical EINs.
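To make the tracking mechanics concrete, the following minimal sketch, using an invented EIN and plan names, shows how filings can be matched across two years on the composite key of EIN and plan number, and how a changed plan number defeats the match. This is an illustration of the matching problem, not the agencies' actual matching logic.

```python
# Hypothetical filings keyed by (EIN, three-digit plan number); the EIN and
# plan names are invented for illustration only.
plans_2002 = {("12-3456789", "001"): "Acme Corp Pension Plan",
              ("12-3456789", "002"): "Acme Corp 401(k) Plan"}
plans_2003 = {("12-3456789", "001"): "Acme Corp Pension Plan",
              ("12-3456789", "003"): "Acme Corp 401(k) Plan"}  # plan number changed

matched = plans_2002.keys() & plans_2003.keys()   # tracked successfully
dropped = plans_2002.keys() - plans_2003.keys()   # looks like a stopped filer
appeared = plans_2003.keys() - plans_2002.keys()  # looks like a new plan

# The renumbered 401(k) plan shows up as one plan that stopped filing and a
# different plan that began filing, even though it is the same plan.
print(sorted(matched))   # [('12-3456789', '001')]
print(sorted(dropped))   # [('12-3456789', '002')]
print(sorted(appeared))  # [('12-3456789', '003')]
```

Because the composite key is the only systemwide identifier, a simple renumbering is indistinguishable from a plan termination plus a new filing, which is the failure mode the agencies described.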
As the sketch illustrates, when plan administrators do not file their Form 5500 with the same plan number each year, absent a unique EIN, it is difficult for federal agencies to track the same plan from year to year. Identifying plans is further complicated when plan sponsors are acquired, sold, or merged. In these cases, agency officials said that there is an increased possibility of mismatching of EINs, plans, and their identifying information. Agency officials also told us that without a reliable way to identify and track plans a number of problems occur. For example, Labor officials said they are unable to (1) verify if all required employers are meeting the statutory requirement to file a Form 5500 annually, (2) identify all late filers, and (3) assess and collect penalties from all plans that fail to file or are late. IRS officials said that EINs reported on the Form 5500 do not always match EINs listed on a corporate tax return of a business; this makes it difficult for IRS to match individual businesses' Form 5500 Reports with their corporate tax returns. PBGC officials said they must spend additional time each year trying to identify and track certain defined benefit pension plans so that they can conduct their compliance and research activities. Furthermore, other federal agencies and researchers said that the inability to identify and track plans limits their ability to effectively identify all of the pension plans associated with a particular company, track changes over time in certain types of pension plans, and match Form 5500 information with other data sources. Labor, IRS, and PBGC officials said they are considering measures to better track and identify plans but have not reached any conclusions. We were also told that the Form 5500 lacks certain information on multiemployer plans that would enable PBGC, other federal regulators, and pension researchers to (1) identify all of the participating employers in a particular multiemployer plan; (2) determine a multiemployer plan's basis for making contributions; and (3) determine the amount of unfunded liabilities attributable to each participating employer. Currently, the form does not collect information that identifies the employers participating in a particular multiemployer plan. Thus, PBGC and other regulators are unable to identify all the employers upon whose financial health multiemployer plans depend or link the financial health of these employers to the condition of the particular multiemployer plans in which these employers participate. PBGC officials said they are unable to gauge the full impact that events such as employer bankruptcies, withdrawals, and labor strikes would have on multiemployer plans, their participants, and the agency's multiemployer insurance program. They emphasized that this is important because, in a multiemployer plan, an employer's pension liabilities can be affected by the financial health of the other employers in the plan. The form also lacks information that shows a multiemployer plan's basis for employer contributions, which means that PBGC cannot determine the impact that events such as labor strikes would have on an employer's ability to make plan contributions and the effect on the financial condition of that particular plan. Finally, the Form 5500 does not capture information on each participating employer's responsibility for unfunded liabilities.
Thus, PBGC is unable to assess the financial risk to an insured multiemployer plan posed by the financial collapse or withdrawal of one or more contributing employers. PBGC officials said this is an important piece of information because of the agency's role in monitoring multiemployer plans for financial problems, providing financial and technical assistance to troubled plans, and guaranteeing a minimum level of benefits to participants in insolvent multiemployer plans. PBGC officials said that the agency needs relevant information on multiemployer plans to fully assess the financial health of and potential risks faced by multiemployer plans, and they said that this information is currently lacking on the Form 5500. PBGC officials also said they are exploring ways to obtain more useful information on multiemployer plans. However, their plans are still in the developmental stages. In addition, officials from Labor, IRS, and other federal agencies and pension researchers said it would be useful if the Form 5500 captured more information on multiemployer plans. Federal and private sector researchers said the Form 5500 has not kept pace with changes in the private pension universe, in which defined contribution plans have become the more prevalent type of private pension plan offered by employers and cover a growing share of employees. They said the Form 5500 is geared more toward defined benefit plans than toward defined contribution plans and suggested that the form could collect detailed information on the range of investment options that are available to participants (such as employer securities and mutual funds), 401(k) plan matching contributions, and employee contribution limits, as well as more detailed information on the asset allocations of pooled accounts. They also said that the form could collect better information to determine the true cost of administering a defined contribution plan, including 401(k) plan fees. For example, the ERISA Advisory Council Working Group recently reported that the Form 5500, as currently structured, does not reflect the way that the defined contribution plan fee structure works. The Advisory Council concluded that Form 5500s filed by defined contribution plans are of little use to policy makers, government enforcement personnel, plan sponsors, and participants in terms of understanding the cost of a plan. The Advisory Council also recommended that Labor modify the Form 5500 and the accompanying schedules so that total fees incurred either directly or indirectly by these plans can be reported or estimated. This information could be used for research or regulatory purposes. In addition to having more information on defined contribution plans, federal and private sector researchers also said that it would be useful if information reported on Section 4010 filings, such as information about the ability of a defined benefit plan to meet its obligations to participants if the plan were to be terminated, were captured on the Form 5500. Section 4010 filings (named after the ERISA section that requires companies to submit such reports) also include proprietary information about the plan sponsor and its pension assets. However, this information is available only to PBGC and by law may not be publicly disclosed.
Some officials told us that participants should be provided with the necessary information, including Section 4010 data, to inform them when their plan is underfunded and when the sponsor's financial condition may impair the ability of the company to fund or maintain the plans. Despite federal agencies' attempts to streamline the form, pension practitioners and service providers said that the Form 5500 can be further streamlined by removing duplicate items and consolidating certain schedules. Pension practitioners and service providers told us that opportunities exist to modify and consolidate certain financial schedules and provided us with recommendations that, in their opinion, would better capture relevant information about pension plans for the federal government, participants, plan sponsors, and pension practitioners, as shown in table 4. However, Labor, IRS, and PBGC officials told us that, to some extent, they use all of the information reported on the Form 5500. In addition, pension researchers told us that removing certain information from the form, such as plan financial information, may limit their ability to use the form for research and statistical purposes. The Form 5500 is the primary source of information available concerning the operation, funding, assets, liabilities, and investments of private pension plans. Because these data are important to enforcement of federal pension laws and to pension policy development, it is essential that Form 5500 information be timely and useful. Changes in the private pension world illustrate why improvements to the Form 5500 and its processing are so important. For example, the private pension environment has been changing fundamentally in the types of plans offered to today's workers, yet little has been done to reflect these changes in the types of data collected. In addition, the sudden deterioration in funding levels for some large defined benefit plans has brought financial pressures to PBGC and led to calls for comprehensive reforms, but Form 5500 data are not timely enough to help policy makers in developing effective responses. Untimely pension plan information forces policy makers to make key pension policy decisions based on data that are about 3 years old. It also hampers regulators' ability to enforce ERISA and other laws and results in users getting an outdated picture of the financial condition of the private pension plan environment. Although Labor has made significant progress in implementing EFAST, more should be done to reduce the time it takes to process and release usable computerized Form 5500 information. Changes to the current system, such as utilizing its electronic filing capabilities and improving its paper-based correspondence process, could speed up the processing of Form 5500 Reports and provide more timely data for all users. Alternatively, certain types of information could be reported earlier than the current filing deadline, such as information on a plan's funding status, which could also provide regulators with more timely information. Content issues also remain a problem, despite Labor's, IRS's, and PBGC's periodic revisions to the form. Information currently collected on the form, while useful to some extent, does not permit these agencies to be in the best position to ensure compliance with federal laws and accurately assess the financial condition of private pension plans.
Given the increase in the number of defined contribution pension plans and the need for relevant information on multiemployer plans, providing better information on these plans would help policy makers and others make informed decisions about the financial risks posed by private pension plans. However, any improvement to the content of the Form 5500 must be made in a way that does not impose an undue burden on plan sponsors. Given the improved timeliness and reduced errors associated with electronic filing, Labor, IRS, and PBGC should require the electronic filing of the Form 5500. In doing so, Labor should also make improvements to the current electronic filing process to make it less burdensome, such as revising the procedure for signing and authenticating an electronic filing. To improve timeliness, reduce errors, and maximize efficiency, Labor should modify its current EFAST processing methods. In doing so, the following steps should be considered: Labor should streamline its data correction processes by ensuring that filings are checked for errors before they are accepted for processing by the EFAST system. It should develop an electronic correspondence process, whereby the agencies can notify filers of errors electronically, thereby eliminating the 30 days that officials at Labor estimate it takes to mail the paper-based correspondence back and forth. This would also allow filers to be notified of errors more promptly. Considering the need for federal agencies, Congress, and the public to have access to timely and usable Form 5500 information as soon as possible, we recommend that Labor evaluate ways to speed up the release of its research file, including making information from the file available on an interim basis prior to its completion and final release to the public. To more effectively identify and track individual plans across years, especially when plans change EINs and plan numbers, and to take into account Labor's need to be able to verify if all required employers are meeting the statutory requirement to file a Form 5500 annually, we recommend that Labor, IRS, and PBGC work collectively to better identify and track the same plan from one year to the next. To improve the federal government's ability to regulate multiemployer defined benefit pension plans and improve participant information, we recommend that Labor, IRS, and PBGC modify the Form 5500 to collect additional information on multiemployer pension plans that would enable Labor, IRS, and PBGC to monitor and manage potential risks associated with events such as employer bankruptcies, withdrawals, and labor strikes and the attendant consequences for these plans, plan participants, and PBGC's multiemployer insurance program. In doing so, Labor, IRS, and PBGC should consider requiring multiemployer plans to report the following information on the Form 5500: information sufficient to identify all of the employers associated with a particular plan and their annual contributions to the plan, plan specifics on determining employer contributions (per hour, per unit of output, etc.), and the distribution by employer of responsibility for unfunded or underfunded plan liabilities. We provided a draft of this report to Labor, PBGC, IRS, and SSA. Labor, PBGC, and IRS provided written comments, which appear in appendix I, appendix II, and appendix III. Labor's, PBGC's, and IRS's comments generally agree with the findings and conclusions of our report.
Labor, PBGC, IRS, and SSA also provided technical comments on the draft. We incorporated each agency's comments as appropriate. Labor suggested that we clarify our assertion that the Form 5500 could collect information that could help agencies better identify and track plans across years. Labor stated that the agencies currently apply computerized "entity control" tests to Form 5500 filings as part of the EFAST processing system; these tests are designed to track individual plans and determine if consistent identifying data are reported each year for a particular filer in order to maintain accurate year-to-year records for each filer. We have clarified that section of our report by noting that the agencies have a system in place to identify and track plans and by describing the shortcomings of this method. However, although such a system is in place, officials from all the principal agencies said that it is very difficult to track plans from year to year if plans change EINs and plan numbers. Labor and PBGC suggested that the section of our report on the timeliness of available Form 5500 information further clarify that generally individual Form 5500 filings are made available for enforcement and for public disclosure as soon as they are processed by EFAST. We agree and noted that in the appropriate section of the report. We have also revised our report in various sections to state that Form 5500 information is available for enforcement and public disclosure purposes prior to the release of the Form 5500 research file. PBGC proposed an additional recommendation regarding the timeliness of defined benefit pension plan funding information reported on Form 5500 Schedule B. The Administration's pension reform proposal includes a provision that would advance the reporting date for the Schedule B to February 15 for certain large defined benefit plans. We agree with PBGC that advancing the reporting date for the Schedule B to provide more timely information on such plans to Labor, IRS, and PBGC could be an important piece of comprehensive pension reform. IRS specifically stated that, with electronic filing, EFAST can validate filings as received, reject filings with errors or incomplete responses, and minimize or eliminate error correction using electronic correspondence. IRS also stated its support for our recommendations to require the electronic filing of Form 5500 Reports, evaluate ways to better identify and track plans from year to year, and modify the Form 5500 to collect additional information on multiemployer pension plans. We are sending copies of this report to the Secretary of Labor, the Commissioner of Internal Revenue, the Executive Director of the Pension Benefit Guaranty Corporation, the Commissioner of Social Security, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or Tamara Cross at (202) 512-4890. Other contacts and acknowledgments are listed in appendix IV. In addition to those named above, Joseph Applebaum, Richard Burkard, Scott Heacock, Gene Kuehneman, Michael Maslowski, Robert Parker, Roger J. Thomas, and Gail Vallierers made important contributions to this report.
The Form 5500 is the primary source of information for both the federal government and the private sector regarding the operation, funding, assets, and investments of private pension and other employee benefit plans. Currently, the Department of Labor (Labor) requires about 3 years to provide certain usable Form 5500 information to the public, leading to complaints that the information is not timely. We have prepared this report under the Comptroller General's authority, and it is intended to assist Congress in improving the timeliness and content of Form 5500 information. This report is addressed to the congressional committees of jurisdiction. It examines (1) the information reported on the form and how it is used, (2) factors that affect the timeliness of Form 5500 information, and (3) issues affecting the content of the form. Detailed information on private pension plans is reported on the Form 5500, and Labor, the Internal Revenue Service (IRS), and the Pension Benefit Guaranty Corporation (PBGC) use the information for compliance, research, and public disclosure purposes. Information collected on the form includes basic plan identifying information as well as detailed information including assets and liabilities, insurance, and financial transactions. The principal users of Form 5500 Reports--Labor, IRS, and PBGC--use the reports primarily as a compliance tool to identify actual and potential violations of the Employee Retirement Income Security Act of 1974 and the Internal Revenue Code. Other federal agencies and policy researchers also use Form 5500 information. Statutory reporting requirements, processing issues, and current Labor practices affect the timeliness of the release of Form 5500 information, resulting in a 3-year lag, in some cases, in releasing certain usable computerized Form 5500 information to the non-principal federal agencies and others. First, under the current statutory reporting requirements, filers can have up to 285 days after the end of the plan year to file their Form 5500. Second, 98 percent of filings are in a paper format. These take more than three times as long as electronic filings to process and have twice as many errors. Third, the release of the Form 5500 information in the research file--the Form 5500's most practical form--is further delayed because Labor waits until all filings for that plan year are processed, which can take up to 2 years. Despite the efforts of Labor, IRS, and PBGC to improve its content, the Form 5500 lacks key information. These agencies have taken certain steps to improve the content of the Form 5500, such as reviewing the Form 5500 annually to ensure that the form is collecting all the information required by law. However, the form still lacks key information that could better assist Labor, IRS, and PBGC in identifying and tracking all plans over time and monitoring multiemployer plans. Federal and private sector researchers also told us the form could collect better plan financial information, such as 401(k) plan fees. In addition, federal agency officials told us certain information could be reported earlier than the current filing deadline, such as information on a plan's funding status, as well as its assets and liabilities.
In fiscal year 2000, VA's Veterans Health Administration (VHA) provided primary and specialty medical care to approximately 3.2 million veterans at a cost of about $18 billion. VA's pharmacy benefit cost approximately $2 billion—about 12 percent of the total VHA budget—and provided approximately 86 million prescriptions. In contrast, 10 years ago VA's pharmacy benefit represented about 6 percent of VA's total health care budget. Health care organizations' efforts to control pharmacy costs and improve quality of care include (1) implementing formularies that limit the number of drug choices available; (2) establishing financial incentives, such as variable copayments, to encourage the use of formulary drugs; (3) using compliance programs, such as prior authorization, that encourage or require physicians to prescribe formulary drugs; and (4) developing clinical guidelines for prescribing drugs. VA does not have authority to use financial incentives to encourage compliance with its formulary. VA provides outpatient pharmacy services free to veterans receiving medications for treatment of service-connected conditions and to low-income veterans whose incomes do not exceed a threshold amount. Other veterans who have prescriptions filled by VA may be charged $2 for each 30-day supply of medication. In 1995, VA began transforming its delivery and management of health care to expand access to care and increase efficiency. As part of this transformation, VA decentralized decision-making and budgeting authority to 22 Veterans Integrated Service Networks (VISN), which became responsible for managing all VA health care. VISNs were given substantial operational autonomy. Although VISN and medical center directors are held accountable in annual performance agreements for meeting certain national and local goals, attaining formulary goals has not been part of their performance standards. VA medical centers began using formularies as early as 1955 to manage their pharmacy inventories. Because of the geographic mobility of VA patients, VA officials believed that a national formulary would improve veterans' continuity of care. In September 1995, VA established a centralized group to manage its pharmacy benefit on a nationwide basis. In November 1995, VISNs were established, and the Under Secretary for Health directed each VISN to develop and implement a VISN-wide formulary. To develop their formularies, the VISNs generally combined existing medical center formularies and eliminated rarely prescribed drugs. VISN formularies became effective on April 30, 1996. Also in 1996, the Congress required VA to improve veterans' access to care regardless of the region of the United States in which they live. As part of its response, VA implemented a national drug formulary on June 1, 1997, by combining the core set of drugs common to the newly developed VISN formularies. In addition to the national and VISN formularies, a few medical centers retained their own formularies. VA's Pharmacy Benefits Management Strategic Healthcare Group (PBM) is responsible for managing the national formulary list, maintaining databases that reflect drug use, and monitoring the use of certain drugs. VISN directors are responsible for implementing and monitoring compliance with the national formulary, ensuring that VISN restrictions placed on national formulary products are appropriate, and ensuring that a nonformulary drug approval process is functioning in all of their medical centers.
As all formularies do, VA’s national formulary limits the number of drug choices available to health care providers. VA’s formulary lists more than 1,100 unique drugs that are assigned to 1 of 254 drug classes—groups of drugs similar in chemistry, method of action, or purpose of use. After performing reviews of drug classes representing the highest costs and volume of prescriptions, VA decided that some drugs in 4 of its 254 drug classes were therapeutically interchangeable—that is, essentially equivalent in terms of efficacy, safety, and outcomes—and therefore had the same therapeutic effect. This determination allowed VA to select one or more of these drugs for its formulary to seek better prices through competitively bid committed-use contracts. Other therapeutically equivalent drugs in these classes were then excluded from the formulary. These four classes are known as “closed” classes. VA has not made clinical decisions regarding therapeutic interchange in the remaining 250 drug classes, and it does not limit the number of drugs that can be added to these classes. These are known as “open” classes. In some cases, drugs listed on the national formulary may be restricted. Restrictions are generally placed on the use of drugs if they have the potential to be used inappropriately. For example, restrictions are placed on drugs with potentially serious side effects, such as interferon, which is used to treat such conditions as hepatitis C. VA has also adopted guidelines to assist practitioners in making decisions about the diagnosis, treatment, and management of specific clinical conditions, such as congestive heart failure. In addition, VA has adopted criteria to help standardize treatment, improve the quality of patient care, and promote cost-effective drug prescriptions. Finally, VA limits prescribing privileges for some drugs to specially trained physicians and requires consultation with a specialist before certain drugs can be prescribed. VA has made significant progress in establishing a national formulary, with most drugs being prescribed from the formulary list. Nevertheless, VA’s oversight has not been sufficient to ensure that it is fully achieving its national formulary goal of standardizing its drug benefit nationwide. We found that some facilities have omitted required national formulary drugs. In addition, the extent to which VISNs add drugs to supplement the national formulary has the potential for conflicting with VA’s ability to achieve standardization if not closely managed. Also, we found that some facilities, contrary to policy, have modified the list of drugs available in closed classes. Almost 3 years after VA facilities were directed to make available locally all national formulary drugs, two of the three medical centers we visited did not list all national formulary drugs in the formularies used by their prescribers. VHA’s national formulary policy directive states that items listed on the national formulary shall be made available throughout the VA health care system and must be available in all VA facilities. While a physical supply of all national formulary drugs is not required to be maintained at all facilities, if a clinical need for a particular formulary drug arises in the course of treating a patient, it must be made available to the patient. Many drugs listed on the national formulary were not available as formulary choices in two of the three medical centers we visited. 
In the first, about 25 percent (286 drugs) of the national formulary drugs were not available as formulary choices. These included drugs used to treat high blood pressure and mental disorders, as well as drugs used to treat the unique medical needs of women. At the second medical center, about 13 percent (147 drugs) of the national formulary drugs were omitted, including drugs used to treat certain types of cancer and others used to treat stomach conditions. Health care providers at these two medical centers were required to seek nonformulary drug approvals for over 22,000 prescriptions of national formulary drugs from October 1999 through March 2000. If the national formulary had been properly implemented at these medical centers, prescribers would not have had to use extra time to request and obtain nonformulary drug approvals for these drugs, and patients could have started treatment earlier. Our analysis showed that over 14,000 prescriptions were filled as nonformulary drugs for 91 of the 286 drugs at the first center. No prescriptions were filled for the remaining 195 drugs. At the other medical center, over 8,000 prescriptions for 23 of the 147 drugs were filled as nonformulary drugs. No prescriptions were filled for the remaining 124 drugs. VA’s policy allowing VISNs to supplement the national formulary locally has the potential for conflicting with VA’s ability to achieve standardization if not closely managed. From June 1997 through March 2000, VISNs added 244 unique drugs to supplement the list of drugs on the national formulary. The number of drugs added by each VISN varies widely, ranging from as many as 63 by VISN 20 (Portland) to as few as 5 by VISN 8 (Bay Pines). (Fig. 1 shows the number of drugs added by each VISN.) Adding drugs to supplement the national formulary is intended to allow VISNs to be responsive to the unique needs of their patients and to allow quicker formulary designation of new FDA-approved drugs. However, the wide variation in the number of drugs added by the VISNs to supplement the national formulary raises concern that this practice, if not appropriately monitored, could result in unacceptable decreases in formulary standardization. VA officials have acknowledged that this variation affects standardization and told us they plan to address it. For example, the PBM plans to review new drugs when approved by the FDA to determine if they will be added to the national formulary or if VISNs may continue to add them to their formularies to supplement the national formulary. The medical centers we visited also inappropriately modified the national formulary list of drugs in the closed classes. Contrary to VA formulary policy, two of three medical centers added two different drugs to two of the four closed classes, and one facility did not make a drug available (see fig. 2). While our analysis was performed at the medical center level, the IOM found similar nonconformity at the VISN level. Specifically, IOM reported that 16 of the 22 VISNs modified the list of national formulary drugs for the closed classes. From October 1999 through March 2000, 90 percent of VA outpatient prescriptions were written for national formulary drugs. The percentage of national formulary drug prescriptions filled by individual VISNs varied slightly, from 89 percent to 92 percent. We found wider variation among medical centers within VISNs—84 percent to 96 percent (see table 1). 
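The percentages above come from prescription-level tabulation of VA's national outpatient prescription database. A simplified sketch of that kind of grouping follows; the center names, records, and flag field are hypothetical, standing in for the database's prescription records and its national formulary indicator.

```python
from collections import defaultdict

# Hypothetical prescription records: (medical_center, national_formulary_flag).
prescriptions = [
    ("Center A", True), ("Center A", True), ("Center A", False),
    ("Center B", True), ("Center B", True), ("Center B", True),
    ("Center B", True), ("Center B", False), ("Center B", True),
]

filled = defaultdict(int)
on_formulary = defaultdict(int)
for center, is_formulary in prescriptions:
    filled[center] += 1
    if is_formulary:
        on_formulary[center] += 1

for center in sorted(filled):
    share = 100 * on_formulary[center] / filled[center]
    print(f"{center}: {share:.0f}% national formulary prescriptions")
# Center A: 67% national formulary prescriptions
# Center B: 83% national formulary prescriptions
```

The same grouping, applied at the VISN level, is how ranges such as the 89 to 92 percent reported above would be produced.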
The remaining 10 percent of prescriptions filled systemwide were for drugs VISNs and medical centers added to supplement the national formulary or for nonformulary drugs. VA’s PBM and IOM estimate that drugs added to supplement the national formulary probably account for about 7 percent of all prescriptions filled and nonformulary drugs account for approximately 3 percent of all prescriptions filled. However, at the time of our review, VA’s nationwide data could identify a filled prescription only as either a national formulary drug or not. Without specific information, VA does not know if the additions are resulting in an appropriate balance between local needs and national formulary standardization. VA officials told us that they are modifying the database to enable it to identify which drugs are added to supplement the national formulary and which are nonformulary. Medical center approval processes for nonformulary drugs are not always timely, and the amount of time needed to obtain such approvals varied widely across medical centers. In addition, some VISNs have not established processes to collect and analyze data on nonformulary requests. As a result, VA does not know if approved requests met its established criteria or if denied requests were appropriate. Although the national formulary directive requires certain criteria for approval of nonformulary drugs, it does not dictate a specific nonformulary approval process. As a result, the processes health care providers must follow to obtain nonformulary drugs differ among VA facilities regarding how requests are made, who receives them, who approves them, and how long it takes. In addition, IOM documented wide variations in the nonformulary drug approval process. Figure 3 shows the steps prescribers must generally follow to obtain nonformulary and formulary drugs. The person who first receives a nonformulary drug approval request may not be the person who approves it. For example, 61 percent of prescribers reported that nonformulary drug requests must first be submitted to a facility pharmacist, 14 percent said they must first be submitted to facility pharmacy and therapeutics (P&T) committees, and 8 percent said they must first be sent to service chiefs. In contrast, 31 percent of prescribers reported that it is a facility pharmacist who approves nonformulary drug requests, 26 percent said that the facility P&T committee approves them, and 15 percent told us that the facility chief of staff approves them. The remaining 28 percent reported that various other facility officials or members of the medical staff approve nonformulary drug requests. The time required to obtain approval for use of a nonformulary drug varied greatly depending on the local approval processes. The majority of prescribers (60 percent) we surveyed reported that it took an average of 9 days to obtain approval for use of nonformulary drugs. But many prescribers also reported that it took only a few hours (18 percent) or minutes (22 percent) to obtain such approvals. During our medical center visits, we observed that some medical center approval processes are less convenient than others. For example, to obtain approval to use a nonformulary drug in one facility we visited, prescribers were required to submit a request in writing to the P&T committee for its review and approval. Because the P&T committee met only once a month, the final approval to use the requested drug was sometimes delayed as long as 30 days. 
The requesting prescriber, however, could write a prescription for an immediate 30-day supply if the medication need was urgent. In contrast, in another medical center we visited, a clinical pharmacist was assigned to work directly with health care providers to help with drug selection, establish dose levels, and facilitate the approval of nonformulary drugs. In that facility, clinical pharmacists were allowed to approve the use of nonformulary drugs. If a health care provider believed that a patient should be prescribed a nonformulary drug, the physician and pharmacist could consult at the point of care and make a final decision with virtually no delay. Prescribers in our survey were almost equally divided on the ease or difficulty of getting nonformulary drug requests approved (see table 2). Regardless of whether the nonformulary drug approval process was perceived as easy or difficult, the vast majority of prescribers told us such requests were generally approved. According to our survey results, 65 percent of prescribers sought approval for a nonformulary drug in 1999. These prescribers reported that they made, on average, 25 such requests (the median was 10 requests). We estimated that 84 percent of all prescribers’ nonformulary requests were approved. When a nonformulary drug request was disapproved, 60 percent of prescribers reported that they switched to a formulary drug. However, more than one-quarter of the prescribers who had nonformulary drug requests disapproved resubmitted their requests with additional information. The majority of prescribers we surveyed told us they were more likely to convert VA patients who were on nonformulary drugs obtained at another VA facility to formulary drugs than to request a nonformulary drug (see table 3). Consequently, patients who move from one area of the country to another or temporarily seek care in a different VA facility are likely to be switched from a nonformulary drug to a formulary drug. VA’s national formulary policy requires that a request to use a nonformulary drug be based on at least one of six criteria: (1) the formulary agent is contraindicated, (2) the patient has had an adverse reaction to the formulary agent, (3) all formulary alternatives have failed therapeutically, (4) no formulary alternative exists, (5) the patient has previously responded to the nonformulary agent and risk is associated with changing to the formulary agent, and (6) other circumstances involving compelling evidence-based reasons exist. Each VISN is responsible for establishing a process to collect and analyze data concerning nonformulary drug requests. Contrary to the national formulary policy, not all VISNs have established a process to collect and analyze nonformulary request data at the VISN and local levels. Twelve of VA’s 22 VISNs reported that they do not collect information on approved and denied nonformulary drug requests. Three VISNs reported that they collect information only on approved nonformulary drug requests, and seven reported that they collect information for both approved and denied requests. Consequently, data that could help VISNs, medical centers, and the PBM offices are not always collected and analyzed for trends in a systematic manner. Such information could help VA at all levels to determine the extent to which nonformulary drugs are being requested and whether medical center processes for approving these requests meet established criteria. 
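Because not all VISNs collect these data, this kind of review cannot be run systemwide. A minimal sketch of what such an analysis might look like is shown below; the request log is entirely hypothetical, and the criterion numbers refer to the six approval criteria in VA's national formulary policy (None indicating that no criterion was documented).

```python
from collections import Counter

# Hypothetical nonformulary drug request log for one medical center.
requests = [
    {"drug": "drug_x", "approved": True,  "criterion": 3},     # formulary alternatives failed
    {"drug": "drug_y", "approved": True,  "criterion": 5},     # prior response; risk in switching
    {"drug": "drug_y", "approved": True,  "criterion": None},  # approved, no criterion documented
    {"drug": "drug_z", "approved": False, "criterion": None},  # denied
]

outcomes = Counter("approved" if r["approved"] else "denied" for r in requests)
undocumented = sum(1 for r in requests if r["approved"] and r["criterion"] is None)

print(outcomes)      # Counter({'approved': 3, 'denied': 1})
print(undocumented)  # 1 approval lacking a documented criterion -- a flag for review
```

Tracking denied requests in the same log would show whether denials were appropriate, the other half of the oversight gap noted above.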
In its report, IOM noted that inadequate documentation could diminish confidence in the nonformulary process. Seventy percent of VA prescribers in our survey reported that the formulary they use contains the drugs their patients need either to a “great extent” or to a “very great extent.” Twenty-seven percent reported that the formulary meets their patients’ needs to a “moderate extent,” with 4 percent reporting that it meets their patients’ needs to “some extent.” No VA prescribers reported that the formulary meets their patients’ needs to a “very little or no extent.” This is consistent with IOM’s conclusion that the VA formulary “is not overly restrictive.” Overall, two and one-half times as many prescribers indicated that the formulary they currently use “helps” or “greatly helps” their ability to prescribe drugs as those who said it “hinders” or “greatly hinders” them (see table 4). Some prescribers reported that the formulary they use helps them keep current with new drugs and helps remove some of the pressures created by direct-to-consumer advertising. Other prescribers reported that newly approved drugs are not made available on the national formulary as soon as they would like, and some reported their frustration with delays experienced when certain formulary drugs must be approved by specially trained physicians before they can be prescribed. Prescribers we surveyed reported they were generally satisfied with the national formulary. We asked prescribers who said that they had worked for VA before the national formulary was established whether the current formulary does a better job of keeping the list of drugs in the drug classes from which they most frequently prescribe up to date, as compared with the formulary they previously used. Three-quarters told us that they had worked for VA before the national formulary was implemented in June 1997. Thirty-eight percent of these prescribers reported that the national formulary was “better” or “considerably better” than previous formularies. About half (48 percent) indicated that the current formulary was “about the same” as the one it replaced. Seven percent reported that it was “worse” or “considerably worse” than previous formularies. Few veterans have complained about not being able to obtain the drugs they believe they need. At the VA medical centers we visited, patient advocates told us that veterans made very few complaints concerning their prescriptions. In its analysis of the patient advocates’ complaint databases, IOM found that less than one-half of one percent of veterans’ complaints were related to drug access. IOM further reported that complaints involving specific identifiable drugs often involved drugs that are marketed directly to consumers, such as sildenafil (Viagra), which is used to treat erectile dysfunction. Fifty-one percent of the prescribers in our survey reported that over the past 3 years, an increasing number of their patients have requested a drug they have seen or heard advertised in the media. Our review also indicated that the few prescription complaints made were often related to veterans’ trying to obtain “lifestyle” drugs or refusals by VA physicians and pharmacists to fill prescriptions written by non-VA health care providers. VA officials told us that VA does not fill prescriptions written by non-VA-authorized prescribers, in part to ensure that one practitioner manages a patient’s care.
Over the past 3½ years, VA has made significant progress in establishing its national formulary, which has generally met with prescriber acceptance. Prescribers reported that veterans are generally receiving the drugs they need and that veterans rarely register complaints concerning prescription drugs. VA has not provided sufficient oversight, however, to ensure that VISNs and medical centers comply with formulary policies and that the flexibility given to them does not unduly compromise VA’s goal of formulary standardization. Contrary to VA formulary policy, some facilities omitted national formulary drugs or modified the closed drug classes. While adding a limited number of drugs to supplement the national formulary is permitted, as more drugs are added by VISNs, formulary differences among facilities are likely to become more pronounced, decreasing formulary standardization. While VA recognizes the trade-off between local flexibility and standardization, it lacks criteria for determining the appropriateness of adding drugs to supplement the national formulary. Consequently, VA cannot determine whether the resulting decrease in standardization is acceptable. Not all VISN directors have met their responsibilities for implementing national formulary policy. Inefficiencies that exist in the nonformulary drug approval processes across the system can cause delays in making final treatment decisions. In addition, the processes require health care provider time and energy that might be better used for direct patient care. We believe a more efficient nonformulary drug approval process could enable facilities to benefit from lessons learned in other locations. Finally, VISNs lack the data needed to analyze nonformulary drug requests to determine whether all approved requests met approval criteria and all denied requests were appropriate. In order to ensure more effective management of the national formulary, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following actions: Establish a mechanism to ensure that VISN directors comply with national formulary policy. Establish criteria that VISNs should use to determine the appropriateness of adding drugs to supplement the national formulary and monitor the VISNs’ application of these criteria. Establish a nonformulary drug approval process for medical centers that ensures appropriate and timely decisions and provides that veterans for whom a nonformulary drug has been approved will have continued access to that drug, when appropriate, across VA’s health care system. Enforce existing requirements that VISNs collect and analyze the data needed to determine that nonformulary drug approval processes are implemented appropriately and effectively in their medical centers, including tracking both approved and denied requests. In commenting on a draft of this report, VA agreed with our findings and concurred with our recommendations. VA highlighted key improvements planned or already in progress that should further enhance the process. VA’s actions to address our recommendations are summarized below. VA plans to improve oversight at all organizational levels to help facilitate consistent compliance with national formulary policy. In its comments, VA discussed important components of improving compliance with the national formulary, including examining data to identify outliers. 
However, VA did not articulate a mechanism for ensuring that its oversight results in consistent compliance, which may reduce the effectiveness of its planned actions. VA plans to establish criteria for VISNs to use to determine the appropriateness of adding drugs to supplement the national formulary. VA plans to establish steps for its nonformulary drug approval process that all medical centers and VISNs must follow. However, in its comments, VA did not specifically address how veterans would have continued access to previously approved nonformulary drugs across VA's health care system. We believe such access is important. VA plans to establish steps for reporting its nonformulary approval activities. In its comments, VA did not explicitly include tracking of denied requests as part of the nonformulary approval activities. We expect that its nonformulary approval activities will include tracking denied requests, as well as approved nonformulary drug requests, to determine the appropriateness of all medical center prescribing decisions. VA plans to implement these corrective actions by June 2001. Its comments are included in appendix II. We are sending copies of this report to the Honorable Anthony J. Principi, Secretary of Veterans Affairs; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-7101 if you or your staff have questions about this report or need additional assistance. Another contact and staff acknowledgments are listed in appendix III. To obtain policies and procedures from the 22 Veterans Integrated Service Networks (VISN), we mailed a questionnaire to each of the 22 VISN formulary leaders—pharmacists or physicians who serve on the Department of Veterans Affairs' (VA) Pharmacy Benefits Management advisory board. To determine the extent to which VA health care providers write prescriptions for national formulary drugs, we analyzed data from VA's national outpatient prescription database. To assess the implementation of the national formulary and obtain firsthand opinions about it, we interviewed medical and administrative staff at three VA medical centers located in three different VISNs. To obtain VA health care providers' views on VA's formulary, including whether or not it is restrictive, we mailed a questionnaire to a nationally representative sample of 2,000 VA health care prescribers. We also used information contained in the Institute of Medicine's Description and Analysis of the VA National Formulary, issued in June 2000. To obtain policies and procedures from the 22 VISNs, we mailed a questionnaire to VISN formulary leaders. We asked if there were VISN-wide policies for several areas, including adding drugs to the VISN formulary, requesting nonformulary drugs, converting patients from one drug to another, and tracking requests for nonformulary drugs. In addition, we sought information on the number of drugs added to and dropped from the VISN formulary, the number of requests for nonformulary drugs, and the number of requests that were approved and denied. All 22 VISN formulary leaders completed and returned questionnaires.
VA’s national database on outpatient prescriptions contains information for each outpatient prescription filled at each VA medical center, including the drug prescribed, date of the prescription, patient and prescriber identifiers, medical center responsible for filling the prescription, and whether the prescribed drug is a national formulary drug. We used this database to develop a sample of VA health care providers who wrote prescriptions, determine the total number of outpatient prescriptions filled at VISNs and VA medical centers, determine the number of filled outpatient prescriptions written for national formulary drugs within a certain time frame, and determine how many VISN formulary drug prescriptions were filled in the three VISNs where we performed site visits. We interviewed PBM headquarters officials who had either oversight or maintenance responsibility for the database to help assess the validity and reliability of the outpatient prescription data. We also performed our own analytic checks of the data. We found that data critical to our analysis—the data field indicating whether a prescription had been written for a national formulary drug—contained errors. We worked with PBM officials to correct the data, and they implemented a monthly routine to detect and correct these errors in the future. We reran our data checks, verified that the database had been corrected, and concluded that the data were acceptable for the purposes of our work. To assess formulary implementation at the local level, we interviewed medical and administrative staff at three different VA medical centers—one located in Biloxi, Mississippi (VISN 16); one in Gainesville, Florida (VISN 8); and one in Omaha, Nebraska (VISN 14). We selected these VISNs and medical centers on the basis of formulary drug use from October through December 1999, the period for which the most recent and complete data were available at the time we did our work. For example, VISN 8 had the highest percentage of prescriptions for national formulary drugs (93 percent), VISN 16’s percentage of national formulary drug prescriptions was at the national average (90 percent), and VISN 14 had the lowest percentage of prescriptions filled using national formulary drugs (88 percent). We mailed questionnaires to a representative sample of 2,000 VA health care prescribers whose prescriptions had been dispensed from October 1 through December 31, 1999, to obtain their opinions and experiential data on various aspects of VA’s national formulary. We drew this random sample from VA’s most recent national outpatient prescription database—a data file that contains information, including a prescriber identifier, on all outpatient prescriptions filled in the VA health care system. We mailed questionnaires to the entire sample of prescribers on April 17, 2000, with follow-up mailings on May 17 and June 21 to those who had not responded by those dates. We accepted returned questionnaires through September 1, 2000. Some prescribers’ responses indicated that they did not write prescriptions for drugs; their prescription privileges were limited to medical and surgical supplies, such as diabetic strips and food supplements. Other returned questionnaires indicated that the addressee had either left or retired from VA. These providers were thus considered ineligible for our purposes and were removed from the sample. 
Approximately 11 percent of the questionnaires were returned as undeliverable, and we received no response from approximately 16 percent of those to whom we mailed questionnaires. After adjusting the sample accordingly, we determined the number of usable returned questionnaires to be 1,217—a response rate of about 69 percent. (See table 5.) Because this was a simple random sample, we believe that our results are projectable to all of VA’s health care providers who have outpatient drug prescribing privileges. Surveys based on a sample are subject to sampling errors. Sampling error represents the extent to which a survey’s results differ from what would have been obtained had everyone in the universe of interest received and returned the same questionnaire—in this case, all VA health care providers who have outpatient drug prescribing privileges. Sampling errors have two elements: the width of the confidence interval around the estimate (sometimes referred to as the precision of the estimate) and the confidence level at which the confidence interval is computed. The confidence interval reflects the fact that estimates actually encompass a range of possible values, not just a single value, or a “point estimate.” The interval is expressed as a point estimate, plus or minus some value. For example, in our questionnaire, we asked prescribers, “To what extent does your VA formulary contain the drugs you believe your patients need?” The percentage of respondents who reported a “great extent” or “very great extent” was 69.1. This particular question had a confidence interval of plus or minus 2.6 percentage points. Thus, the “true” answer for this question may or may not be 69.1 percent, but it has a high probability of falling between 66.5 and 71.7 percent (69.1-percent point estimate, plus or minus 2.6 percentage points). Confidence intervals vary for individual questions (depending upon how many of the individuals who could have answered a question actually did so), but, unless otherwise noted, all percentages presented in this report are within a range of plus or minus 3.5 percentage points. The confidence level is a measure of how certain we are that the “true” answer lies within a confidence interval. We used a 95-percent confidence level, which means that if we repeatedly took new samples of prescribers from the October through December prescription database and performed the same analysis of their responses each time, 95 percent of these samples would yield estimates that would fall within the confidence interval stated. In the previous example, this means that we are 95-percent certain that between 66.5 and 71.7 percent of prescribers believe that the VA formulary contains to a “great extent” or “very great extent” the drugs they believe their patients need. Surveys can also be subject to other types of systematic error or bias that can affect results, known as nonsampling errors. One potential source of nonsampling error can be the questionnaire itself. To ensure that questions were clear and unbiased, we consulted with subject matter and questionnaire experts within GAO and obtained comments from individuals representing VA’s PBM and medical advisory panel, a working group of 11 practicing VA physicians and 1 practicing Department of Defense physician who help manage VA’s national formulary, as well as individuals representing the Institute of Medicine.
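The sampling error arithmetic above can be reproduced with the standard formula for the margin of error of a proportion from a simple random sample. The Python sketch below is a minimal illustration; it assumes that all 1,217 respondents answered the question and applies no finite population correction, both simplifying assumptions, so the result only approximates the interval computed for the report.

    import math

    def margin_of_error(p, n, z=1.96):
        # Half-width of a 95-percent confidence interval for a sample proportion.
        return z * math.sqrt(p * (1 - p) / n)

    p = 0.691    # 69.1 percent answered "great extent" or "very great extent"
    n = 1217     # usable returned questionnaires
    moe = margin_of_error(p, n)
    print(f"95% confidence interval: {p - moe:.1%} to {p + moe:.1%}")
    # Prints roughly 66.5% to 71.7%, matching the interval reported above.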
Finally, the questionnaire was tested with 14 VA prescribers in VA medical centers in four locations: Phoenix, Arizona; Washington, D.C.; Hampton, Virginia; and Cincinnati, Ohio. Prescribers were asked to provide demographic and VA employment information as well as opinions about the relevance and usefulness of VA’s formulary. On average, VA prescribers in our sample have worked for VA for 11 years, with most of those years at their current medical facility. Physicians and nurses constitute the largest groups of prescribers (65 and 15 percent, respectively), followed by physician assistants (7 percent) and other allied health professionals, such as dentists (14 percent). Most of the prescribers’ time working in VA is spent treating patients—on average, 26 hours each week. According to the national prescription file from which we drew our sample, VA prescribers who completed our questionnaire averaged 849 prescription fills from October through December 1999, the 3-month period we chose as the basis of our survey. The median number of filled prescriptions was relatively low—252—because a few prescribers had a large number of prescriptions filled during the period, while many prescribers had only a few prescriptions filled. George Poindexter, Stuart Fleishman, Mike O’Dell, and Kathie Kendrick made key contributions to this report.
During the past 3½ years, the Department of Veterans Affairs (VA) has made significant progress in establishing its national drug formulary, which has generally met with prescriber acceptance. Most veterans are receiving the drugs they need and rarely register complaints about prescription drugs. However, VA’s oversight has not been sufficient to ensure that the Veterans Integrated Service Networks (VISN) and medical centers comply with formulary policies and that the flexibility given to them does not unduly compromise VA’s goal of formulary standardization. Contrary to VA formulary policy, some facilities omitted national formulary drugs or modified the closed drug classes. Although adding a limited number of drugs to supplement the national formulary is permitted, formulary differences among facilities are likely to become more pronounced as more drugs are added by VISNs, decreasing formulary standardization. VA recognizes the trade-off between local flexibility and standardization, but it lacks criteria for determining the appropriateness of adding drugs to supplement the national formulary and therefore may not be able to determine whether the resulting decrease in standardization is acceptable.
You have requested that we investigate these relationships more comprehensively and that we set the discussion of car size and safety into the larger context of the relative contributions to highway safety of driver attributes, vehicle characteristics, and their multiple interactions. Our response to your request involved an investigation into two distinct, and sometimes highly divergent, aspects of highway safety: crash involvement and crashworthiness. The study of crash involvement focuses attention on the factors likely to produce a crash. Crashworthiness, instead, examines the factors likely to produce serious injury, once a crash has occurred. The present report deals with crash involvement—that is, with the driver or vehicle characteristics that are related to the likelihood of a crash. The attributes we examined included driver age, gender and driving history, vehicle age and size (weight, wheelbase, and engine displacement). A companion report will examine crashworthiness: the factors that affect the likelihood of serious injury once a crash has taken place. A third report will examine the relationship between automobile crashworthiness and crash testing performed by the Department of Transportation. In the present analysis, we have used a method known as “induced exposure” to estimate the likelihood of crash involvement. This approach assumes that not-at-fault drivers in two-vehicle accidents represent a random selection of drivers and vehicles on the road. The ratio of at-fault to not-at-fault drivers provides a measure of the relative involvement of drivers and vehicles in accident causation. We used a data base containing 340,000 records, with details on accidents reported in North Carolina in 1990, to produce ratios of at-fault to not-at-fault North Carolina drivers. Since these findings are based on data from only one state, they cannot be generalized to the nation. However, we did compare the North Carolina ratios to ratios we obtained from a Michigan data base and found the figures to be close and the trends quite similar. This finding is consistent with the logic of induced exposure—concerned with ratios of driver and vehicle characteristics rather than their absolute numbers—and suggests that the method may produce results that have more general applicability. (See appendix I for a discussion of the induced exposure approach and appendix II for descriptive statistics from North Carolina.) Any investigation of crash involvement must include more than counts of units (vehicles or drivers). In order to calculate the relative odds of being in a serious crash, it is necessary (but not sufficient) to compute, for example, how many 1989 Ford Tauruses or how many 16-year-old males are involved in serious crashes in a given time period. Without knowing how many Tauruses or 16-year-old male drivers are on the road, we cannot conclude whether these cars or these drivers are more or less likely than other cars or drivers to be involved in crashes. We must, in other words, know their exposure to crashes. For example, consider that it is generally well known that, in absolute terms, elderly drivers are involved in fewer serious crashes than younger drivers. But they also drive fewer miles, and under less hazardous conditions, than younger drivers. In absolute terms, therefore, elderly drivers pose a rather small highway safety problem. 
When their relative exposure is considered, however, it turns out that, for the miles they drive, elderly persons are disproportionately involved in collisions, particularly two-vehicle collisions. Crash exposure can be estimated in a number of ways. Vehicle exposure in a given year is frequently measured by the number of vehicles registered. Thus, in our previous report, we tracked the number of fatalities per 100,000 registered vehicles for different weight classes of cars. Driver exposure can also be represented by a single count of the number of licensed drivers in various categories (for instance, age groups or geographic regions). Such direct measures of exposure have serious limitations, however. While we may know how many vehicles of a certain type are registered, we do not know how many miles (if any) and under what conditions they are driven or by whom they are driven. If large cars are driven more miles, and under more dangerous conditions, an estimate of crash involvement based simply on the number of crashes per registered vehicle or even—if such data were available—on crashes per mile driven would underestimate their exposure and their safety. For this reason, some researchers have turned to methods of estimating exposure indirectly. For example, some calculate crash rates from a crash data base as the ratio of at-fault to not-at-fault drivers of a certain type (say, young females), arguing that the not-at-fault drivers serve as a representative sample of drivers on the road—or “exposed”—under the conditions represented by the data base. This method has the practical advantage of allowing exposure estimates to be derived from the same data base as the count of crashes and, arguably, the strategic advantage of being more sensitive to the variations of driver and vehicle characteristics than is possible with direct measures (see appendix I). For this study, we employed such an indirect or “induced exposure” method. We applied this method to the police-reported crash data base of North Carolina for 1990 that was provided to us by researchers at the University of North Carolina Highway Safety Research Center. This data base contains information on 183,616 crashes involving 484,258 individuals and 325,277 vehicles. We supplemented the crash data base by merging with it information on the drivers’ history of previous traffic violations. We performed separate logistic regression analyses of crash involvement corresponding to three types of crashes (two-vehicle, single-vehicle rollover, and single-vehicle nonrollover) and two types of vehicles—(1) passenger cars and (2) light trucks and vans. Sixty-six percent of the crashes in our analysis involved two vehicles, 29 percent were single-vehicle nonrollovers, and 5 percent were rollovers. (Although rollovers accounted for only a small proportion of crashes, this type of crash is second only to frontal impacts in terms of deaths and injury severity.) Sixty-eight percent of crashes involved passenger cars, 11 percent involved light trucks and vans, and 21 percent were between cars and light trucks and vans. Appendix III presents the details of the analyses of passenger cars, appendix IV the light truck and van results. We present the main points here, first for passenger cars and, then, more briefly, for light trucks and vans. We found no straight-line relationship between a driver’s age and crash involvement. In general, drivers under 25 were at greatest crash risk, followed by drivers over 65. 
The relationship was not the same for all crash types. A 16-year-old driver was over seven times more likely to be in a single-vehicle rollover crash, over five times more likely to be in a single-vehicle nonrollover crash, and more than twice as likely to be in a two-vehicle crash as was the safest driver overall—a 45-year-old. Drivers least likely to be in a single-vehicle rollover crash were 62-year-olds. They were only one-tenth as likely to be in such a crash as 16-year-olds. However, drivers in their mid-70s were about as likely as the 16-year-olds to become involved in a two-vehicle collision. As a driver’s age approached 80 years, the likelihood of involvement in a two-vehicle collision increased sharply. This was not true, however, of single-vehicle crashes. Elderly drivers were more likely to be involved in single-vehicle nonrollovers than 40-year-olds only after age 74 and in single-vehicle rollovers only after age 86. Figure 1 summarizes the effects of age by comparing each age’s odds of crash involvement in each crash type with those of a 40-year-old. Driving history was a strong predictor of crash involvement for two-car and single-car crashes, ranking second only to driver age. A history of alcohol-related convictions was a particularly powerful predictor. For example, drivers with histories of nonalcohol traffic violations were only 1.15 times as likely to be involved in a single-car nonrollover crash as drivers with a “clean” history. However, drivers with a history of drunk driving were at least 3.7 times as likely as other drivers to be involved in such a crash. For two-car crashes, driving history was also a significant but less powerful predictor. Drivers with prior alcohol violations were 2.1 times as likely to be involved in a two-vehicle collision as drivers with no prior violations and 1.6 times as likely as drivers with nonalcohol violations. As noted earlier, driver gender affected the likelihood of involvement in single-vehicle crashes only. Males were twice as likely as females to be involved in either type of single-vehicle crash. Female drivers were indistinguishable from male drivers in their likelihood of being involved in a two-car collision. We introduced the age of vehicles into our model as a way of correcting for the possibility that we might confuse the safety effect of vehicle size with that of its condition. As our earlier report found, passenger cars have become, on the average, much lighter than they were in the 1970’s. Heavier cars, therefore, are more likely to be older cars and, presumably, to be in poorer condition. An analysis that did not control for this association would be in danger of overestimating the crash involvement of heavy cars. Half of the cars in our data base were model year 1984 or newer, and 80 percent were built after 1978. We found that, regardless of size, newer cars were slightly less at risk for crash involvement. For example, if one car were 5 years older than another, the older car would have a risk 1.12 times that of the newer. We cannot tell, however, whether this difference stems from the deteriorated condition of the older car or the improved design of the newer car. It should also be noted that (as the Department of Transportation (DOT) pointed out in its comments on a draft of this report) vehicle age may capture the effect of more than simply vehicle characteristics. Older cars may have more aggressive drivers and are more likely to be found in rural settings.
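The vehicle age finding also illustrates how logistic regression results translate into the ratios quoted throughout this report. The short Python sketch below treats the reported 1.12 ratio for a 5-year age difference as an odds ratio, as is natural for a logistic model; the per-year coefficient it derives is implied by that ratio and is not taken from the fitted model itself.

    import math

    # Implied per-year coefficient if 5 extra years of vehicle age
    # multiply the odds of crash involvement by 1.12.
    beta = math.log(1.12) / 5            # about 0.0227 per year

    for gap in (1, 5, 10):
        odds_ratio = math.exp(beta * gap)
        print(f"{gap:>2} years older -> odds ratio {odds_ratio:.2f}")
    # A 10-year age difference implies odds about 1.25 times higher.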
Car size can be expressed by different measures: wheelbase (the distance between the front and rear axles), track width (the distance between the left and right wheels), engine size, weight, and so on. Because all these variables tend to be very highly correlated with one another, it is frequently difficult to distinguish statistically their unique effects. It seems reasonable to believe that each of these factors has a differential effect on the likelihood of being involved in different types of crashes. In one research report, for example, the National Highway Traffic Safety Administration (NHTSA) found that a combination of track width and center of gravity was the best predictor of vehicle rollover. We developed three sets of models corresponding to the three measures of car size readily available to us: wheelbase, weight, and engine displacement. The full results of these analyses are in appendix V. Here we are concerned with the relationship between car weight and crash involvement. Figure 2 summarizes this relationship by comparing the odds for cars of different weights with the odds for a 2,678-pound car (the median car weight in the sample) of being in a crash (for each of the three crash types). For each crash type, weight had a statistically significant effect, but the effect was quite small for two-vehicle crashes and for single-vehicle crashes where a rollover did not occur. The odds ratio curves for these two crash types are almost mirror images of each other. The lightest and the heaviest cars were slightly more likely to be involved in two-vehicle crashes than were midweight cars and slightly less likely to be involved in single-vehicle nonrollover crashes. The connection between car weight and rollover crashes, however, was substantially stronger. The lighter the car, the greater were its odds of rolling over. For example, the average 2,000-pound car was nearly three times as likely to be involved in a single-vehicle rollover crash as the average 4,500-pound car. This finding needs some qualification. Factors other than car weight are probably more directly related to rollover propensity but, as we noted earlier, the high intercorrelation of the various measures of car size makes the relationships difficult to disentangle statistically. When we used car size measures other than weight in our analyses, we found a stronger connection with rollover likelihood for wheelbase than for weight. (See appendix V.) Research by NHTSA has demonstrated that rollover propensity is related to several other vehicle factors, such as track width, weight distribution, and braking stability. Our analysis of the crash involvement probability of light trucks and vans yielded many of the same findings as our analysis of passenger cars. Full details of the analysis are presented in appendix IV. Driver age and driving history remained by far the best predictors of crash involvement. Drivers involved in single-vehicle light truck crashes were one-and-a-quarter to one-and-a-third times more likely to be male. However, whereas for passenger cars driver gender appeared to be irrelevant to involvement in two-vehicle collisions, women were slightly but significantly more likely to be involved in a light truck two-vehicle collision than were men. As with passenger cars, the vehicle factors were much less important than the driver factors. However, older light trucks were significantly more likely to be involved in all types of crashes.
The relationship between light truck weight and crash involvement was weaker than for passenger cars. We found no relationship in two-vehicle crashes and only a marginally significant relationship in single-vehicle crashes. The connection between light truck weight and crash involvement was relatively strongest for rollover crashes. As with passenger cars, the lightest of these vehicles were more likely to roll over. However, all three alternative measures of size again contributed relatively little to predictions of crash involvement, and vehicle weight ranked either second or third among the size measures in all light truck models. (See appendix V.) The use of indirect methods to estimate the risk of being involved in a highway crash, variously referred to as “induced” or “quasi-induced” exposure, dates back at least to the 1960’s. The method is based on calculating the ratio of at-fault drivers or vehicles to not-at-fault drivers or vehicles in two-vehicle accidents contained in police accident reports. Its underlying assumption is that the not-at-fault drivers and vehicles constitute a representative sample of the drivers, vehicles, and driving conditions and their interactions for the geographical area being examined. On the assumption that not-at-fault drivers represent the general population of drivers, the ratio of at-fault to not-at-fault drivers yields an estimate of the over- or underinvolvement of different levels of any given dimension (such as driver gender or age) in highway crashes. R. W. Lyles et al. offer an example of estimating how much male drivers are overrepresented in interstate highway accidents. In 1988, 11,335 pairs of drivers were involved in two-vehicle accidents in which fault was assigned on Michigan interstate highways. Of the at-fault drivers, 8,366 (73.8 percent) were male, whereas only 7,528 (66.4 percent) of not-at-fault drivers were male. Males were 1.1 times (73.8/66.4) overinvolved in interstate accidents relative to their presence on these highways. Females, however, represented 26.2 percent of at-fault drivers and 33.6 percent of not-at-fault drivers. Their “involvement ratio,” therefore, was 0.78 (26.2/33.6). Lyles et al. conclude, therefore, that when the calculation is adjusted for exposure, males caused interstate highway accidents at a rate 1.4 (1.1/0.78) times that of females. This indirect approach has a number of advantages. Foremost among them is the ability to define accident exposure in terms of any driver, roadway, or vehicle characteristic reported in the accident data base being used. For example, given a sufficiently large data base, a researcher could estimate the crash involvement risk of female drivers under 25 years of age on rural roads in the dark and could determine whether female drivers are more likely than males to become involved in accidents under such conditions. Any attempt to measure exposure directly, at this level of detail, would be prohibitively expensive. Two major uncertainties are associated with induced exposure measures, however. The first is common to any state or regional data base and involves whether the data being used to form estimates adequately represent other geographical areas and, hence, the universe to which they are being extrapolated. In the case of induced exposure, the concern is not that the absolute count within subcategories of drivers or vehicles may vary from state to state—that is, that there might be, for example, more light trucks or more elderly drivers in one state than another. (This is most likely the case.)
Rather, we are concerned with the ratios of at-fault to not-at-fault drivers, however the absolute counts may vary geographically. It is assumed that these ratios would be less prone to substantial variation from one area to another. In other words, it is much less probable that light trucks or elderly drivers have different driving-related attributes from one state to another than that their numbers vary geographically. Nevertheless, to test the seriousness of this concern, we compared by age and gender the accident involvement ratios we obtained from the 1990 North Carolina data base we used for our study with ratios we calculated from a data base of police-reported accidents in Michigan in 1987. In both cases, we looked strictly at two-vehicle accidents in which only one driver was considered at fault. The results are presented in table I.1. While there is not absolute agreement between the involvement ratios derived from the two data bases (the greatest discrepancy being between the results for the oldest, male drivers), the figures are quite close and the trends are remarkably similar. The second uncertainty associated with the use of induced exposure is potentially more serious and is less easily tested. Threats to the validity of these estimates have been suggested, in particular the possibility of systematic bias. It is possible that certain driver or vehicle types are more likely to be identified by police as being at fault in a two-vehicle accident. For example, in an ambiguous situation the police may be more inclined to place blame on a young driver. On a different plane, it is possible that the not-at-fault driver in a two-car accident is not totally without blame. This driver’s ability to avoid accidents may be less than average; hence, he or she may be more “accident-prone.” To the extent that accident-proneness exists, the not-at-fault population less than perfectly represents the universe of drivers and vehicles. The existence of such a bias cannot be tested directly, but indications of whether its existence effectively distorts the estimates derived from induced exposure methods can be tested both by comparisons internal to the data base and by comparing the estimates with those derived from direct exposure measurements. Lyles et al. used internal tests to determine whether discernible bias entered their estimates. They reasoned that, if not-at-fault drivers represent drivers on the road, we should find variations in their characteristics that are related to different driving conditions. For example, we know from direct observation that drivers on major freeways are less likely to be female than drivers on more local roads. Induced exposure findings should be consistent with observation and, in fact, Lyles et al. found that 63 percent of not-at-fault drivers on U.S.-numbered routes in Michigan were male, as opposed to 57 percent on local streets. Furthermore, male at-fault drivers should strike approximately the same proportion of male not-at-fault drivers as do female at-fault drivers. This turned out to be the case. On U.S.-numbered routes, male at-fault drivers struck male drivers 63 percent of the time and females 37 percent of the time. Female at-fault drivers struck male drivers 62 percent of the time and females 38 percent of the time.
Lyles et al. offer a series of similar crosschecks for the mutual independence of the at-fault and not-at-fault populations across a variety of conditions, including different roadway types, years, times of day, and driver age categories. While there were wide variations in the distribution of driver characteristics among different driving conditions, different subsets within the same condition yielded nearly identical estimates of exposure. Comparisons with estimates formed from direct exposure methods are less straightforward. We can, for example, obtain the distribution of licensed drivers by gender from any state. However, we know that this provides a biased estimate of drivers on the road. Using the Department of Transportation’s (DOT’s) 1990 National Personal Transportation Survey (NPTS), we found that over 15 percent of all women licensed drivers over age 75 had not driven at all in the previous year (as contrasted with less than 1 percent of licensed women drivers between 25 and 34). NPTS itself is perhaps our most comprehensive direct exposure source, and it is particularly valuable in discerning trends in travel habits in the United States over time. Yet, besides being subject to the weaknesses of human recollection, it is relatively insensitive to the quality of driving exposure. While its estimates of miles driven by respondents may be quite reliable, it cannot estimate the portion of miles driven under different conditions to the level of detail that, arguably, an induced exposure method can. Nevertheless, comparisons of induced exposure results with those derived from more direct methods are informative. Accordingly, we compared our estimates of age and gender distribution with those from NPTS. The comparisons are presented in table I.2 in terms of the percentage of vehicle miles driven (from NPTS) and the percentage of exposure to accidents as derived from our data. A comparison of the estimates of vehicle miles driven and of accident exposure illuminates the differences between the two measures. Put simply, not all miles are equal. It has been demonstrated that men tend to drive substantially more freeway miles than women and that freeway miles are the safest of all miles driven. These considerations are reflected in the substantial difference between the overall gender distribution estimates derived from the two measures. While men may drive nearly twice as many miles as women (65 percent versus 35 percent of all miles; see table I.2), these are more often highway miles and, thus, are substantially safer, with the result that their accident exposure is only moderately higher than women’s using the induced exposure approach. Similarly, young drivers drive fewer miles than middle-aged drivers, but their miles are considerably more dangerous both because of their timing (nights and weekends) and because of driver inexperience and risk-taking behavior. These relationships are shown in table I.2: using the induced exposure method, men age 45-54 represent 6 percent of the population at risk while men under 25 have more than twice that exposure (14 percent), whereas using NPTS the percentages are only slightly different. In summary, the induced exposure method offers a means of estimating the relative risks of different types of drivers, vehicles, and driving conditions at a level of refinement that cannot be approximated in practice by any direct measurement technique. It yields summary estimates of exposure that differ from the global estimates of direct measures such as vehicle miles traveled, but the differences appear to be reasonable in view of the larger number of factors taken into consideration by the induced exposure method. Its estimates of relative risk appear to be quite stable across different geographic, driver, vehicle, and roadway conditions. In classic measurement terms, while the method’s predictive validity has not been empirically demonstrated, evidence exists to support its reliability and construct validity. Its practical utility is beyond question.
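The core computation behind these estimates is simple. The following Python sketch reproduces the Michigan interstate example from Lyles et al. discussed above; the percentages are those reported in the text.

    def involvement_ratio(pct_at_fault, pct_not_at_fault):
        # Share of a group among at-fault drivers divided by its share
        # among not-at-fault drivers (the induced measure of exposure).
        return pct_at_fault / pct_not_at_fault

    males = involvement_ratio(73.8, 66.4)     # about 1.11
    females = involvement_ratio(26.2, 33.6)   # about 0.78
    print(f"male involvement ratio:   {males:.2f}")
    print(f"female involvement ratio: {females:.2f}")
    print(f"male rate relative to female: {males / females:.1f}")  # about 1.4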
The data set for our analysis was created from data tapes, provided by North Carolina’s Division of Motor Vehicles, containing information on accidents in North Carolina for calendar year 1990. The information is derived from accident report forms filled out by investigating officers at accident scenes. The Highway Safety Research Center at the University of North Carolina added technical information concerning vehicles (such as vehicle weight, wheelbase, and engine size), which was obtained by decoding the vehicle identification numbers recorded on the accident forms. We also merged information, collected by the Division of Motor Vehicles, on drivers’ violation histories. The data file contained one record for each individual unit (vehicle, pedestrian, bicyclist, and so on) involved in the accident. Table II.1 provides counts of the types of individual records that were contained in the North Carolina file. Table II.2 provides the distribution of accident types in the data base. For the purposes of the current study, an accident was considered a single- or two-vehicle accident on the basis of the count of the number of in-motion, motorized vehicles involved. We excluded the accident category labeled “Other” in table II.2, which may contain single- or two-vehicle accidents if the type of vehicle involved was not reported or was a heavy truck, bus, or farm vehicle. The “Other” category also contains accidents with three or more in-motion vehicles. The outcome variable for the logistic regression equations was a dichotomous indicator of fault, coded “1” for at-fault and “0” for not-at-fault drivers. For all the equations presented here, for both single- and two-vehicle accidents, the comparison group is not-at-fault drivers in two-vehicle accidents. A driver was considered at fault if the investigating police officer checked one or more violations in the checklist provided on the North Carolina accident report form. (In two-vehicle accidents, cases were excluded if no violation was reported for either driver or if both drivers had violations.) The predictor variables were driver age, including a squared term to capture the curvilinear relationship between age and accident involvement; driver gender, with males coded “1” and females coded “0”; driver violation history, with four mutually exclusive categories: no previous traffic violations, one or more previous violations not involving alcohol, at least one alcohol violation (may also include nonalcohol violations), and violation history unknown (all out-of-state drivers and some North Carolina drivers are in this category); vehicle age, the last two digits of the vehicle model year; and vehicle curb weight, expressed in hundreds of pounds and including a squared term to capture the curvilinear relationship between vehicle weight and accident involvement. In the models shown, the three violation history categories given are in contrast to the group having alcohol-related violations. (The tables in appendix III report, for each crash type, the coefficient, standard error, and probability for each predictor; the contrast group for violations is “Has alcohol violation,” and vehicle weight is calibrated in hundredweights. The tables in appendix IV, for light trucks and vans, follow the same layout and use the same outcome and predictor definitions.)
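To make the model specification concrete, the sketch below shows how an equation of this form could be fitted in Python with the statsmodels package. The file name and column names are hypothetical placeholders, not those of the actual North Carolina file, and a separate model of this form would be fitted for each crash type and vehicle type.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # at_fault is 1 for at-fault drivers and 0 for the comparison group of
    # not-at-fault drivers in two-vehicle accidents; weight_cwt is curb
    # weight in hundreds of pounds. Column names are illustrative only.
    drivers = pd.read_csv("nc_crash_drivers_1990.csv")

    formula = (
        "at_fault ~ age + I(age**2) + male"
        " + C(violations, Treatment(reference='alcohol'))"
        " + vehicle_age + weight_cwt + I(weight_cwt**2)"
    )
    fit = smf.logit(formula, data=drivers).fit()
    print(fit.summary())        # coefficient, standard error, and probability
    print(np.exp(fit.params))   # the same coefficients expressed as odds ratios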
Models containing three alternative definitions of vehicle size were fitted to the data. These were vehicle weight (in hundreds of pounds), engine size (displacement expressed in cubic inches), and wheelbase (the distance between the axles in inches). To allow comparisons of their relative importance in predicting crash involvement, the improvement in goodness of fit for each model over the base model (including driver age, violation history, gender, and vehicle age) is presented in tables V.1 and V.2. As noted in the text, (1) with the exception of single-car rollovers, contributions to the model, though statistically significant in most cases, are small in comparison to most variables in the base model, and (2) different definitions are stronger depending upon the crash type being predicted. The following are GAO’s comments on DOT’s June 20, 1994, letter. 1. We share with DOT the belief that every reasonable effort should be made to reduce the incidence of rollover crashes, which, as we noted in our report, are second only to frontal collisions in deadliness. We would be as concerned as DOT if our conclusion, that car size is substantially less predictive of rollover crashes than are driver characteristics, were misinterpreted to diminish the importance of efforts to reduce the rollover propensity of vehicles. The analyses we performed differ significantly from NHTSA’s rollover research, but our conclusions are not in conflict. Our concern was with the relative contribution of car size to crash involvement. We examined this relationship in a general model that combined crash types and then separately, using the traditional analytic taxonomy of multiple vehicle, single-vehicle nonrollover, and single-vehicle rollover crashes.
NHTSA, in contrast, attempted to identify the factors that differentiated single-vehicle rollover from nonrollover crashes. The vehicle factors it examined included a number of constructs derived from laboratory measurements, such as tilt table ratio, side pull ratio, and critical sliding velocity. The single area of overlap between the NHTSA analyses and ours was in the inclusion of wheelbase in NHTSA’s models and in one of our models. It is not surprising, therefore, that we arrived at different conclusions regarding the importance of different vehicle characteristics relative to driver characteristics. Nevertheless, our findings also support the relatively greater importance of vehicle characteristics in rollover crashes than in other crash types. We found that lighter vehicles were more likely to be involved in single vehicle rollovers. We further found that wheelbase was a better predictor of rollover crashes than weight. 2. DOT made two suggestions for additional analyses to supplement our single-vehicle rollover model. First, agency researchers suggested that we include in our model some roadway characteristics that were beyond the scope of the research originally requested. They also suggested that we treat all single-vehicle crashes as one crash type and then perform a second-level analysis to identify the factors that distinguish between rollover and nonrollover crashes. We performed these analyses and concluded that, while they provided important additional information about the dynamics of rollover crashes, they did not substantially alter our conclusions about the relative importance of the driver and vehicle characteristics we examined. To respond to the first suggestion, we added two roadway variables to all our models: whether the roadway was curved or straight and whether the crash occurred in a rural or urban setting. The results of these analyses are provided in appendix VII (passenger cars) and appendix VIII (light trucks and vans). As anticipated, these roadway characteristics generally contributed significantly to the predictive power of the models. Single-vehicle crashes (rollover and nonrollover) are more likely to occur on rural and on curved roadways. Two-vehicle car crashes are less likely to occur on rural roads. The addition of these predictors, however, did not change the predominant importance of driver factors over vehicle weight in predicting crash involvement. We also constructed a model combining both types of single-vehicle accidents. We included the results of this model in appendixes VII and VIII. As DOT anticipated, the single-accident model yielded results consistent with our earlier findings, that driver age and violation history play the strongest roles in involvement in single vehicle crashes, with little contribution from the various vehicle characteristics. The aggregate model, however, also finds the contribution of weight to single-vehicle accidents nonsignificant. This is the net effect of the opposing influence of weight in rollover and nonrollover accidents. As our original analyses demonstrated, heavier cars are more likely to be involved in nonrollover crashes and less likely to be involved in rollovers. We constructed a second set of models (one each for passenger cars and for light trucks and vans) to distinguish between rollover and nonrollover single-vehicle accidents. The results of these analyses are presented in appendix IX. 
The model produced the results anticipated by DOT—namely, that roadway characteristics and, to a lesser extent, vehicle weight are better predictors of whether a single-vehicle accident involves a rollover than the driver characteristics in our model, although the analysis did find that younger drivers were significantly more likely to be in a rollover than a nonrollover crash. Like the analysis, the interpretation of the models must be in two stages. The findings suggest that driver characteristics predominate over vehicle characteristics in placing a vehicle in a likely single-vehicle crash situation. Whether the resultant crash (if one does occur) involves a rollover is determined more by roadway and vehicle considerations. 3. While DOT considers induced exposure an “excellent” method for measuring involvement risk in two-vehicle crashes, it expressed some cautions about its use for single-vehicle crashes. In particular, DOT suggested that the mix of light truck types (pickups, vans, and sport-utility vehicles) is different in urban and rural settings, and therefore three different light truck analyses should be performed. We agree that such analyses could provide valuable information, but we believe that they would unnecessarily expand the scope of this report. We reviewed the relative likelihood of fatal accidents in different types of light trucks and vans in a previous report. The additional analyses we performed, at DOT’s suggestion, that control for the urban and rural difference also address this concern. (See appendixes VII and VIII.) 4. DOT further suggested that we perform additional tests of the ability to generalize from the induced exposure method by comparing results from a larger number of state accident data bases. Our comparison of accident involvement ratios in North Carolina and Michigan (the two usable data bases readily available to us) was intended only to illustrate the relative consistency and reasonableness of the results obtained from applying this methodology. Many more such comparisons will need to be made before the exact parameters of the method’s applicability can be defined. Nevertheless, the results obtained by different researchers over the years from this approach to defining exposure are a strong argument for its general utility. DOT also suggested we include some additional details concerning our analyses and references to other related work performed by NHTSA and other researchers. We have incorporated these suggestions where appropriate. 5. This statement has been changed in the text. See page 2. The outcome variable for the logistic regression equations reported in appendixes VII and VIII was a dichotomous indicator of fault, coded “1” for at-fault and “0” for not-at-fault drivers. For all the equations presented here, for both single- and two-vehicle accidents, the comparison group is not-at-fault drivers in two-vehicle accidents. A driver was considered at fault if the investigating police officer checked one or more violations in the checklist provided on the North Carolina accident report form. (In two-vehicle accidents, cases were excluded if no violation was reported for either driver or if both drivers had violations.)
The predictor variables were those of the earlier models: driver age, including a squared term to capture the curvilinear relationship between age and accident involvement; driver gender, with males coded “1” and females coded “0”; driver violation history, with four mutually exclusive categories (no previous traffic violations, one or more previous violations not involving alcohol, at least one alcohol violation, and violation history unknown), with the three categories shown in contrast to the group having alcohol-related violations; vehicle age, the last two digits of the vehicle model year; and vehicle curb weight, expressed in hundreds of pounds. Two roadway variables were added: rural location, coded “1” for rural locations and “0” for mixed or urban locations, and curved roadway, coded “1” if curved and “0” otherwise. In the appendix VII models, vehicle curb weight includes a squared term to capture the curvilinear relationship between vehicle weight and accident involvement; in the appendix VIII models for light trucks and vans, weight enters without the squared term. Where the tables so note, a quadratic term was removed because the main effect achieves significance only without the squared term. For the models in appendix IX, the outcome variable was a dichotomous indicator of vehicle rollover, coded “1” for rollovers and “0” for nonrollovers; the predictor variables, including the roadway characteristics, were the same as above. (The tables in appendixes VII through IX report the coefficient, standard error, and probability for each predictor.)
Robert E. White, Assistant Director; Beverly A. Ross, Project Manager; Martin T. Gahart, Project Adviser.
Pursuant to a congressional request, GAO examined the factors that contribute to vehicular crashes, focusing on: (1) drivers’ age, gender, and driving history; and (2) vehicle size and age. GAO found that: (1) driver characteristics far outweigh vehicle factors in predicting vehicular crashes; (2) drivers who are younger, male, and have a history of traffic violations, particularly alcohol violations, are more likely to be involved in single-vehicle crashes; (3) drivers 65 and older are the second most likely group to be involved in crashes; (4) older vehicles are slightly more at risk for crash involvement, but other factors that are linked to vehicle age may also play a part in older cars’ involvement in crashes; (5) light cars are nearly three times more likely to be involved in single-vehicle rollover crashes than heavy cars; (6) a car’s wheelbase is a better predictor of rollover involvement than its weight; and (7) light trucks and vans show a similar crash-involvement pattern, although the effects of driver gender differ somewhat from those for passenger cars.
Medicare covers SNF care for beneficiaries who need daily skilled nursing care or therapy for conditions related to a hospital stay of at least 3 consecutive calendar days, if the hospital discharge occurred within a specific period—generally, no more than 30 days—prior to admission to the SNF. For qualified beneficiaries, Medicare will pay for medically necessary SNF services, including room and board; nursing care; and ancillary services, such as drugs, laboratory tests, and physical therapy, for up to 100 days per spell of illness. In 2002, beneficiaries are responsible for a $101.50 daily copayment after the 20th day of SNF care, regardless of the cost of services received. Eighty-eight percent of SNFs are freestanding—that is, not attached to a hospital. The remainder are hospital-based. SNFs differ by type of ownership: 66 percent of SNFs are for-profit entities, 28 percent of SNFs are not-for-profit, and a small fraction of SNFs—about 5 percent—are government-owned. About three-fifths of SNFs are owned or operated by chains—corporations operating multiple facilities. To be a SNF, a facility must meet federal standards to participate in the Medicare program. SNFs provide skilled care to Medicare patients and usually also provide care to Medicaid and private pay patients. Medicare pays for a relatively small portion of patients cared for in SNFs—about 11 percent. Over 66 percent of SNF patients have their care paid for by Medicaid, and another 23 percent have their care paid for by other sources or pay for the care themselves. In the Balanced Budget Act of 1997 (BBA), the Congress established the PPS for SNFs. Under the PPS, SNFs receive a daily payment that covers almost all services provided to Medicare beneficiaries during a SNF stay, which is adjusted for geographic differences in labor costs and differences in the resource needs of patients. Adjustments for resource needs are based on a patient classification system that assigns each patient to 1 of 44 payment groups, known as resource utilization groups (RUG). For each group, the daily payment rate is the sum of the payments for three components: (1) the nursing component, which includes costs related to nursing as well as to medical social services and nontherapy ancillary services, (2) the therapy component, which includes costs related to occupational, physical, and speech therapy, and (3) the routine cost component, which includes costs for capital, maintenance, and food. The routine cost component is the same for all patient groups, while the nursing and therapy components vary according to the expected needs of each group. Before the 16.66 percent increase provided by BIPA took effect, the nursing component varied from 26 percent to 74 percent of the daily payment rate, depending on the patient’s RUG. In 2001, Medicare expenditures on SNF care were $13.3 billion. The 16.66 percent increase in the nursing component raised Medicare payments about $1 billion annually—about 8 percent of Medicare’s total annual spending on SNF care. The increase in the nursing component is one of several temporary changes made to the PPS payment rates since the PPS was implemented in 1998. The Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 (BBRA) raised the daily payment rates by 20 percent for 15 high-cost RUGs beginning in April 2000. BBRA also increased the daily rate for all RUGs by 4 percent for fiscal years 2001 and 2002. BIPA raised the daily payment rates by 6.7 percent for 14 RUGs, effective April 2001. This increase was budget neutral; that is, it modified BBRA’s 20 percent increase for 15 RUGs by taking the funds directed at 3 rehabilitation RUGs and applying those funds to all 14 rehabilitation RUGs. Two of these temporary payment changes, the 20 percent and 6.7 percent increases, will remain in effect until CMS refines the RUG system. CMS has announced that, although it is examining possible refinements, the system will not be changed for the 2003 payment year.
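The effect of raising only the nursing component can be illustrated with a simple calculation. In the Python sketch below, the component dollar amounts are hypothetical, since actual amounts vary by RUG, but the arithmetic shows how a 16.66 percent nursing increase translates into a smaller percentage increase in the total daily rate, consistent with the roughly 8 percent systemwide figure cited above.

    # Hypothetical daily payment components for one RUG, in dollars.
    nursing, therapy, routine = 120.00, 90.00, 60.00
    base_rate = nursing + therapy + routine            # $270.00 per day

    new_rate = nursing * 1.1666 + therapy + routine
    pct_increase = (new_rate / base_rate - 1) * 100
    print(f"nursing share of base rate: {nursing / base_rate:.0%}")    # about 44%
    print(f"overall daily rate increase: {pct_increase:.1f} percent")  # about 7.4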
The BIPA increase was budget neutral; that is, it modified BBRA's 20 percent increase for 15 RUGs by taking the funds directed at 3 rehabilitation RUGs and applying those funds to all 14 rehabilitation RUGs. Two of these temporary payment changes, the 20 percent and 6.7 percent increases, will remain in effect until CMS refines the RUG system. CMS has announced that, although it is examining possible refinements, the system will not be changed for the 2003 payment year.

In providing care to their patients, SNFs employ over 850,000 licensed nurses and nurse aides nationwide. Licensed nurses include RNs and LPNs. RNs generally manage patients' nursing care and perform more complex procedures, such as starting intravenous fluids. LPNs provide routine bedside care, such as taking vital signs and supervising nurse aides. Aides generally have more contact with patients than other members of the SNF staff. Their responsibilities may include assisting individuals with eating, dressing, bathing, and toileting, under the supervision of licensed nursing and medical staff. Several studies have shown that nursing staff levels are linked to quality of care.

The Social Security Act, which established and governs the Medicare program, requires that SNFs have sufficient nursing staff to provide nursing and related services to attain or maintain the highest practicable physical, mental, and psychosocial well-being of each patient, as determined by patient assessments and individual plans of care. More specifically, SNFs must have an RN on duty for at least 8 consecutive hours a day, 7 days per week, and must have 24 hours of licensed nurse coverage per day. SNFs also must designate an RN to serve as the director of nursing on a full-time basis, and must designate a licensed nurse to serve as a charge nurse on each tour of duty.

SNF staffing varies by type of facility and by state. Hospital-based SNFs tend to have higher staffing ratios than other SNFs. In 2001, hospital-based SNFs provided 5.5 hours of nursing time per patient day, compared with 3.1 hours among freestanding SNFs. Hospital-based SNFs also rely more heavily on licensed nursing staff than do freestanding facilities, which rely more on nurse aides. Staffing also differs by state—from 2 hours and 54 minutes per patient day in South Dakota in 2000 to 4 hours and 58 minutes per patient day in Alaska.

Many states have established their own nursing staff requirements for state licensure, which vary considerably. Some states require a minimum number of nursing hours per patient per day, while others require a minimum number of nursing staff relative to patients. Some states' requirements apply only to licensed nurses, while others apply to nurse aides as well. Some states also require an RN to be present 24 hours per day, 7 days per week. As of 1999, 37 states had nursing staff requirements that differed from federal requirements. Since 1998, many states have raised their minimum staffing requirements or have implemented other changes aimed at increasing staffing in nursing homes, such as increasing workers' wages or raising reimbursement rates for providers whose staffing exceeds minimum requirements.

While states have set minimum requirements for nursing staff, there are indications of an emerging shortage of nursing staff, particularly RNs, in a variety of health care settings. The unemployment rate for RNs in 2000 was about 1 percent—very low by historical standards.
As a result, SNFs must compete with other providers, such as hospitals, for a limited supply of nursing staff. According to associations representing the industry, nursing homes have had difficulty recruiting and retaining staff. The American Health Care Association (AHCA) reported vacancy rates for nursing staff in nursing homes for 2001 ranging from 11.9 percent for aides to 18.5 percent for staff RNs. Labor shortages are generally expected to result in increased compensation—wages and benefits—as employers seek to recruit new workers and retain existing staff. Our analysis of Bureau of Labor Statistics (BLS) data shows that, from 1999 to 2000, average wages for nurses and aides employed by the nursing home industry increased by 6.3 percent, compared to 2.9 percent among workers in private industry and state and local government. Industry officials, citing a survey they commissioned, told us that wages have risen more rapidly since 2000.

In general, SNF staffing changed little after April 1, 2001, when the increase in the nursing component of the PPS payment took effect. There was no substantial change in SNFs' overall staffing ratios, though their mix of nursing hours shifted somewhat: SNFs provided slightly less RN time and slightly more LPN and nurse aide time in 2001. For most categories of SNFs—such as freestanding SNFs and SNFs not owned by chains—increases in staffing ratios were small. Although SNFs with relatively low staffing ratios in 2000 increased their staffing ratios in 2001, SNFs with relatively high staffing ratios decreased their staffing. Our analysis indicates that the nursing component payment increase was unlikely to have been a factor in these staffing changes. Unlike most facilities nationwide, SNFs in four states increased their staffing by 15 or more minutes per patient day, following payment or policy changes in three of the states aimed at increasing or maintaining SNF nursing staff.

No substantial change in SNFs' overall staffing ratios occurred after the nursing component payment was increased. Between 2000 and 2001, SNFs' average amount of nursing time changed little, remaining slightly under three and one-half hours per patient day. Although there was an increase of 1.9 minutes per patient day, it was not statistically significant. (See table 1.) According to our calculations, this change was less than the estimated average increase, across all SNF patients, of about 10 minutes per patient day that could have resulted if SNFs had devoted the entire nursing component increase to more nursing time.

There was a small shift in the mix of nursing time that SNFs provided. On average, RN time decreased by 1.7 minutes per patient day. This was coupled with slight increases in LPN and nurse aide time, which rose by 0.7 and 2.9 minutes per patient day, respectively.

For most categories of SNFs, changes in staffing ratios were small. For example, freestanding facilities, which account for about 90 percent of SNFs nationwide, increased their nursing time by 2.1 minutes per patient day on average. Nonchain SNFs had an increase of 3.9 minutes per patient day. Hospital-based facilities and those owned by chains had nominal changes in nursing time. The changes in staffing for for-profit, not-for-profit, and government-owned facilities also were small. (See app. II.) The share of a SNF's patients who were covered by Medicare was not a factor in whether facilities increased their nursing time.
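The roughly 10-minute estimate cited above is, in essence, a unit conversion from added dollars to added staff time, diluted by Medicare's share of SNF patients. Below is a rough sketch of the shape of that calculation; the per-day payment increase and the blended hourly labor cost are illustrative assumptions, not figures from this report, and GAO's actual estimate may have used different inputs.

```python
"""Back-of-envelope conversion of a Medicare payment increase into
potential nursing minutes per patient day. Dollar inputs are
hypothetical; the 11 percent Medicare share is from this report."""

MEDICARE_SHARE = 0.11  # share of SNF patients paid for by Medicare

def potential_added_minutes(increase_per_medicare_day: float,
                            hourly_labor_cost: float) -> float:
    # Only Medicare-covered days generate the added payment, so the
    # increase is spread across all patient days before being
    # converted into nursing time.
    added_dollars_per_patient_day = increase_per_medicare_day * MEDICARE_SHARE
    return added_dollars_per_patient_day / hourly_labor_cost * 60

# A hypothetical $30 increase per Medicare day, at a hypothetical
# $20/hour blended labor cost, works out to about 10 minutes.
print(round(potential_added_minutes(30.00, 20.00), 1))  # 9.9
```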
SNFs that relied more on Medicare would have received a larger increase in revenue due to the nursing component change, and might have been better able than others to raise staffing ratios. However, we found that freestanding SNFs in which Medicare paid for a relatively large share of patients increased their nursing time by 1.3 minutes per patient day—less than SNFs with somewhat smaller shares of Medicare patients, and not substantially more than SNFs with the smallest share of Medicare patients. (See table 2.)

Similarly, SNFs' financial status was not an important factor affecting changes in nursing time. Although SNFs with higher total margins in 2000—that is, those with revenues substantially in excess of costs—might have been best able to afford increases in nursing staff, those with the highest total margins did not raise their staffing substantially more than others. Changes in nursing time were minimal, regardless of SNFs' financial status in 2000. For SNFs in the three groups with the highest margins, increases were about 3 to 4 minutes per day, compared to 2 minutes per day for those with the lowest margins. (See table 3.)

SNFs with relatively low initial staffing ratios—which may have had the greatest need for more staff—increased their staffing ratios substantially, while SNFs that initially were more highly staffed had a comparable decrease in staffing. Among freestanding SNFs that had the lowest staffing ratios in 2000, staffing time increased by 18.9 minutes per patient day. (See table 4.) Nearly all of the increase—over 15 minutes—was due to an increase in nurse aide time. LPN time increased by 3.2 minutes and RN time by 11 seconds on average. Among facilities with the highest staffing ratios in 2000, staffing decreased by 17.7 minutes. For these SNFs, as for those with the lowest staffing ratios, most of the overall change occurred among nurse aides: aide time decreased by over 10 minutes in 2001, while LPN and RN time decreased by 2.7 and 4.6 minutes, respectively.

Despite the staffing increases among lower-staffed facilities, our analysis indicates that these staffing changes may not have resulted from the nursing component payment increase. We found that similar staffing changes occurred between 1999 and 2000—prior to the nursing component increase. Low-staffed facilities increased their staffing by 15.2 minutes per patient day in 2000, while high-staffed facilities decreased their staffing by 19.8 minutes. The changes that occurred during the two periods were similar, suggesting that the payment increase probably did not cause the change in the latter period.

Unlike most facilities nationwide, SNFs in four states—Arkansas, Nebraska, North Dakota, and Oklahoma—increased their staffing by 15 to 27 minutes per patient day, on average. These increases could be related to state policies: according to state officials, three of the states had made Medicaid payment or policy changes aimed at increasing or maintaining facilities' nursing staff. North Dakota authorized a payment rate increase, effective July 2001, that could be used for staff pay raises or improved benefits. Oklahoma increased its minimum requirements for staffing ratios in both September 2000 and September 2001, provided added funds to offset the costs of those increases, and raised the minimum wage for nursing staff such as RNs, LPNs, and aides.
Arkansas switched to a full cost-based reimbursement system for Medicaid services in January 2001, in part to provide facilities with stronger incentives to increase staffing; the state had previously relied on minimum nurse staffing ratios. In Nebraska, no new state policies specific to nursing staff in SNFs were put in place during 2000 or 2001.

The change to the nursing component of the SNF PPS payment rate was one of several increases to the rates since the PPS was implemented in 1998. This temporary increase, enacted in the context of payment and workforce uncertainty, was intended to encourage SNFs to increase their nursing staff, although they were not required to spend the added payments on staff. In our analysis of the best available data, we did not find a significant overall increase in nurse staffing ratios following the change in the nursing component of the Medicare payment rate. Although the payment change could have paid for about 10 added minutes of nursing time per patient day for all SNF patients, we found that on average SNFs increased their staffing ratios by less than 2 minutes per patient day. Nurse staffing ratios fell in some SNFs during this period and increased in others by roughly an equal amount—the same pattern that occurred before the payment increase took effect. Our analysis—overall and for different types of SNFs—shows that increasing the nursing component of the Medicare payment rate was not effective in raising nurse staffing.

Our analysis of available data on SNF nursing staff indicates that, in the aggregate, SNFs did not have significantly higher nursing staff time after the increase to the nursing component of Medicare's payment. We believe that the Congress should consider our finding that increasing the Medicare payment rate was not effective in raising nurse staffing as it determines whether to reinstate the increase to the nursing component of the Medicare SNF rate.

We received written comments on a draft of this report from CMS and oral comments from representatives of the American Association of Homes and Services for the Aging (AAHSA), which represents not-for-profit nursing facilities; AHCA, which represents for-profit and not-for-profit nursing facilities; and the American Hospital Association (AHA), which represents hospitals. CMS said that our findings are consistent with its expectations as well as its understanding of other research in this area. CMS also stated that our report is a useful contribution to the ongoing examination of SNF care under the PPS. CMS's comments appear in appendix III.

Representatives from the three associations who reviewed the draft report shared several concerns. First, they objected to the report's conclusions and matter for congressional consideration, indicating that our statements were too strong given the limitations of the study. Second, they noted that the draft should have included information about the context in which SNFs were operating at the time of the Medicare payment increase, specifically, the nursing shortage and SNF staff recruitment and retention difficulties. Finally, they noted that SNFs could have used the increased Medicare payments to raise wages or improve benefits rather than hire additional nursing staff.

The industry representatives expressed several concerns about the limitations of our data and analysis.
The AAHSA representatives noted that, for individual SNFs, the accuracy of OSCAR is questionable; they agreed, however, that the average staffing ratios we reported for different types of SNFs looked reasonable and were consistent with their expectations. The AHA representatives said that, while OSCAR data are adequate for examining staffing ratios, we should nonetheless have used other sources of nurse staffing data—such as payroll records and Medicaid cost reports—before making such a strong statement to the Congress. The AHCA representatives noted that, due to the limitations of OSCAR data, our analyses of staffing ratios reflect staffing for all SNF patients rather than staffing specifically for SNF patients whose stays are covered by Medicare. They stressed that the small increase in staffing for patients overall could have represented a much larger increase for Medicare-covered SNF patients.

In addition, representatives from both AHCA and AHA were concerned that our period of study after the payment increase—May through December 2001—was too short to determine whether SNFs were responding to the added payments. They also cited delays in SNFs being paid under the increased rates as an explanation for our findings. The AHCA representatives further noted that the lack of change in staffing was not surprising, given the short period, and that the payment increase was temporary, applied to only one payer, and affected only about 10 to 12 percent of SNFs' business. AAHSA representatives noted that, to be meaningful, staffing ratios must be adjusted for acuity—the severity of patients' conditions.

Representatives from all three groups also stated that the report lacked sufficient information on contextual factors that could have affected SNF staffing ratios during our period of study. They said that we should have provided information on the nursing shortage as well as on SNF staff recruitment and retention difficulties. They further stated that SNFs' difficulties in recruiting and retaining staff could explain why we found little change in nurse staffing ratios. The AAHSA representatives were concerned that the report omitted information on the economic slowdown's effect on state budgets and Medicaid payment rates, which could have discouraged SNFs from hiring during the period of the increased nursing component.

Finally, both AAHSA and AHA representatives commented that the report gave too little attention to state minimum staffing requirements, indicating that SNFs would be more responsive to those requirements than to the Medicare payment increase. The AAHSA representatives noted that facilities may have increased their nursing staff to meet state minimum staffing requirements prior to the Medicare increase. The AHA representatives stated that we may not have found staffing increases because, when states require a minimum level of staff, facilities tend to staff only to that minimum. They also commented that state requirements may have had a greater effect on staffing than the nursing component increase, which was temporary and had only been in effect for a limited time.

Representatives from all three groups noted that facilities could have opted to raise wages, improve benefits, or take other steps to recruit or retain staff, rather than hire additional nurses or aides. AHA added that we did not consider whether, prior to the rate increase, nurse staffing was adequate; if it was, SNFs may have chosen to spend the added Medicare payments on retention rather than on hiring.
In addition, AAHSA and AHCA representatives noted that we did not address what would happen to nursing staff and margins if the payment increase were not in place. The AAHSA representatives stated that, without the increase, staffing might have decreased. AHCA representatives noted that we should have considered the implications for SNF margins of not continuing the payment increase.

As noted throughout the draft report, in conducting our study we considered the limitations of the data and the analyses we could perform. We therefore tested whether these limitations affected our results. Taking account of those tests and the consistency of our findings across categories of SNFs, we determined that the available evidence was sufficient to conclude that the increased payment did not result in higher nursing staff time. Our evidence consistently shows that staffing ratios changed little after the nursing component payment increase was implemented. However, we modified our conclusions to reiterate the limitations of our study.

Regarding the representatives' specific concerns about the limitations of our data and analysis: In the draft report, we detailed our efforts to correct OSCAR data errors. We have no evidence that OSCAR data are biased in the aggregate or that errors in OSCAR data would have understated the change in nurse staffing ratios. In the draft report, we noted that neither payroll records nor Medicaid cost reports were feasible sources of staffing data for this study. We have no reason to think that our results would have been different if we had used those data sources, because an HCFA study found that those other sources yielded aggregate staffing levels comparable to those in OSCAR. We believe that the OSCAR data were appropriate for examining staffing ratio changes because OSCAR is the only nationally uniform data source that allowed us to compare staffing ratios before and after the payment increase.

In the draft report, we stated that while nurse staffing ratios apply to all SNF patients and not just Medicare patients, we found no relationship between changes in staffing ratios and the percentage of a SNF's patients paid for by Medicare. Specifically, staffing increases were no larger in SNFs with a greater percentage of Medicare patients than in those with a smaller percentage of Medicare patients.

The staffing changes in SNFs surveyed in the months just after the payment increase was implemented differed little from the staffing changes of SNFs surveyed later in 2001. Because we found no relationship between SNFs' staffing ratio changes and the amount of time that had passed since the payment increase (which ranged from 1 to 9 months), we believe that our period of study was sufficiently long to determine whether SNFs were responding to the payment increase. We have added information on this analysis to the report.

We agree that adjusting for patients' acuity is particularly important for comparing staffing among different facilities; however, acuity averaged over all facilities varies little over short periods. Moreover, unless patients' acuity declined after the nursing component increase—and we have no evidence that it did—adjusting for acuity would not have affected our finding that nursing staff time changed little.

Regarding representatives' concerns that we did not include sufficient information on external factors affecting SNFs: We added information to the report on issues related to the nursing workforce.
Hiring difficulties would not have prevented SNFs from expanding the hours of their existing nursing staff or using temporary nurses and aides from staffing agencies—which would have been reflected in staffing ratios. With respect to the possible influence of a weak economy on Medicaid payments and SNF staffing levels, we noted in the draft report that the pattern of nursing staff changes from 2000 to 2001 was similar to the pattern from 1999 to 2000—a period when the economy was considerably stronger. If SNFs increased nursing staff in response to new state requirements during 2001, our study would have attributed these increases to the Medicare payment change.

Regarding the representatives' statements about alternate ways SNFs could have used the increased Medicare payments: To the extent that SNFs used the added Medicare payments for higher wages or benefits, they may have reduced staff vacancies, which in turn may have resulted in higher staffing ratios. However, we found little change in nurse staffing ratios after the Medicare payment increase.

Regarding the representatives' statements about the adequacy of SNF staffing: Because staffing adequacy was not within the scope of our study, we did not consider whether staffing was adequate prior to the rate increase, or whether this influenced SNFs' hiring decisions. The Congress directed CMS to address this issue, which it did in two reports. The first report, published in 2000, suggested that staffing might not be adequate in a significant number of SNFs. This was reaffirmed in CMS's recent report.

CMS, AAHSA, AHCA, and AHA also provided technical comments, which we incorporated as appropriate.

We are sending copies of this report to the Administrator of CMS, interested congressional committees, and other interested parties. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix IV.

This appendix describes the selection of the data source for our analysis, the characteristics of that data source, and the procedures used to verify data accuracy and make adjustments. To assess the impact on nurse staffing ratios of the April 1, 2001, increase in the nursing component of the SNF payment, we needed a nationally uniform data source that included the number of patients and the number of nursing staff (full-time equivalents, or FTEs) or nursing hours, for two periods—before April 1, 2001, to establish a baseline, and after April 1, 2001. We considered several sources of nursing staff data, including SNF payroll data, Medicaid cost reports, and CMS's OSCAR system.

We determined that payroll records could not be used for several reasons. CMS has collected and analyzed nursing home payroll data in several states and has found that it is difficult to ensure that the staffing data refer to hours worked (as required for an analysis of nurse staffing ratios) rather than hours paid, which includes time such as vacation and sick leave. CMS also found that although current nursing home payroll records were usually available, older records were difficult to obtain; consequently, it is unlikely that we would have been able to get records prior to the rate increase. Finally, payroll records do not include information on the number of patients and would have had to be supplemented with other data.
Similarly, Medicaid cost reports were not an appropriate source of data. While these reports by SNFs to state Medicaid agencies contain data on both patients and nursing staff, Medicaid cost reports do not permit a comparison of staffing ratios before and after the 16.66 percent increase in the nursing component because these reports cover a 12-month period that cannot be subdivided. Furthermore, these reports do not contain nationally uniform staffing data because the categories and definitions differ from state to state. Finally, the 2001 reports were not available in time for our analysis.

OSCAR is the only uniform data source that contains data on both patients and nursing staff. Moreover, OSCAR data are collected at least every 15 months, allowing us to compare staffing ratios before and after the 16.66 percent increase in the nursing component.

The states and the federal government share responsibility for monitoring compliance with federal standards in the nation's roughly 15,000 SNFs. To be certified for participation in Medicare, Medicaid, or both, a SNF must have had an initial survey as well as subsequent, periodic surveys to establish compliance. On average, SNFs are surveyed every 12 to 15 months by state agencies under contract to CMS. In a standard survey, a team of state surveyors spends several days at the SNF, conducting a broad review of care and services to ensure that the facility complies with federal standards and meets the assessed needs of the patients. Data on facility characteristics, patient characteristics, and staffing levels are collected on standard forms. These forms are filled out by each facility at the beginning of the survey and are certified by the facility as being accurate. After the survey is completed, the state agency enters the data from these forms into OSCAR, which stores data from the most current and previous three surveys.

Although OSCAR was the most suitable data source available for our analysis, it has several limitations. First, OSCAR provides a 2-week snapshot of staffing and a 1-day snapshot of patients at the time of the survey, so it may not accurately depict the facility's staffing and number of patients over a longer period. Second, staffing is reported across the entire facility, while the number of patients is reported only for Medicare- and Medicaid-certified beds. OSCAR, like other data sources, does not distinguish between staffing for Medicare patients and staffing for other patient groups. Finally, the Health Care Financing Administration (HCFA) reported that OSCAR data are unreliable at the individual SNF level. However, the agency's recent analysis has concluded that the OSCAR-based staffing measures appear "reasonably accurate" at the aggregate level (e.g., across states). Neither CMS nor the states attempt to verify the accuracy of the staffing data regularly.

In addition to the limitations inherent in OSCAR data, our analysis was limited in several ways. First, our sample included only SNFs for which OSCAR data were available both before and after the 16.66 percent increase in the nursing component took effect. Second, our analysis of staffing ratios after the increase took effect was limited to data collected from May through December 2001. As a result, we only reviewed data for 8 months after the payment increase was implemented, although our results do not appear to be affected by any seasonal trends in staffing.
We were not able to review data for a later period when facilities might have used the payment increase differently. Finally, due to data entry lags, when we drew our sample in January 2002, OSCAR did not include data from some facilities surveyed from May through December 2001.

To determine the change in nurse staffing ratios, we selected all facilities surveyed from May through December 2001 that also had a survey during 2000, which could serve as the comparison. This sample contained OSCAR data for 6,522 facilities. (See table 5.) Although not a statistical sample that can be projected to all SNFs using statistical principles, the sample is unlikely to be biased because it was selected on the basis of survey month. Our sampling procedure, in which selection depended solely on the time of survey, was unlikely to yield a sample with characteristics that differ substantially from those of the entire population of SNFs. We found no significant differences between these 6,522 SNFs and the 13,454 SNFs that were surveyed in calendar year 2000 in terms of various characteristics—the proportion that are hospital-based, the proportion that are for-profit, the share of a facility's patients that are paid for by Medicare, and the capacity of the facilities. However, the distribution of our sample across states differed from that of the SNF population. (See table 6.) This may be because state agencies differ in the amount of time required to complete entry of survey data into OSCAR. In addition, we excluded from our sample 449 SNFs that, based on their 2000 Medicare claims data, had received payments from Medicare that were not determined under the PPS. The resulting sample had 6,073 facilities—over one-third of all SNFs.

To assess the accuracy of the OSCAR data in our sample, we applied decision rules developed by CMS for its study of minimum nurse staffing ratios to identify facilities with data that appeared to represent data entry or other reporting errors. In addition, we identified facilities in our sample that had changes in their nurse staffing ratios greater than 100 percent, but that did not report 100 percent changes in both total patients and total beds. Using these rules, we identified 570 facilities for review. For 536 of these facilities, we obtained from the state survey agencies the original forms completed by SNF staff and used for entering data into OSCAR. We compared the data on the forms to the OSCAR entries and identified 159 facilities with data entry errors. For these facilities, we corrected the data, although 12 continued to be outliers and were excluded. For 179 facilities, we telephoned the SNF to verify its data; 65 facilities confirmed that OSCAR correctly reported their data. Based on the information gathered in these calls, we were able to correct the data for an additional 47 facilities. We also excluded 35 facilities for which we could not correct the data. In addition, we excluded 915 SNFs with more total beds than certified beds because they may have inaccurate staffing ratios. Other facilities were excluded because we did not receive their forms, we were unable to call the SNFs, or we did not receive replies from them. After these exclusions, our final sample contained 4,981 SNFs. (See table 7.)

We calculated nurse staffing ratios—hours per patient day—for each facility by dividing the total nursing hours by the estimated number of patient days. We calculated nurse staffing ratios for all nursing staff as well as for each category of staff: RNs, LPNs, and aides.
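As one illustration of this methodology, the sketch below computes hours per patient day from OSCAR-style survey records (a 2-week staffing window against a 1-day patient census), takes the year-over-year change described next, and applies the greater-than-100-percent change screen mentioned above. The field names and the two-facility sample are hypothetical, and the screen reproduces only that single rule, not CMS's full set of decision rules (the matching check on total beds is omitted).

```python
import pandas as pd

# Hypothetical OSCAR-style records: nursing hours over the 2-week
# reporting window and a 1-day patient census, one row per facility
# per survey year.
surveys = pd.DataFrame({
    "facility": ["A", "A", "B", "B"],
    "year":     [2000, 2001, 2000, 2001],
    "rn_hours":   [1120, 1100, 420, 460],
    "lpn_hours":  [1400, 1430, 700, 690],
    "aide_hours": [3640, 3720, 1960, 2030],
    "patients":   [100, 98, 60, 62],
})

HOURS_COLS = ["rn_hours", "lpn_hours", "aide_hours"]

def staffing_ratio(row: pd.Series) -> float:
    """Nursing hours per patient day: total nursing hours over the
    2-week window divided by estimated patient days (patients x 14)."""
    return row[HOURS_COLS].sum() / (row["patients"] * 14)

surveys["hours_per_patient_day"] = surveys.apply(staffing_ratio, axis=1)

# Year-over-year change per facility, in minutes per patient day.
wide = surveys.pivot(index="facility", columns="year",
                     values="hours_per_patient_day")
wide["change_minutes"] = (wide[2001] - wide[2000]) * 60

# Screen echoing the report's rule: flag ratio changes over 100 percent
# as candidates for data entry errors rather than real staffing changes
# (the companion check against patient and bed changes is omitted here).
wide["suspect"] = (wide[2001] - wide[2000]).abs() / wide[2000] > 1.0

print(wide)
```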
We also calculated the change in these ratios for each facility in our sample. We analyzed these changes in nurse staffing ratios overall and for several categories of SNFs, including for-profit, not-for-profit, and government-owned facilities. We also analyzed these changes based on each facility's prior-year staffing ratio. Finally, we supplemented the staffing data with cost and payment data from Medicare cost reports for 2000 and related the changes in nurse staffing ratios to each SNF's total margin—a measure of its financial status.

We tested whether staffing ratio changes from 2000 to 2001 were statistically significant—that is, statistically distinguishable from zero. In addition, for the analyses of SNFs' prior-year staffing and their financial status, we tested whether, between any two groups of SNFs, the difference in their staffing ratio changes was statistically significant.

Major contributors to this report were Robin Burke, Jessica Farb, and Dae Park.

Skilled Nursing Facilities: Providers Have Responded to New Payment System By Changing Practices. GAO-02-841. Washington, D.C.: August 23, 2002.
Nursing Homes: Quality of Care More Related to Staffing than Spending. GAO-02-431R. Washington, D.C.: June 13, 2002.
Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002.
Nursing Workforce: Emerging Nurse Shortages Due to Multiple Factors. GAO-01-944. Washington, D.C.: July 10, 2001.
Nursing Homes: Success of Quality Initiatives Requires Sustained Federal and State Commitment. GAO/T-HEHS-00-209. Washington, D.C.: September 28, 2000.
Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000.
The nation's 15,000 skilled nursing facilities (SNF) play an essential role in our health care system, providing Medicare-covered skilled nursing and rehabilitative care each year for 1.4 million Medicare patients who have recently been discharged from acute care hospitals. In recent years, many analysts and other observers, including members of Congress, have expressed concern about the level of nursing staff in SNFs and the impact of inadequate staffing on the quality of care.

GAO's analysis of available data shows that, in the aggregate, SNFs' nurse staffing ratios changed little after the increase in the nursing component of the Medicare payment took effect. Overall, SNFs' average nursing time increased by 1.9 minutes per patient day, relative to their 2000 average of about three and one-half hours of nursing time per patient day. For most SNFs, increases in staffing ratios were small. Further, GAO found that the share of SNF patients covered by Medicare was not a factor in whether facilities increased their nursing time. Similarly, SNFs that had total revenues considerably in excess of costs before the added payments took effect did not increase their staffing substantially more than others.
Over the last decade, DOD has been managing many challenging space systems acquisitions. A long-standing problem for the department is that program costs have tended to increase significantly from original cost estimates. In recent years, DOD has overcome many of the problems that had been hampering program development, and has begun to launch many of these satellites. However, the large cost growth of these systems continues to affect the department. Figure 1 compares the original cost estimates with current cost estimates for some of the department's major space acquisition programs. The gap between the estimates in figure 1 represents money that the department was not planning to spend on these programs and did not have available to invest in other efforts. The gap in estimates is fairly stable between fiscal years 2014-2018, a result of the fact that most programs are mature and in a steady production phase. This figure does not include programs that are still in the early stages of planning and development.

In past reports, we have identified a number of causes of acquisition problems. For example, in past years, DOD has tended to start more weapon programs than is affordable, creating a competition for funding that focuses on advocacy at the expense of realism and sound management. DOD has also tended to start its space programs before it has the assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. There is no way to accurately estimate how long it would take to design, develop, and build a satellite system when key technologies planned for that system are still in relatively early stages of discovery and invention. Finally, programs have historically attempted to satisfy all requirements in a single step, regardless of the design challenges or the maturity of the technologies necessary to achieve the full capability. DOD's preference for larger, complex satellites that perform a multitude of missions has, in some cases, stretched technology challenges beyond current capabilities.

Our work has recommended numerous actions that can be taken to address the problems we identified. Generally, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions to move to next phases. We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that could benefit space programs.

DOD has generally concurred with our recommendations and has undertaken a number of actions to establish a better foundation for acquisition success. For example, we reported in the past that, among other actions, DOD created a new office within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics to focus attention on oversight for space programs, and it eliminated offices considered to perform duplicative oversight functions. We have also reported in the past that the Air Force took actions to strengthen cost estimating and to reinstitute stricter standards for quality.

Most of DOD's major satellite programs are in mature phases of acquisition, and some of the significant problems of past years, such as cost and schedule growth, are not currently as prevalent.
Table 1 describes the status of the space programs we have been tracking in detail. While many programs have overcome past problems, some of the major space programs have encountered significant challenges in the last year and some delays in development and production. For example:

The Air Force's Space Fence program office is developing a large ground-based radar that is expected to improve on the performance of and replace the Air Force Space Surveillance System, which became operational in 1961 and was recently shut down. The Space Fence radar will emit radio frequencies upward to space, from ground-based radar sites, to detect and track more and smaller Earth-orbiting objects than is currently possible, and provide valuable space situational awareness data to military and civilian users. The Air Force had originally planned to award a contract for Space Fence systems development in July 2012, but due to internal program reviews and budget re-prioritizations, this date has been delayed to May 2014. In addition, the number of radar sites planned has been reduced from two to one, though DOD plans to have an option under the system development contract to build a second site if needed.

In April 2013, DOD proposed canceling the Missile Defense Agency's Precision Tracking Space System (PTSS) because of concerns with the program's high-risk acquisition strategy and long-term affordability. PTSS was intended to be a satellite system equipped with infrared sensors that would track ballistic missiles through their emitted heat. The planned system would consist of a constellation of nine satellites in orbit around the earth's equator. We reported in July 2013 that the decision to propose canceling the PTSS program was based on an evaluation of the acquisition, technical, and operational risks of the program. Specifically, DOD's evaluation assessed the PTSS cost, schedule, technical design, and acquisition strategy to identify whether risks could challenge the program's ability to acquire, field, and sustain the system within planned cost and schedule constraints. The evaluation also determined that the PTSS program had significant technical, programmatic, and affordability risks. The program officially ceased operations in October 2013.

The Air Force has nearly completed its analysis of alternatives to determine the direction for space-based environmental monitoring, which will be a follow-on program for the Defense Meteorological Satellite Program (DMSP). Through this analysis, the Air Force analyzed various options that included, but were not limited to, a traditional procurement of a weather satellite similar to the existing DMSP satellites, or a disaggregated approach using small satellites and hosted payload opportunities. According to the Air Force, the study was completed in the fall of 2013 and is awaiting final approval.

The MUOS program plans to launch a third satellite in January 2015, which represents a delay of 6 months due to a production issue on the third satellite. Specifically, the third satellite failed system- and subsequent unit-level testing after rework last year, and the program determined the root cause to be a manufacturing deficiency on a component critical for the operation of the satellite's ultra-high-frequency legacy communications payload. The program is replacing the component.
According to the MUOS program office, the program is on track to meet the launch schedule of subsequent satellites, which is important because most of the communications satellites that MUOS is replacing are past their design lives. Synchronizing deliveries of MUOS satellites with compatible Army Handheld, Manpack, Small Form Fit (HMS) terminals remains a challenge. Currently, over 90 percent of the first satellite's on-orbit capabilities are being underutilized because of terminal program delays. Consequently, military forces are relying on legacy communication terminals and are not able to take advantage of the superior capabilities offered by the MUOS satellites. Operational testing and initial fielding of MUOS-capable HMS terminals is planned for fiscal year 2014, with a production decision expected in September 2015.

We have reported in the past that DOD and Congress are taking steps to reform and improve the defense acquisition system, and in the past year additional actions have been taken towards these goals. In November 2013, DOD published an update to its instruction 5000.02, which provides acquisition guidance for DOD programs. With this update, DOD hopes to create an acquisition policy environment that will achieve greater efficiency and productivity in defense spending. Air Force officials noted that, for satellite programs, there are two major changes that they believe will improve the acquisition process.

First, the instruction was changed to formally allow satellite programs to combine two major program milestones, B and C, which mark the beginning of the development and production phases, respectively. According to the Air Force, satellite programs have typically seen a great deal of overlap in the development and production phases, mainly because they are buying small quantities of items. They are often not able to produce a prototype to be fully tested because of the high costs of each article, so the first satellite in a production is often used both for testing and operations. Air Force officials believe that this change to the acquisition guidance will allow for streamlining of satellite development and production processes, and provide more efficient oversight without sacrificing program requirements. GAO has not assessed the potential effects of this change. In the past, we have reported that committing a program to production without a substantive development phase may increase program cost and schedule risks, and we plan to look at the impacts of this change as it begins to be implemented.

A second change made this year, according to Air Force officials, is the requirement that DOD programs, including space programs, undergo independent development testing. While development testing for DOD programs is not new to this policy revision, the testing organization will now be an independent organization outside the program office. For space programs, this organization will be under the Program Executive Officer for Space and will report its findings directly to that office, providing what the Air Force believes will be an independent voice on a program's development status. The Air Force is confident that these changes will provide benefits to program oversight, although because these are recent changes, we have not yet assessed their potential for process improvements.

In addition, DOD is adopting new practices to reduce fragmentation of its satellite ground control systems, which adds oversight to a major development decision.
Last year we reported that DOD’s satellite ground control systems were potentially fragmented, and that standalone systems were being developed for new satellite programs without a formal analysis of whether or not the satellite control needs could be met with existing systems. In the National Defense Authorization Act for Fiscal Year 2014, Congress placed more oversight onto this process by requiring a cost-benefit analysis for all new or follow-on satellite systems using a dedicated ground control system instead of a shared ground This new requirement should improve oversight into control system.these systems’ development, and may reduce some unnecessary duplication of satellite control systems. According to Air Force officials, the first program to go through this process was the Enhanced Polar System, and all future satellite programs will include this cost-benefit analysis in their ground system planning. In addition, the Act directed DOD to develop a DOD-wide long-term plan for satellite ground control systems. Additionally, the Defense Space Council continues with its architecture reviews in key space mission areas. According to Air Force officials, the Council is the principal DOD forum for discussing space issues, and brings together senior-level leaders to discuss these issues. These architecture reviews are to inform DOD’s programming, budgeting, and prioritization for the space mission area. The Council has five reviews underway or completed in areas such as overhead persistent infrared, satellite communications, space situational awareness, and national security space launches. They are also initiating a study of how DOD can assess the resilience of its space systems. DOD also recently held a forum on resiliency that included participation from senior leaders from several groups within DOD and the Intelligence Community to create a work plan towards resolution of critical gaps in resiliency. Many of the reforms that are being initiated may not be fully proven for some years, because they apply mainly to programs in early acquisition stages, and most DOD space systems are currently either in the production phase or late in the development phase. We have not assessed the impact of actions taken this year, but we have observed that the totality of improvements made in recent years has contributed to better foundations for program execution. While DOD has taken steps to address acquisition problems of the past, significant issues above the program level will still present challenges to even the best run programs. One key oversight issue is fragmented leadership of the space community. We have reported in the past that fragmented leadership and lack of a single authority in overseeing the acquisition of space programs have created challenges for optimally Past studies acquiring, developing, and deploying new space systems.and reviews have found that responsibilities for acquiring space systems are diffused across various DOD organizations, even though many of the larger programs, such as the Global Positioning System and those to acquire imagery and environmental satellites, are integral to the execution of multiple agencies’ missions. 
This fragmentation is problematic because the lack of coordination has led to delays in fielding systems, and also because no one person or organization is held accountable for balancing governmentwide needs against wants, resolving conflicts and ensuring coordination among the many organizations involved with space systems acquisitions, and ensuring that resources are directed where they are most needed. Though changes to organizations and the creation of the Defense Space Council have helped to improve oversight, our work continues to find that DOD would benefit from increased coordination and a single authority overseeing these programs.

A program management challenge that GAO has identified, which stems from a lack of oversight, is that DOD has not optimally aligned the development of its satellites with associated components, including ground control system and user terminal acquisitions. Satellites require ground control systems to receive and process information from the satellites, and user terminals to deliver that satellite's information to users. All three elements are important for utilizing space-based data, but development of satellites often outpaces the ground control systems and the user terminals. Delays in these ground control systems and user terminals lead to underutilized on-orbit satellite resources, and thus delays in getting the new capabilities to the warfighters or other end-users. In addition, there are limits to satellites' operational life spans once launched. When satellites are launched before their associated ground and user segments are ready, they use up time in their operational lives without their capabilities being utilized.

Synchronization of space system components will be an important issue for DOD in considering disaggregating space architectures, as the potential for larger numbers and novel configurations of satellites and ground systems will likely require the components to be synchronized to allow them to work together in the most effective way possible. As mentioned earlier, DOD is taking steps in response to improvements mandated by the Congress. But it will likely be difficult to better synchronize delivery of satellite components without more focused leadership at a level above the acquisitions' program offices. For example, budget authority for user terminals, ground systems, and satellites is spread throughout the military services, and no one is in charge of synchronizing all of the system components, making it difficult to optimally line up programs' deliveries.

Fiscal pressures, past development problems, and concerns about the resiliency of satellites have spurred DOD to consider significant changes in the way it acquires and launches national security satellites. Significant fiscal constraints, coupled with growing threats to DOD space systems—including adversary attacks such as anti-satellite weapons and communications jamming, and environmental hazards such as orbital debris—have called into question whether the complex and expensive satellites DOD is fielding and operating are affordable and will meet future needs. For example, a single launch failure, on-orbit anomaly, or adversary attack on a large multi-mission satellite could result in the loss of billions of dollars of investment and a significant loss of capability. Additionally, some satellites, which have taken more than a decade to develop, contain technologies that are already considered obsolete by the time they are launched.
To address these challenges, DOD is considering alternative approaches to provide space-based capabilities, particularly for missile warning, protected satellite communications, and environmental monitoring. According to DOD, the primary considerations for studying these approaches and making decisions on the best way forward relate to finding the right balance of affordability, resiliency, and capability. These decisions, to be made over the next 2 to 3 years, have the potential for making sweeping changes to DOD's space architectures of the future. For example, DOD could decide to build more disaggregated architectures, including dispersing sensors onto separate platforms; using multiple domains, including space, air, and ground, to provide full mission capabilities; hosting payloads on other government or commercial spacecraft; or some combination of these.

Our past work has indicated that some of the approaches being considered have the potential to reduce acquisition cost and time on a single program. For instance, we have found that DOD's initial preference to make fewer large and complex satellites that perform a multitude of missions has stretched technology challenges beyond existing capabilities, and in some cases vastly increased the complexities of related software. In addition, developing extensive new designs and custom-made spacecraft and payloads to meet the needs of multiple users limits DOD's ability to provide capabilities sooner and contributes to higher costs. Last year, we reported that one potential new approach, hosted payload arrangements in which government instruments are placed on commercial satellites, may provide opportunities for government agencies to save money, especially in terms of launch and operation costs, and gain access to space.

As new approaches, such as disaggregation, are considered, the existing management environment could pose barriers to success, including fragmented leadership for space programs, the culture of the DOD space community, fragmentation in satellite control stations, and disconnects between the delivery of satellites and their corresponding user terminals. For instance, disaggregation may well require substantial changes to acquisition processes and requirements setting. But without a central authority to implement these changes, there is likely to be resistance to adopting new ways of doing business, particularly since responsibilities for space acquisitions stretch across the military services and other government agencies.

Moreover, under a disaggregated approach, DOD may need to effectively network and integrate a larger collection of satellites—some of which may even belong to commercial providers. We have reported that ground systems generally only receive and process data from the satellites for which they were developed. They generally do not control and operate more than one type of satellite or share their data with other ground systems. To date, however, DOD has had difficulty adopting modern practices and technologies for controlling satellites, as well as difficulty in coordinating the delivery of satellites with the user terminals that must be installed on thousands of ships, planes, and ground-based assets. These are conditions that are difficult to change without strong leadership to break down organizational stove-pipes and to introduce technologies or techniques that could enable DOD to better integrate and fuse data from a wider, potentially more disparate, collection of satellites.
In light of suggestions that disaggregation could potentially reduce cost and increase survivability, the Senate Committee on Armed Services mandated that we assess the potential benefits and limitations of disaggregating key military space systems, including potential impacts on total costs. To date, we have found that the potential effects of disaggregation are conceptual and not yet quantified. DOD has taken initial steps to assess alternative approaches, but it does not yet have the knowledge it needs to quantify benefits and limitations and determine a course of action. DOD officials we spoke with acknowledge that the department has not yet established sufficient knowledge on which to base a decision.

While DOD has conducted some studies that assessed alternative approaches to the current programs of record, some within the department do not consider these studies to be conclusive because they were either not conducted with sufficient analytical rigor or did not consider the capabilities, risks, and trades in a holistic manner. For example, according to the Office of the Secretary of Defense's Office of Cost Assessment and Program Evaluation, a recent Air Force study that assessed future satellite communications architectures contained insufficient data to support the conclusion that one architectural approach was more resilient than others, and the cost estimates it contained did not consider important factors, such as ground control and terminal costs, in calculating the implications of changing architectures.

To build consensus in the department, and to conduct a more rigorous analysis of options, DOD is currently in the process of conducting additional studies that will consider future architectures. Included in these studies are Analyses of Alternatives for future missile warning, protected satellite communications, and space-based environmental monitoring capabilities. Among the range of alternatives these analyses are considering are approaches that keep the current system, evolve the current system, and disaggregate the current system into more numerous, but smaller and less complex, satellites. DOD has nearly finished the space-based environmental monitoring study and expects to finish the other two in either this fiscal year or next.

Moreover, as DOD continues to build knowledge about different acquisition approaches, it will be essential to develop an understanding of key factors for decisions on future approaches that could impact the costs, schedules, and performance of providing mission capabilities. Some considerations for moving to a new or evolved architecture may include the following:

Common definitions of key terms, such as resiliency and disaggregation, across all stakeholders, and a common measurement of these terms in order to compare architectural alternatives.

The true costs of moving to a new architecture, including transition costs for funding overlapping operations and compatibility between new and legacy systems and non-recurring engineering costs for new-start programs, among others.

Potential technical and logistical challenges. For example, with hosted payloads, our past work has found that ensuring compatibility between sensors and host satellites may be difficult because of variable interfaces on different companies' satellites. In addition, scheduling and funding hosted payload arrangements may be difficult because the timeline for developing sensors may be much longer than that of commercial satellites.
- Impacts to supporting capabilities, such as ground control and operations and launch availability, and long-standing challenges we have identified regarding how these have been managed.
- Readiness of the acquisition workforce and industrial base to support a new architecture.

Given that DOD is in the early stages of assessing alternatives, our ongoing work is continuing to identify potential benefits and limitations of disaggregation and examine the extent to which these issues are being factored into DOD's ongoing studies. We look forward to reporting on the results of this analysis this summer. DOD has made some changes to the way it buys launch services from its sole-source provider and plans to allow other companies to compete with that provider for launch services in the near future. DOD's Evolved Expendable Launch Vehicle (EELV) program is the primary provider of launch vehicles for U.S. military and intelligence satellites. Since 2006, the United Launch Alliance (ULA) has been the sole-source launch provider for this program, with a record of 50 consecutive successful government missions. From 2006 through 2013, DOD had two types of contracts with ULA through which ULA provided launch services for national security space launches. DOD used this dual-contract structure to achieve flexibility in launch schedules and to avoid additional costs associated with frequent launch delays. In recent years, though the dual-contract structure met DOD's needs for unprecedented mission success and flexible launch capability, predicted costs for launch services continued to rise. In response to these cost predictions, DOD revised its acquisition strategy to allow for a "block buy" of launch vehicles, in which DOD would commit to multiple years of launch purchases from ULA, with the goal of stabilizing production and decreasing prices. In addition, and partially in response to GAO recommendations, DOD gathered large amounts of information on ULA's cost drivers to allow DOD to negotiate significantly lower prices under the contracting structure. In December 2013, DOD signed a contract modification with ULA to purchase 35 launch vehicle booster cores over a 5-year period, 2013-2017, and the associated capability to launch them. According to the Air Force, this contracting strategy saved $4.4 billion over the predicted program cost in the fiscal year 2012 budget. We recently reported on some of the changes in this new contract relative to the prior contracts. In addition to this change in the way DOD buys launch vehicles, DOD is also in the process of introducing a method for other launch services companies to compete with ULA for EELV launches. Since 2006, when ULA began as a joint venture between then-competitors Boeing and Lockheed Martin, the EELV program has been managed as a sole-source procurement because there were no other domestic launch companies that could meet the program's requirements. With the recent development of new domestic launch vehicles that can meet at least some EELV mission requirements, DOD plans to make available for competition up to 14 launches in fiscal years 2015-2017. Any launch company that has been certified by DOD to launch national security space payloads will be able to compete with ULA to launch these missions. DOD is currently finalizing its plan for this competition, including what requirements will be placed on the contractors and how it will compare contractors' proposals.
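The paragraph that follows describes DOD's planned best value approach to comparing these proposals. As a rough illustration of how such a comparison can work, the sketch below scores two hypothetical offers against weighted factors; the weights, factor names, and scores are invented for illustration and do not represent DOD's actual evaluation methodology, which had not been finalized at the time of this statement.

```python
# Purely illustrative best-value tradeoff, in which price is one factor
# among several. All weights and scores below are hypothetical.
WEIGHTS = {"price": 0.5, "mission_risk": 0.3, "integration_risk": 0.2}

proposals = {
    # Factor scores normalized to 0-1, where higher is better
    # (a lower price and lower assessed risk both score closer to 1).
    "Incumbent": {"price": 0.60, "mission_risk": 0.95, "integration_risk": 0.90},
    "NewEntrant": {"price": 0.90, "mission_risk": 0.70, "integration_risk": 0.60},
}


def best_value_score(scores: dict) -> float:
    """Weighted sum of normalized factor scores."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())


for offeror, scores in proposals.items():
    print(f"{offeror}: {best_value_score(scores):.3f}")
# Incumbent: 0.765, NewEntrant: 0.780 -- a cheaper offer can prevail even
# with higher assessed risk, depending on how the factors are weighted.
```

The point of the sketch is simply that the outcome of a best value competition turns as much on how the factors are defined and weighted as on the prices offered.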
Based on our discussions with DOD officials, they plan to use a best value approach for this competition, in which price is not the only consideration. DOD will likely consider several factors when comparing proposals for launch services for the 14-booster-core competition between ULA and new entrants, including price, mission risk, and satellite vehicle integration risks. DOD could require competitive proposals to be structured in several ways. If DOD requires proposals to contain both fixed-price and cost-reimbursement features for launch services and capability, respectively, similar to the way it currently contracts with ULA, there could be benefits to DOD and ULA, but potential burdens to new entrants. For example, DOD is familiar with this approach and has experience negotiating under these terms, and ULA is familiar with DOD's requirements given ULA's role as the EELV program's sole launch provider. But the government data requirements that accompany a cost-type contract may negate efficient contractor business practices and the associated cost savings, and this approach may give ULA a price advantage because DOD already funds ULA's launch capability. Alternatively, if DOD implements a fixed-price commercial approach with fewer data reporting requirements, it could lose insight into contractor cost or pricing but may receive lower prices from new entrants because of the lighter reporting burden. DOD could also require a combination of elements from each of these approaches, or develop new contract requirements for this competition. We examined some of the benefits and challenges of the first two approaches, either of which can facilitate competitive launch contract awards, in a recent report. DOD expects to issue a draft request for proposal for the first of the competitive missions, in which the method for evaluating and comparing proposals will be explained, in the spring of 2014. The planned competition for launch services may have helped DOD negotiate the lower prices it achieved in its December 2013 contract modification, and DOD could see further savings if a robust domestic launch market materializes. DOD noted in its 2014 President's Budget submission for EELV that after the current contract with ULA has ended, it plans to hold a full and open competition for national security space launches. Cost savings on launches, as long as they do not come with a reduction in mission successes, would greatly benefit DOD and allow the department to redirect funding previously needed for launches to programs in the development phases to ensure they are adequately resourced. In conclusion, DOD has made significant progress in solving past space systems acquisition problems and is seeing systems begin to launch after years of development struggles. However, systemic problems remain that need to be addressed as DOD considers changes to the way it acquires new systems. This is particularly important if DOD decides to pursue new approaches that could require changes in longstanding processes, practices, and organizational structures. Even if DOD decides not to pursue new approaches, these problems must still be tackled. In addition, challenging budget situations will continue to require tradeoffs and prioritization decisions across programs, though limited funds may also provide the impetus for rethinking architectures.
We look forward to working with Congress and DOD in identifying the most effective and efficient ways to sustain and develop space capabilities in this challenging environment. Chairman Udall, Ranking Member Sessions, this completes my prepared statement. I would be happy to respond to any questions you and Members of the Subcommittee may have at this time. For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement and related work include Art Gallegos, Assistant Director; Pete Anderson; Virginia Chanley; Erin Cohen; Desiree Cunningham; Brenna Guarneros; Kristine Hassinger; Laura Hook; Rich Horiuchi; Jeff Sanders; and Roxanna Sun.

Introducing Competition into National Security Space Launch Acquisitions. GAO-14-259T. (Washington, D.C.: March 5, 2014).
The Air Force's Evolved Expendable Launch Vehicle Competitive Procurement. GAO-14-377R. (Washington, D.C.: March 4, 2014).
Global Positioning System: A Comprehensive Assessment of Potential Options and Related Costs Is Needed. GAO-13-729. (Washington, D.C.: September 9, 2013).
Space: Defense and Civilian Agencies Request Significant Funding for Launch-Related Activities. GAO-13-802R. (Washington, D.C.: September 9, 2013).
Missile Defense: Precision Tracking Space System Evaluation of Alternatives. GAO-13-747T. (Washington, D.C.: July 25, 2013).
Satellite Control: Long-Term Planning and Adoption of Commercial Practices Could Improve DOD's Operations. GAO-13-315. (Washington, D.C.: April 18, 2013).
Space Acquisitions: DOD Is Overcoming Long-Standing Problems, but Faces Challenges to Ensuring Its Investments Are Optimized. GAO-13-508T. (Washington, D.C.: April 24, 2013).
2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. (Washington, D.C.: April 9, 2013).
Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-13-294SP. (Washington, D.C.: March 28, 2013).
Launch Services New Entrant Certification Guide. GAO-13-317R. (Washington, D.C.: February 7, 2013).
Space Acquisitions: DOD Faces Challenges in Fully Realizing Benefits of Satellite Acquisition Improvements. GAO-12-563T. (Washington, D.C.: March 21, 2012).
Evolved Expendable Launch Vehicle: DOD Is Addressing Knowledge Gaps in Its New Acquisition Strategy. GAO-12-822. (Washington, D.C.: July 26, 2012).
2012 Annual Report: Opportunities to Reduce Duplication, Overlap, and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. (Washington, D.C.: February 28, 2012).
Space Research: Content and Coordination of Space Science and Technology Strategy Need to Be More Robust. GAO-11-722. (Washington, D.C.: July 1, 2011).
Space and Missile Defense Acquisitions: Periodic Assessment Needed to Correct Parts Quality Problems in Major Programs. GAO-11-404. (Washington, D.C.: June 24, 2011).
Space Acquisitions: Development and Oversight Challenges in Delivering Improved Space Situational Awareness Capabilities. GAO-11-545. (Washington, D.C.: May 27, 2011).
Space Acquisitions: DOD Delivering New Generations of Satellites, but Space System Acquisition Challenges Remain. GAO-11-590T. (Washington, D.C.: May 11, 2011).
Evolved Expendable Launch Vehicle: DOD Needs to Ensure New Acquisition Strategy Is Based on Sufficient Information. GAO-11-641. (Washington, D.C.: September 15, 2011).
Space Acquisitions: Challenges in Commercializing Technologies Developed under the Small Business Innovation Research Program. GAO-11-21. (Washington, D.C.: November 10, 2010).
Global Positioning System: Challenges in Sustaining and Upgrading Capabilities Persist. GAO-10-636. (Washington, D.C.: September 15, 2010).
Briefing on Commercial and Department of Defense Space System Requirements and Acquisition Practices. GAO-10-315R. (Washington, D.C.: January 14, 2010).
Defense Acquisitions: Challenges in Aligning Space System Components. GAO-10-55. (Washington, D.C.: October 29, 2009).
Space Acquisitions: Uncertainties in the Evolved Expendable Launch Vehicle Program Pose Management and Oversight Challenges. GAO-08-1039. (Washington, D.C.: September 26, 2008).
Space Acquisitions: DOD Needs to Take More Action to Address Unrealistic Initial Cost Estimates of Space Systems. GAO-07-96. (Washington, D.C.: November 17, 2006).

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Each year, DOD spends billions of dollars to acquire space-based capabilities that support military and other government operations. The majority of DOD's space programs were beset by significant cost and schedule growth problems during their development. Most programs are now in production, however, and acquisition problems are not as widespread and significant as they were several years ago. In prior years, GAO has identified a number of actions DOD has taken to improve management and oversight of space program acquisitions. Facing constrained budgets and concerns about the resiliency of its satellites, DOD is considering potential changes to how it acquires space systems. This testimony focuses on (1) the current status and cost of major DOD space systems acquisitions, (2) recent actions taken to further improve space systems acquisitions, and (3) potential impacts of the direction DOD is taking on upcoming changes to the acquisition of DOD space systems. This testimony is based on previously issued GAO products, ongoing GAO work on disaggregated architectures, interviews with DOD officials, and an analysis of DOD funding estimates from fiscal years 2013 through 2018. Most of the Department of Defense's (DOD) major satellite acquisition programs are in later stages of acquisition, with the initial satellites having been designed, produced, and launched into orbit while additional satellites of the same design are being produced. A few other major space programs, however, have recently experienced setbacks. For example: the Missile Defense Agency's Precision Tracking Space System, which was intended to be a satellite system to track ballistic missiles, has been cancelled due to technical, programmatic, and affordability concerns; the Air Force's Space Fence program, which is developing a ground-based radar to track Earth-orbiting objects, continues to experience delays in entering development; and the first launch of the new Global Positioning System satellites has been delayed by 21 months. Congress and DOD continue to take steps they believe will improve oversight and management of space systems acquisitions. In the past year, for example, DOD has updated its major acquisition policy with the goal of improving efficiency and productivity in defense spending. Among other things, the policy change adds a requirement for independent development testing for DOD acquisition programs, which officials believe will provide an independent voice on programs' development. However, DOD still faces significant oversight and management challenges, including (1) leadership of a space community composed of a wide variety of users and stakeholders with diverse interests and (2) alignment of the delivery of satellites with corresponding ground systems and user terminals. For instance, in some cases, gaps in delivery can add up to years, meaning that a satellite is launched but not effectively used for years until ground systems become available. One reason DOD has been unable to align the delivery of space system components is that budgeting authority for the components is spread across the military services. While most DOD major space system acquisitions have overcome development challenges and are currently being produced and launched, past problems involving large, complicated systems, coupled with the recent fiscal climate of reduced funds, have led DOD to consider efforts that could signal significant changes to the way it acquires and conducts space activities.
DOD is considering moving away from its current approach in satellite development—building small numbers of large satellites over a decade or more that meet the needs of many missions and users—toward a more disaggregated architecture involving less complex, smaller, and more numerous satellites. GAO has found that DOD does not yet have sufficient information to make decisions on whether to disaggregate, but it is in the process of gathering that information. In addition, in response to predictions of dramatic increases to the price of launching its satellites, coupled with restrained budgets, DOD has made changes to the way it procures launch vehicles, and is moving forward with plans to allow competition for launch services—a significant shift from past ways of doing business. According to the Air Force, other recent steps in launch acquisitions, including gaining significant insight into launch services cost drivers, have enabled it to achieve significant savings. GAO is not making recommendations in this testimony. However, in previous reports, GAO has generally recommended that DOD adopt best practices for developing space systems. DOD has agreed and is in the process of implementing such practices.
VA operates the largest integrated health care system in the United States, providing care to nearly 5 million veterans per year. The VA health care system consists of hospitals, ambulatory clinics, nursing homes, residential rehabilitation treatment programs, and readjustment counseling centers. In addition to providing medical care, VA is the largest educator of health care professionals, training more than 28,000 medical residents annually as well as other types of trainees. State licenses are issued by state licensing boards, which generally establish licensing requirements, and practitioners may be licensed in more than one state. "Current and unrestricted licenses" are licenses that are in good standing in the state where they are issued. To keep a license current, practitioners must renew their licenses before they expire and meet renewal requirements established by state licensing boards. Renewal requirements include criteria such as continuing education, but renewal procedures and requirements vary by state and occupation. When a licensing board discovers a licensee is in violation of licensing requirements or established law (for example, abusing prescription drugs or intentionally or negligently providing poor quality care that results in adverse health effects), it may place restrictions on or revoke a license. Restrictions imposed by a state licensing board can limit or prohibit a practitioner from practicing in that particular state. Some, but not all, state licenses are marked to indicate that the licenses have had restrictions placed on them. Generally, state licensing boards maintain a database of information on restrictions, which employers can obtain at no cost either by accessing the information on a board's Web site or by contacting the board directly. National certificates are issued by national certifying organizations, which are separate and independent from state licensing boards. These organizations establish professional standards that are national in scope for certain occupations, such as respiratory and occupational therapists. Practitioners who are required to have national certificates to work at VA must have current and unrestricted certificates. Practitioners may renew these credentials periodically by paying a fee and verifying that they obtained required educational credit hours. A national certifying organization can restrict or revoke a certificate for violations of the organization's professional standards. Like state licensing boards, national certifying organizations maintain databases of information on disciplinary actions taken against practitioners with national certificates, and many can be accessed at no cost. We identified key VA screening requirements and found mixed compliance with these requirements at the four facilities we visited. The key screening requirements are those intended to ensure that VA facilities employ health care practitioners who have the valid professional credentials and personal backgrounds needed to deliver safe health care to veterans. None of the four VA facilities complied with all of the screening requirements. In addition, VA does not currently conduct oversight of its facilities to determine whether they comply with the key screening requirements.
Key VA screening requirements include:
- verifying the professional credentials of practitioners VA intends to hire;
- verifying periodically the professional credentials of practitioners currently employed in VA facilities;
- querying, prior to hiring, the Department of Health and Human Services' Office of Inspector General's List of Excluded Individuals and Entities (LEIE) to identify practitioners who have been excluded from participation in all federal health care programs;
- ensuring that background investigations are requested or completed for practitioners currently employed in VA facilities;
- ensuring that the Declaration for Federal Employment form (Form 306) is completed by practitioners currently employed in VA facilities; and
- verifying that the educational institutions listed by a practitioner VA intends to hire are checked against lists of diploma mills that sell fictitious college degrees and other fraudulent professional credentials.

To show the variability in the level of compliance among the four VA facilities we visited, we measured their performance on five of the six screening requirements against a compliance rate of at least 90 percent for each requirement, even though VA policy allows no deviation from these requirements. Table 1 summarizes the compliance results we found for the five requirements among the four VA facilities we visited. For the sixth requirement, matching the educational institutions listed by a practitioner against lists of diploma mills, we asked facility officials whether they did this check and then asked them to produce the lists of diploma mills they used. All four facilities generally complied with VA's existing policies for verifying the professional credentials of practitioners currently employed in VA facilities, either by contacting the state licensing boards for practitioners such as physicians or physically inspecting the licenses or national certificates for practitioners such as nurses and respiratory therapists. They also generally ensured that practitioners VA intended to hire had completed the Declaration for Federal Employment form, which requires the practitioner to disclose, among other things, criminal convictions, employment terminations, and delinquencies on federal loans. However, three of the facilities did not follow VA's policies for verifying the professional credentials of practitioners VA intends to hire, and three did not compare practitioners' names to LEIE prior to hiring them. Two of the four facilities conducted background investigations on practitioners currently employed in their facilities at least 90 percent of the time, but the other two facilities did not. We also asked officials whether their facilities checked the educational institutions listed by a practitioner VA intended to hire against a list of diploma mills to verify that the practitioner's degree was not obtained from a fraudulent institution. An official at one of the four facilities told us he consistently performed this check. Officials at the other three facilities stated that they did not perform the check because they did not have lists of diploma mills. In addition to assessing the rate of compliance with the key screening requirements, we found that VA facilities varied in how quickly they took action to deal with background investigations that returned questionable results, such as discrepancies in work or criminal histories.
The Office of Personnel Management (OPM) gives a VA facility up to 90 days to take action after the facility receives investigation results with questionable findings. We reviewed the timeliness of actions taken by facility officials from August 1, 2002, through August 23, 2003, at the 4 facilities we visited and 6 additional facilities geographically spread across the VA health care system. We found that officials at 5 of the 10 facilities took action within the 90-day time frame, with the number of days ranging on average from 13 to 68. Officials at 3 facilities exceeded the 90-day time frame on average by 36 to 290 days. One facility took action on its cases prior to OPM closing the investigation, and another facility did not have the information available to report. One of the cases that exceeded the 90-day time frame involved a nursing assistant who was hired to work in a VA nursing home in June 2002. In August 2002, OPM sent the results of its background investigation to the VA facility, reporting that the nursing assistant had been fired from a non-VA nursing home for patient abuse. During our review, we found this case among stacks of OPM background investigation results stored in a clerk's office on a cart and in piles on the desk and other workspaces. After we brought this case to the attention of facility officials in December 2003, they reviewed the report and then terminated the nursing assistant, who had worked at the VA facility for more than 1 year, for not disclosing this information on the Declaration for Federal Employment form. VA has not conducted oversight of its facilities' compliance with the key screening requirements. Instead, VA has relied on OPM to do limited reviews of whether facilities were meeting certain human resources requirements, such as completion of background investigations. These reviews did not include determining whether the facilities were verifying professional credentials. Although VA established the Office of Human Resources Oversight and Effectiveness in January 2003 to conduct such oversight, the office has not conducted any facility compliance evaluations. In addition, VA has not implemented a policy for the human resources program evaluation to be performed by this office and has not provided the resources necessary to support it. Gaps in VA's requirements for screening the professional credentials and personal backgrounds of practitioners create vulnerabilities in its screening processes that could allow health care practitioners who might harm patients to work in VA facilities. For certain VA practitioners, screening requirements include contacting state licensing boards to verify that all state licenses are current and unrestricted. For example, all state licenses for physicians and dentists are verified by contacting state licensing boards to ensure the licenses are in good standing when VA intends to hire them and periodically during employment. Similarly, all licenses for nurses and pharmacists VA intends to hire are verified by contacting the state licensing boards. However, once hired, periodic screening for nurses and pharmacists simply involves a VA official's physical inspection of one state license, even if the practitioner has multiple state licenses, creating a gap in the verification process.
VA’s requirements allow a practitioner to select the license under which he or she will work in VA, and this license can be from any state, not necessarily the one in which the VA facility is located. A practitioner may have a restricted state license as a result of a disciplinary action, yet show a facility official a license from another state that is unrestricted. VA facility officials informed us that checking one state license was sufficient because state licensing boards share information on disciplinary actions and licenses are marked when restricted. However, according to state licensing board officials, one cannot determine with certainty that a license is valid and unrestricted unless the licensing board is contacted directly. These officials explained that state licensing boards do not always exchange information about disciplinary actions taken against a practitioners and not all states mark licenses that are restricted. Moreover, licenses can be forged, even though state licensing boards have taken steps to minimize this problem. Therefore, physical inspection of a license alone can be misleading. To supplement the screening of the state licenses of physicians and dentists, VA requires facilities to query two national databasesthe National Practitioner Data Bank (NPDB) and the Federation of State Medical Boards (FSMB) databasewhich contain information about disciplinary actions taken against practitioners. Another available national database, the Healthcare Integrity and Protection Data Bank (HIPDB), contains information on professional disciplinary actions and criminal convictions involving all licensed health care practitioners, not just physicians and dentists. VA is currently accessing HIPDB automatically when it queries NPDB for physicians and dentists because the databases share information. However, VA does not require its facilities to do so for all licensed practitioners even though it is authorized to query HIPDB without a fee. VA also requires that practitioners it intends to hire and who must have national certificates to work in VA facilities, such as respiratory therapists, disclose the national certificates and any state licenses they have ever held. However, VA facility officials are not required to check state licenses disclosed by these practitioners and are only required to physically inspect the national certificates. As with physical inspection of state licenses, physical inspection of national certificates alone can be misleading; not all certificates are marked if restricted, and they can be forged. The only way to know with certainty if a national certificate is current and unrestricted is to contact the issuing national certifying organization. In addition to gaps in VA’s verification of professional credentials, VA has not implemented consistent background screening requirements, which would include fingerprint checks, for all practitioners. Although VA requires background investigations for some practitioners currently employed in VA, it does not require these investigations for all types of practitioners. VA requested and received OPM’s permission to exempt certain categories of health care practitioners from background investigations based on VA’s assessment that these types of practitioners do not need to be investigated. Table 2 lists the practitioners that VA exempts from background investigations. OPM began to offer a fingerprint-only checka new screening optionfor use by federal agencies in 2001. 
Compared to background investigations, which typically take several months to complete, fingerprint-only check results can be obtained within 3 weeks at a cost of less than $25. In commenting on a draft of our report, VA said that it planned to implement fingerprint-only checks for all contract health care practitioners, medical residents, medical consultants, and practitioners who work without direct compensation from VA, as well as certain volunteers. However, VA has not issued guidance to its facilities instructing them to implement fingerprint-only checks on all these practitioners. VA did issue guidance to its facilities to implement fingerprint-only checks for volunteers who have access to patients, patient information, or pharmaceuticals. Implementing fingerprint-only checks for practitioners who are currently exempt from background investigations would detect practitioners with criminal histories. According to the lead VA Office of Inspector General investigator in the Dr. Swango case, if Dr. Swango had undergone a fingerprint check at the VA facility where he trained, VA facility officials would have identified his criminal history and could have taken appropriate action. Additionally, one of the facilities we visited had implemented fingerprint-only checks of medical residents training in the facility and contract health care practitioners. An official at this facility stated that fingerprint-only checks of medical residents and contract practitioners were a necessary component of ensuring the safety of veterans in the facility. In 1996, FSMB recommended that states perform background investigations, including criminal history checks, on medical residents to better protect patients because residents have varying levels of unsupervised patient care. VA's screening requirements are intended to ensure the safety of veterans by identifying practitioners with restricted or fraudulent credentials, criminal backgrounds, or questionable work histories. However, compliance with the existing key screening requirements was mixed at the four facilities we visited. None of the four facilities complied with all of the key VA screening requirements, although all four generally complied with VA's requirement to periodically verify the credentials of practitioners for their continued employment. Although VA created the Office of Human Resources Oversight and Effectiveness in January 2003 expressly to provide oversight of VA's human resources practices at its facilities, it has not provided resources for this office to carry out its oversight function. Without such oversight, VA cannot provide reasonable assurance that its facilities comply with requirements intended to ensure the safety of veterans receiving health care in VA facilities. Even if VA facilities had complied with all key screening requirements, gaps in VA's existing screening requirements allow some practitioners access to patients without a thorough screening of their professional credentials and personal backgrounds. For example, although the screening requirements for verifying professional credentials for some occupations, such as physicians, are adequate, VA does not apply the same screening requirements to all occupations with direct patient care access. Specifically, VA does not require that all licenses be verified, or that licenses and national certificates be verified by contacting state licensing boards or national certifying organizations.
Similarly, while VA relies on two national databases to identify physicians and dentists who have disciplinary actions taken against them, VA does not require facility officials to query HIPDB. This national database provides information on reports of professional disciplinary actions and criminal convictions that may involve currently employed licensed practitioners and those VA intends to hire. As part of its query of another database, VA accesses HIPDB automatically for physicians and dentists, but practitioners such as nurses, pharmacists, and physical therapists do not have their state licenses checked against this national database. In addition, VA does not require all practitioners with direct patient care access, such as medical residents, to have their fingerprints checked against a criminal history database. These gaps create vulnerabilities that could allow incompetent practitioners or practitioners with the intent to harm patients into VA's health care system. In light of the gaps we found and mixed compliance with the key screening requirements by VA facilities, we believe effective oversight could reduce the potential risks to the safety of veterans receiving health care in VA facilities. In our report, we recommend that VA take the following four actions:
- expand the verification requirement that facility officials contact state licensing boards and national certifying organizations to include all state licenses and national certificates held by practitioners VA intends to hire and currently employed practitioners;
- expand the query of the Healthcare Integrity and Protection Data Bank to include all licensed practitioners that VA intends to hire and periodically query this database for practitioners currently employed in VA;
- require fingerprint checks for all health care practitioners who were previously exempted from background investigations and who have direct patient care access; and
- conduct oversight to help ensure that facilities comply with all key screening requirements for practitioners VA intends to hire and practitioners currently employed by VA.

Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact Cynthia A. Bascetta at (202) 512-7101. Mary Ann Curran and Marcia Mann also contributed to this statement.
The Department of Veterans Affairs (VA) employs about 190,000 individuals, including physicians, nurses, and therapists, at its facilities. It supplements these practitioners with contract staff and medical residents. Cases of practitioners causing intentional harm to patients have raised concerns about VA's screening of practitioners' professional credentials and personal backgrounds. This testimony is based on GAO's report VA Health Care: Improved Screening of Practitioners Would Reduce Risk to Veterans, GAO-04-566 (Mar. 31, 2004). GAO was asked to (1) identify and assess the extent to which selected VA facilities comply with existing key VA screening requirements and (2) determine the adequacy of these requirements for its practitioners. GAO identified key VA screening requirements that include verifying state licenses and national certificates; completing background investigations, including fingerprinting to check for criminal histories; and checking national databases for reports of practitioners who have been professionally disciplined or excluded from federal health care programs. GAO reviewed 100 practitioners' personnel files at each of four facilities it visited and found mixed compliance with the existing key VA screening requirements. GAO also found that VA has not conducted oversight of its facilities' compliance with the key screening requirements. GAO found adequate screening requirements for certain practitioners, such as physicians and dentists, for whom all licenses are verified by contacting state licensing boards. However, existing screening requirements for others, such as nurses and respiratory therapists currently employed in VA, are less stringent because they do not require verifying all state licenses and national certificates. Moreover, they require only physical inspection of these credentials rather than contacting licensing boards or certifying organizations. Physical inspection alone can be misleading; not all credentials indicate whether they are restricted, and credentials can be forged. VA also does not require facility officials to query, for other than physicians and dentists, a national database that includes reports of disciplinary actions and criminal convictions involving all licensed practitioners. In addition, many practitioners with direct patient care access, such as medical residents, are not required to undergo background investigations, including fingerprinting to check for criminal histories. This pattern of gaps and mixed compliance with key VA screening requirements creates vulnerabilities to the extent that VA remains unaware of practitioners who could place patients at risk.
The acquisition process at federal agencies generally consists of three phases: (1) acquisition planning, (2) contract award, and (3) contract monitoring. Each phase involves a number of key activities, as shown in figure 1.

In the acquisition planning phase, agencies establish their requirements and develop a plan to meet those requirements. Both program and contracting officials participate in acquisition planning activities. During this phase, agencies conduct market research to determine what products or services are available and on what terms. They select a contracting approach best suited to the nature of the acquisition, addressing, among other things, the availability of existing contracts, the extent of competition required, and the most appropriate contract type, such as cost-reimbursable or fixed-price.

In the award phase, agencies solicit bids, quotes, or proposals from prospective vendors, depending on the contracting method selected. In negotiated acquisitions, they evaluate the submissions from vendors under the evaluation criteria established in the solicitation and award a contract to the vendor representing the best value to the government, based on a combination of technical and cost factors. Agencies follow a similar process when ordering from the Federal Supply Schedule, where quotes from contractors are evaluated using stated evaluation criteria and orders are awarded to the contractor that provides the best value and offers the lowest overall cost alternative.

In the contract monitoring phase, agencies engage in a range of activities intended to ensure that the contractor delivers according to the terms of the contract. These activities often are described in detail in a contract surveillance plan, sometimes called a quality assurance surveillance plan. For cost-reimbursement contracts, agencies may arrange for an audit of costs incurred by the contractor. These audits may be performed by entities such as the agency inspector general or the Defense Contract Audit Agency (DCAA).

NSF spends most of its annual budget of about $7 billion to fund grants to universities and other research entities, but the agency also spent more than $446 million in fiscal year 2011 acquiring goods and services in support of its mission. The largest of these acquisitions involved contracts for logistics support of scientific missions in the Arctic and Antarctica, as well as ocean-drilling projects in various locations. For these types of large-scale projects, NSF uses the negotiated contracting procedures of Part 15 of the FAR. NSF uses negotiated contracting methods for about 66 percent of its contract spending, as shown in figure 2. For another 32 percent of its contract spending, NSF uses a variety of more streamlined contracting methods allowed under the FAR. These include placing orders under Federal Supply Schedule contracts awarded by the General Services Administration or other pre-existing contracts. Placing orders under existing contracts is often a more simplified approach than awarding a new contract. The remaining 1 percent or so of NSF contract spending is through various other methods, such as interagency agreements with the U.S. Navy for deep sea research vessel certification. The Division of Acquisition and Cooperative Support (DACS) at NSF is responsible for the solicitation, negotiation, award, and administration of the agency's contracts for NSF's research facilities and major programs. DACS oversees NSF procurement systems, contracting policy, processes, and guidance.
This Division is under the Office of Budget, Finance, and Award Management, which reports to the Office of the Director. The Office of Inspector General provides independent oversight of the agency's programs and operations, including contracts. The NSF-OIG is responsible for promoting efficiency and effectiveness in agency programs and for preventing and detecting fraud, waste, and abuse. By statute, the NSF-OIG is under the general supervision of the National Science Board and reports to the Board and Congress. Much of NSF's contracting activity is for recurring needs, such as logistics support for its facilities in the polar regions, data collection, or surveys. For example, the National Survey of Recent College Graduates began in 1973 and continues today. In our prior work on acquisition planning practices, we found that documenting decisions, particularly when there is frequent staff turnover, is key to providing insight for subsequent contracts. Specifically, we found that documenting cost estimates is particularly important to help ensure the information is available when planning for follow-on contracts. Incorporating lessons learned from prior acquisitions can help further refine requirements and strategies when planning for future acquisitions. NSF officials must decide on a contract pricing arrangement for every contract or order. The major categories of pricing arrangements NSF uses are fixed-price, time-and-materials, and cost-reimbursement. Under a fixed-price contract, the government generally pays a firm price and may also pay an award or incentive fee related to performance. In a time-and-materials contract or order, the government pays a set amount for every hour of service the contractor provides, plus the cost of any materials used. Because the number of hours to be provided depends on a number of factors, this type of contract requires an enhanced level of government oversight. When using a cost-reimbursement contract, the government agrees to reimburse all the allowable costs incurred by the contractor as prescribed in the contract. These types of contracts can be risky because the government agrees to pay for costs incurred regardless of the outcome achieved. Cost-type contracts that exceed certain dollar thresholds generally are subject to the cost allocation rules of the government's Cost Accounting Standards (CAS), and in these cases the contractor generally is required to disclose its cost accounting practices in a CAS Disclosure Statement. We previously reported on the use of cost-reimbursement contracts at several agencies, including NSF, finding that agencies frequently did not document why they selected this type of contract. Financial statement audits performed by an independent accounting firm on behalf of the NSF-OIG for fiscal years 2009 and 2010 identified significant deficiencies related to the use and monitoring of cost-reimbursement contracts at NSF. Specifically, the audits found that NSF did not ensure the adequacy of contractor accounting systems prior to award or the validity of costs incurred on the contract. In 2011, however, the same firm concluded that the concern had been addressed through the adoption of new policies and procedures. While we were conducting our audit work, NSF was in the process of conducting a self-assessment of its acquisition function in accordance with Office of Management and Budget (OMB) Circular A-123. The agency also retained a consulting firm to review its self-assessment.
In July 2012, the firm issued a report summarizing its findings. We did not assess the methodology, findings, or conclusions of either the NSF self-assessment or the consulting firm's review. In October 2012, NSF updated its contracting manual to incorporate a number of changes. For example, NSF reorganized the manual to align it with the FAR and added additional guidance to address the deficiencies identified in the financial audits. All of the contract activities in our review were subject to prior versions of the contracting manual. The NSF contract files we reviewed reflected the use of selected key acquisition planning practices to varying degrees, but the agency has not provided guidance on the time needed to complete early planning phase activities. Allowing sufficient time to plan procurements may facilitate an increased use of lower-risk contracting vehicles by providing time for the contracting officer to consider including more fixed-priced elements. Our observations on the use of some of the key practices for acquisition planning activities are summarized in table 1 and explained in more detail below. The acquisitions we reviewed all involved some degree of acquisition planning, but the time spent planning and the content of planning documents varied. Planning for the negotiated acquisitions ranged from a few months to more than 6 years, while many of the streamlined acquisitions in our sample had more abbreviated planning periods. Contracting and program officials responsible for one program office told us they often copy planning documents from predecessor orders to compensate for abbreviated planning periods. This practice, however, does not allow for incorporation of new guidance or changing contract requirements. In addition, some of the individual contract acquisition plans for the earlier contracts in our sample did not include details on how the agency planned to evaluate the proposals from competing vendors. Documenting a decision regarding the plan for proposal evaluation is an important component of the acquisition planning phase. Contracting guidance at NSF does not identify the range of time needed to conduct acquisition planning activities for the types of acquisition methods it employs. Currently, the guidance states that the process of acquisition planning should begin as soon as a program need is identified and it is determined that the need must be met through the use of resources from outside the government. The guidance does not provide any detail, however, on the expected range of time needed to conduct planning activities in the earliest stages of an acquisition, when key documents such as the statement of work and a cost estimate are prepared. Acquisition planning usually occurs in three phases, and while NSF has established expected time frames for the latter stages of acquisition planning, the agency has not established such expectations for the earliest planning phase. Figure 3 depicts what we found at NSF. Allowing sufficient time to plan procurements may provide agencies a better opportunity to clearly define contract requirements, outline source selection procedures, conduct market research to support competition, estimate costs, and consider opportunities for increased use of lower-risk contracting vehicles containing more fixed-priced elements. Conversely, the lack of sufficient time for planning may have adverse effects, such as unplanned delays.
For example, NSF had to extend one streamlined order in our review on a non-competitive basis for more than a year and a half in order to complete planning tasks for the follow-on order. The contracting officer used the additional planning time to conduct the analysis needed to incorporate more fixed-priced elements into the new order. Planning for the earlier order did not include documentation of a price history analysis, which, according to contracting officials, may have helped expedite the follow-on planning; this omission was likely due to the short planning time frames for the earlier order. In another case, the delayed award of one of the orders in our review caused a compressed period for data collection for a report with firm deadlines. The schedule risk from these delays could lead to higher overall costs. Further, officials from two program offices told us that they would benefit from knowing an expected time range to complete early planning activities. For example, in the absence of guidance on the time needed to complete early planning activities, program and contracting officials responsible for NSF's largest contract told us they had difficulty convincing their colleagues of the appropriate time to initiate contract planning. They added that this acquisition required a number of changes before a follow-on contract could be awarded—some based on updates to the FAR and some based on internal decisions, including the use of a different source selection strategy. Market research is a key element in the acquisition planning phase that provides insight into available sources for the acquisition and may provide information on estimated costs. We found evidence of market research in each of the acquisition plans we reviewed, though the link between the research conducted and its impact on the acquisition strategy was not always clear. For example, the acquisition plan for one streamlined acquisition noted concerns about the lack of offerors for past solicitations. The acquisition plan stated that NSF would use the Federal Supply Schedule and release the request for quotations to six potential offerors, but it did not address how market research influenced this decision. By contrast, NSF engaged in extensive planning for its Integrated Ocean Drilling Program, including requirements development and market research to identify potential sources to support its mission. According to officials, this planning, which occurred over about a 5-year period, consisted of soliciting interest from more than 30 international institutions using various techniques such as market surveys and sources sought notices. NSF used this multi-year planning period to set up the funding and organizational infrastructure requirements of this complex international program. All of the files we reviewed showed that during the planning phase agency officials had addressed how the contract would be priced. However, the planning documentation for the cost-reimbursable acquisitions in our review did not consistently include assessments of the additional risk and burden these high-risk contracts place on the agency or an assessment of the potential for firmer pricing in future acquisitions. Knowing the risk of using a cost-reimbursable contract, and identifying opportunities to use a less risky contract type once experience provides a basis for firmer pricing, is a sound practice identified by our prior work, by the Department of Defense, and, more recently, in federal regulation.
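A price-history analysis of the kind this practice calls for can be simple in concept: recurring tasks whose billed costs have been stable across prior option years are candidates for fixed pricing, while volatile tasks may warrant retaining flexible pricing. The sketch below illustrates the idea with invented task names, cost figures, and a 10 percent variability threshold; it is a conceptual illustration, not NSF's actual method.

```python
# Conceptual illustration of using cost history to flag tasks that may be
# candidates for firmer (fixed) pricing. Task names and costs are invented.
from statistics import mean, pstdev

# Billed costs for recurring tasks across three prior option years (dollars).
cost_history = {
    "data_collection": [410_000, 425_000, 418_000],  # stable
    "survey_mailing": [88_000, 90_500, 89_000],      # stable
    "ad_hoc_analysis": [60_000, 140_000, 95_000],    # volatile
}

THRESHOLD = 0.10  # flag tasks whose relative cost variability is under 10 percent

for task, costs in cost_history.items():
    variability = pstdev(costs) / mean(costs)  # coefficient of variation
    verdict = "fixed-price candidate" if variability < THRESHOLD else "retain flexible pricing"
    print(f"{task}: variability {variability:.1%} -> {verdict}")
```

Run on these toy figures, the stable tasks come in under 2 percent variability and the volatile one at roughly 33 percent, which mirrors the judgment the contracting officer describes in the next paragraph: fix the price of predictable work and keep flexible pricing for the genuinely uncertain remainder.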
Despite the risk associated with cost-type contracts, NSF contracting officials did not document their acknowledgment of this risk for an early contract for the ocean drilling program or whether they would attempt to minimize the future use of cost-type contracts. Further, in a prior report we noted that NSF's procurements of data collection and analysis services for mandated surveys did not consider pricing history and whether there was a basis to transition to firmer pricing. According to NSF officials, when re-awarding these types of survey procurements, staff will make an effort to identify tasks to convert to firmer pricing. In fact, a contracting officer responsible for the survey-related orders in our sample told us he has been conducting analysis to determine what tasks could be transitioned to a fixed-price contract type rather than a time-and-materials contract type. He stressed that some tasks are less suitable for fixed-pricing due to the unknowns and "what ifs" inherent in the work, but his goal is to incorporate fixed-pricing into 70 to 80 percent of each survey order. We identified examples of this transition to firm fixed-price elements in some of NSF's more recently awarded streamlined acquisitions. Contract documentation for negotiated and streamlined acquisitions showed that NSF generally followed key practices in the award phase. Table 2 summarizes our findings based on the contracts and orders we reviewed. Most of the contracts in our sample included price reasonableness determinations, as outlined in both federal regulation and the NSF guidance current at the time of our review. For most of the streamlined acquisitions we reviewed, NSF documented price reasonableness determinations, including an analysis of the contractor's proposed labor hours and level of effort. In one case, contracting staff worked with an offeror to obtain lower labor rates that were more in line with the government cost estimate. These actions decreased the cost of the order by approximately 8 percent ($1.2 million). In recent years, NSF has taken steps to address deficiencies related to accounting system and disclosure statement reviews identified in its fiscal year 2009 financial statement audits. Specifically, NSF clarified its CAS disclosure statement and accounting system review procedures to better align with sound practices identified by the NSF-OIG and in federal regulation. Contract file documentation indicates that NSF has improved in this area, with most of the negotiated contracts we reviewed having documentation of more recent accounting system and CAS disclosure statement reviews, and the most recent contract having documentation of pre-award audits of all contractors in the competitive range. One of the earlier contracts did not have pre-award audits on file or an accounting system review prior to award; NSF officials told us that they did not think this requirement applied. In another earlier case, the contracting officer waived the requirement for a CAS disclosure statement adequacy determination prior to award with the expectation that the determination would be made shortly after award. However, NSF did not have documentation of the final disclosure statement adequacy determination. NSF updated its guidance and took steps to incorporate sound practices related to contract monitoring, but the agency has not made arrangements for audits of some of the larger contracts we sampled. Our findings are summarized in table 3.
Most of the contracts we reviewed included documentation of surveillance plans outlining how NSF would monitor contractor performance and costs, although one of the streamlined acquisitions did not have the surveillance documents called for in the acquisition plan. Further, we found evidence that at least some monitoring activities occurred for all the procurements we reviewed, though not always as specified in the monitoring plans or using the deliverables described in the contract or order. For example, the acquisition plan for a large information technology (IT) order states that the contractor shall provide "daily, weekly, and monthly progress reports" as well as an IT Management Plan and other ad hoc reports as required, with similar requirements reflected in the order. The contracting officer for this order was not aware of any daily progress reports for this order, and added that the monitoring process for these types of acquisitions depends on the quality of the contractor, noting that for some contracts with few performance issues, the monitoring is less rigorous. In another case, the contracting officer noted that despite the statement of work calling for a Quality Assurance Plan, such a plan would be too restrictive for an IT support contract due to the frequent changes in IT systems. Our prior reports state that without consistent cost surveillance, such as through incurred cost audits, an agency may be exposed to the unnecessary risk of overpaying the contractor. Further, NSF-OIG's fiscal year 2009 financial statement audits recommended that NSF obtain incurred cost submissions and audits for its largest cost-reimbursable contracts, depending on materiality and risk, to assure the validity of costs billed to NSF. In response, NSF updated its guidance on incurred cost audits and took the necessary steps to obtain incurred cost audits for its largest contract. Around the same time, in August 2009, the NSF-OIG and the NSF Office of the Director signed a memorandum of understanding (MOU) that provides procedures to ensure appropriate coordination between the NSF-OIG and NSF for the performance and funding of contract audits. The MOU indicates that the NSF-OIG will provide, within its resources, appropriated funds necessary to perform contract audits selected for its annual audit plan. The NSF-OIG solicits recommendations from NSF per the MOU and prioritizes its annual audit plan based on this input, its own needs, and a variety of risk factors. The MOU identifies the following factors the NSF-OIG uses to prioritize contract audits: type of contract, materiality, whether NSF is the cognizant agency responsible for contractor oversight, known prior audit concerns, contract administration at other federal agencies, and whether NSF expects to continue to have a relationship with the contractor. For audits that NSF determines necessary that are not in the NSF-OIG audit plan, the MOU states that "NSF will obtain and fund the services of an outside auditor." Contracts Branch officials told us that their first option is to ask the NSF-OIG to obtain an audit, and if the NSF-OIG does not complete the contract audit, the branch tries to obtain alternative funding. NSF officials told us, however, that alternative funding requires approval at senior management levels, and contracting staff continue to rely on the NSF-OIG as the primary means for obtaining contract audits. The NSF Director and NSF-OIG identified the need for incurred cost audits of an ocean drilling contract in our sample.
Despite the MOU, the agency has not made arrangements for these audits of the contract. Officials stated that for earlier years of this contract, the Contracts Branch identified and provided funds for the contracting officer to initiate audits for this contract through the Defense Contract Audit Agency (DCAA). According to officials, an audit of a prime subcontractor for this contract resulted in $1.5 million in recovered funds. But at the time of our review, despite agreement on the importance of additional audits, the findings from the prior year’s audits, and NSF’s continued relationship with the contractor, the agency had yet to make arrangements to plan and fund incurred cost audits for more recent fiscal years for this contract, according to officials. Similarly, despite the contracting officer requesting incurred cost audits for another major contract in our review, the audit did not meet the NSF-OIG priorities. According to officials, NSF has not conducted or planned for audits on this contract. In addition, audits for another major contract we reviewed are not scheduled to be completed until fiscal year 2015, which is about two years after the contract expires. In a recent report, we pointed out that timely closing of contracts, including completing any necessary incurred cost audits, can help the government limit its financial risk and possibly recover improper payments.

Sound acquisition planning, including cost estimation and identification of the most cost-effective contract type, is important to establishing a strong foundation for successful outcomes for the millions of dollars NSF spends annually on acquisitions. Without sufficient planning time frames to develop acquisition plans that align with sound acquisition practices, NSF may have a limited ability to develop a strong foundation for its acquisitions. How long the early acquisition planning activities should take is not covered in existing NSF guidance and will vary based on the complexity of the acquisition. However, without a clear understanding of the time frames needed for the early acquisition planning process, program officials may not know when to start planning or how long the planning will take, potentially increasing the likelihood of poorly prepared documents and contract delays. Better insight into when acquisition planning should begin would help ensure sufficient time to carry out the important acquisition planning activities that are designed to facilitate more successful outcomes.

When an acquisition involves substantial uncertainties and the agency deems a cost-type contract the most appropriate vehicle, contract and program staff need to provide additional oversight to protect the government’s interests. NSF has taken steps to address NSF-OIG recommendations to increase contract oversight. NSF has a management responsibility to ensure that adequate resources are available to enable contracting officers to determine that costs billed by contractors are allowable, through incurred cost audits or similar assessments. The process in place to ensure the necessary audits occur requires coordination between the NSF-OIG and the NSF Office of the Director; however, the process has not worked for some of the contracts we reviewed. Further, the Contracts Branch continues to place a strong reliance on the NSF-OIG to provide the resources to obtain the audits. Without a process to ensure audits are conducted in cases when NSF-OIG resources are not available, NSF exposes itself to unnecessary risk and cannot assure the validity of costs billed.
We recommend that the Director of NSF take the following two actions: To help ensure good acquisition outcomes through comprehensive acquisition planning, direct the Division of Acquisition and Cooperative Support (DACS) to supplement existing guidance on the time frames for acquisition planning to include a focus on the early stages. Consistent with the terms of the existing MOU with the Office of the Inspector General, take steps to arrange, and fund as necessary, timely audits of major contracts.

We provided a draft of this report to NSF for review and comment. In written comments, NSF agreed with our recommendations. NSF also provided technical comments, which we incorporated as appropriate. NSF’s comments are reprinted in appendix II.

We are sending a copy of this report to the Director of the National Science Foundation. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or WoodsW@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

As requested by the Subcommittee on Commerce, Justice, Science, and Related Agencies, House Committee on Appropriations, we reviewed the National Science Foundation’s (NSF) contracting practices. Specifically, we assessed the extent to which NSF incorporates key contracting practices in the three major phases of the contracting process: (a) acquisition planning, (b) contract award, and (c) post-award contract monitoring. Within each contracting phase, we focused our work on selected elements:

Acquisition planning. We focused on the completeness and review of written acquisition plans, market research, contract type determinations, and time frames for planning. We selected these elements because they are critical to the successful planning of a contract and, in one case, had been identified in the past by the NSF Office of the Inspector General (NSF-OIG) as a potential concern.

Contract award. We focused on cost and price analyses, cost accounting system reviews and pre-award audits, and Cost Accounting Standards (CAS) disclosure statement reviews. We selected these elements because they were identified by the NSF-OIG as deficiencies in the past and are essential to determining that the contractor has the ability to complete the contract cost requirements.

Contract monitoring. We focused on the development of monitoring or surveillance plans, monitoring activities, and incurred cost audits. These activities were previously identified by the NSF-OIG as deficiencies and are key to determining if the contractor is performing as expected and within allowable costs.

To determine key practices in each of these areas, we relied on prior reports and findings from GAO, the NSF-OIG, and other agencies. Below is the list of GAO reports we relied on:

GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1999);

GAO, Contract Management: Trends and Challenges in Acquiring Services, GAO-01-753T (Washington, D.C.: May 22, 2001);

GAO, Defense Contracting: Improved Insight and Controls Needed over DOD’s Time-and-Materials Contracts, GAO-07-273 (Washington, D.C.: June 29, 2007);

GAO, Contract Management: Extent of Federal Spending under Cost-Reimbursement Contracts Unclear and Key Controls Not Always Used, GAO-09-921 (Washington, D.C.: Sept.
30, 2009); and

GAO, Acquisition Planning: Opportunities to Build Strong Foundations for Better Services Contracts, GAO-11-672 (Washington, D.C.: Aug. 9, 2011).

We also reviewed internal NSF guidance and the Federal Acquisition Regulation (FAR) for additional key practices.

To determine the extent to which NSF’s contracting practices incorporate key practices and address prior NSF-OIG recommendations, we reviewed a nongeneralizable sample of 11 contracts and orders with funding obligations over $3 million in fiscal year 2011, the latest year for which data were available when we began our work. We used a risk-based approach to select our sample to ensure it included NSF acquisitions with the highest obligation dollar amounts. The 11 contracts and orders selected for review represent 70 percent of total contract obligations in fiscal year 2011 and reflect a mix of program offices, a range of obligation amounts, and a variety of contract types, such as fixed-price and cost-reimbursement. We selected four contracts for which NSF used the negotiation process set forth in Part 15 of the Federal Acquisition Regulation and seven orders on existing contracts for which NSF used streamlined procedures described in other parts of the FAR. The four negotiated acquisitions in our sample are cost-reimbursement contracts and represent about 56 percent of NSF’s total fiscal year 2011 contract obligations and about 80 percent of the obligations in our sample. The seven streamlined acquisitions represent about 14 percent of NSF’s fiscal year 2011 contract obligations and 20 percent of the obligations in our sample. One of the seven streamlined orders is a hybrid contract type using fixed-price and time-and-materials (T&M) elements; one is a cost-reimbursable order; and the other five are T&M orders. Although the 11 contracts were active at the time of our review, some of the selected contracts were awarded more than 7 years ago, before NSF updated its contracting manual to provide more procedural guidance, and some more recently.

We reviewed the files for the selected contracts and used practices identified in the FAR, NSF internal guidance, and prior GAO reports to assess NSF’s use of key practices and procedures for the acquisition planning, award, and contract monitoring phases. In addition to the contract file review, we met with contract and program officials to confirm our understanding of information in the contract files and of NSF’s practices and procedures as evidenced by the contract files. We also reviewed and considered additional documentation provided by the program and contract officials that was not maintained in the contract files.

To assess progress NSF made in response to prior NSF-OIG findings, we reviewed prior NSF-OIG recommendations and corrective action plans. We met with NSF-OIG officials to better understand their recommendations related to our review and used this information to assess progress made in response to these findings. NSF was in the process of a full acquisition system assessment when we initiated our review. While we were completing our audit work, NSF issued a review of its acquisition function in July 2012. While we met with the internal controls officials involved in this review to understand their process, we did not assess NSF’s internal review as part of our work.

We conducted this performance audit from February 2012 to March 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Penny Berrier, Assistant Director; Caryn Kuebler; Margaret Childs; Danielle Greene; Jeffrey Hartnett; Julia Kennon; Jean McSween; Emily Owens; Ken Patton; Erin Schoening; Roxanna Sun; and Alyssa Weir also made key contributions to this report.
The National Science Foundation (NSF) spends more than $400 million of its $7 billion annual budget acquiring goods and services in support of its mission to promote science and engineering. Much of this spending involves exploration activities in remote locations throughout the world, such as the Arctic and Antarctic. GAO examined the extent to which NSF uses key contracting practices in three phases of the acquisition process: (a) acquisition planning, (b) contract award, and (c) post-award contract monitoring. GAO selected and reviewed a nongeneralizable sample of 11 contracts or orders with at least $3 million in funding obligations for fiscal year 2011, which accounted for about 70 percent of NSF's total contract obligations for that year. Although all 11 contracts and orders received funding during fiscal year 2011, some were awarded more than 7 years ago and some more recently. GAO reviewed each of the 11 contracts to determine the extent to which they reflected the use of key contracting practices based on the Federal Acquisition Regulation, GAO's prior work, and findings of NSF's Office of Inspector General (NSF-OIG). GAO also reviewed NSF contracting policies and met with NSF contracting and program officials.

For the contracts GAO reviewed, NSF generally used key contracting practices in each of the three phases of the acquisition process, but the agency needs additional guidance on early acquisition planning as well as arrangements for contract audits. The contracts GAO reviewed all involved some degree of acquisition planning, but NSF's guidance does not address appropriate time frames for early planning activities. Without such guidance, NSF contract and program officials said they could not convince their colleagues of the need to initiate early planning activities. Delays in these activities can lead to further delays later. For example, NSF had to extend one order on a noncompetitive basis for more than a year to complete planning tasks for the follow-on order. In another case, the delayed award of an order compressed the data collection period for a report with firm deadlines, which could lead to higher overall costs. Further, having sufficient time for early planning may facilitate an increased use of lower-risk contracting approaches.

Contract documentation showed that NSF generally followed key practices in the award phase. An NSF corrective action plan, in response to the NSF-OIG's 2009 financial statement audits, clarifies the agency's procedures for reviewing contractors' accounting practices and financial disclosure statements to better align with key practices. Contract file documentation shows NSF improved in this area, with most of the negotiated contracts having documentation of accounting system reviews. Further, NSF generally documents price reasonableness determinations.

NSF updated its guidance and took steps to incorporate key contract monitoring practices. The NSF-OIG's 2009 financial statement audits recommended that NSF obtain incurred cost submissions and audits for its largest cost-reimbursable contracts to ensure the validity of costs billed to NSF. Around the same time, the NSF-OIG and the NSF Office of the Director signed a memorandum of understanding (MOU) that provides a process for arranging for contract audits. Audits for one of the ocean drilling contracts completed in 2012 resulted in $1.5 million in recovered funds. The NSF Director and NSF-OIG have both identified additional audits of this contract as a top priority.
However, despite the terms of the MOU and the agreement between NSF and the NSF-OIG on the need for further audits, arrangements have not been made to conduct additional audits of this contract for more recent fiscal years, according to officials. Similarly, despite requests from the contracting officer, NSF has not made arrangements for incurred cost audits for another large contract GAO reviewed.

GAO recommends that the Director of NSF (1) supplement existing guidance on acquisition planning to address the time needed for the early stages of the process and (2) arrange for audits to be performed on major contracts, consistent with the terms of the memorandum of understanding with the NSF-OIG. NSF agreed with the recommendations.
African American children were more likely to be placed in foster care than White or Hispanic children in 2006, and at each decision point in the child welfare process the disproportionality of African American children grows. Nationally, although African American children made up less than 15 percent of the overall child population in the 2000 Census, they represented 26 percent of the children who entered foster care during fiscal year 2006 and 32 percent of the children remaining in foster care at the end of that year (see fig. 1).

There are various options for placing children in temporary and permanent homes through the child welfare system. Temporary options include foster care with relatives or nonrelatives, whether licensed or unlicensed, and group residential settings. According to HHS, approximately one-fourth of the children in out-of-home care are living with relatives, and this proportion is higher for Hispanic and African American families. For permanent placements, children can be reunified with their parents, or, if reunification is not considered possible, children can be adopted or live with a legal guardian. Although both adoption and guardianship are considered permanent placement options under federal law, an important difference is that adoption entails terminating parental rights, while guardianship does not. Another difference is that some adoptions may be subsidized with federal funds.

Federal funds account for approximately half of states’ total reported spending for child welfare services, with the rest of the funding coming from states and localities. In fiscal year 2004, total federal spending on child welfare was estimated to be $11.7 billion, based on analysis of data from more than 40 states. Titles IV-E and IV-B of the Social Security Act are the principal sources of federal funds dedicated for child welfare activities. Title IV-E supports payments to foster families, subsidies for families who provide adoptive homes to children whom states identify as having special needs that make placement difficult, and related administrative costs on behalf of children who meet federal eligibility criteria. Title IV-E payments for foster care maintenance are open-ended entitlements. Title IV-B authorizes funds to states for broad child welfare purposes, including child protection, family preservation, and adoption services; these funds are appropriated annually. Federal block grants, such as the Temporary Assistance for Needy Families (TANF) grant and the Social Services Block Grant (SSBG), provide additional sources of funds that states can use for child welfare purposes. States have discretion to use these funds to provide direct social services for various populations, including child welfare families, the elderly, and people with disabilities.

In 1994, the Congress authorized the use of demonstration waivers to encourage innovative and effective child welfare practices. These waivers, typically authorized for 5 years, allowed states to use Title IV-E funds to provide services and supports other than foster care maintenance payments. For example, four states had completed demonstrations that involved subsidized guardianships, and, as of May 2007, seven states had active guardianship demonstrations and one state had not yet implemented its guardianship demonstration. Demonstration waivers must remain cost-neutral to the federal government, and they must undergo rigorous program evaluation to determine their effectiveness.
A complex set of interrelated factors influences the disproportionate number of African American children who enter foster care, as well as their longer lengths of stay. Major factors affecting children’s entry into foster care included African American families’ higher rates of poverty, difficulties in accessing support services, and racial bias or cultural misunderstanding among child welfare decision makers, as well as families’ distrust of the child welfare system. Factors often cited as affecting African American children’s length of stay in foster care included the lack of appropriate adoptive homes for children, parents’ lack of access to support services needed for reunification with their children, and a greater use of kinship care among African American families. (See fig. 2.)

In our survey, 33 of the 48 states from which we received responses reported that high rates of poverty in African American communities and issues related to living in poverty may increase the proportion of African American children entering foster care compared to that of children of other races and ethnicities. Across the nation, African American families were nearly four times more likely to live in poverty than White families, according to U.S. Census data. Since foster care programs primarily serve children from low-income families, this could account for some of the disproportionate number of African American children in the foster care system. In addition, child welfare directors in 25 states reported that the greater number of African American single-parent households contributed to African American children’s entry into foster care. According to the most recent National Incidence Study, children of single parents, who are also more likely than married couples to be poor, had a 77 to 87 percent greater risk of harm than children from two-parent families. Across the nation, 34 percent of African American family households with children under 18 years of age were headed by single females, compared to 9 percent for Whites and 19 percent for Hispanics, according to U.S. Census data.

Moreover, families living in impoverished neighborhoods often do not have access to the kinds of supports and services that can prevent problems in the home from leading to abuse or neglect, according to states we surveyed and other research. Such supports and services include affordable and adequate housing; substance abuse treatment; access to family services such as parenting skills workshops and counseling; and adequate legal representation. Also, there is some evidence that African American families, in particular, are not offered the same amount of support services when they are brought to the attention of the child welfare system.

Coupled with African American parents’ greater distrust of the child welfare system, racial bias or cultural misunderstanding among decision makers also emerged in our survey as major factors contributing to the disproportionate number of African American children entering foster care. According to state child welfare officials and some researchers we interviewed, African American families’ distrust of the child welfare system stems from their perception that the system is unresponsive to their needs and racially biased against them. This perception can shape the families’ dynamics in their initial contacts with mandated reporters, caseworkers, and judges, which can increase the risk that the child will be removed from the home.
In our survey, state child welfare directors also reported that they considered racial bias or cultural misunderstanding on the part of those reporting abuse or neglect, such as teachers, medical professionals, or police officers, as well as among caseworkers, as factors in the disproportionate representation of African American children entering foster care. In support of this view, some studies have found that medical professionals are more likely to report low-income or minority children to child protective services. Although research on racial bias or race as a predictor for entry into foster care is not always consistent, a recent review of the current research concluded that race is an important factor that affects the decision to place children into foster care.

Among factors cited as affecting African American children’s longer lengths of stay in foster care, officials from 29 states cited an insufficient number of appropriate adoptive homes as a key factor. African American children constituted nearly half of the children legally available for adoption in 2004, and they waited significantly longer than other children for an adoptive placement. Factors that make finding adoptive families for African American children challenging include the difficulty many states have in recruiting adoptive families of the same race and ethnicity as the children waiting for adoption and the unwillingness of some families to adopt a child of another race. In addition, states we surveyed reported that African American children waiting to be adopted were older, and prospective adoptive parents are more inclined to adopt younger children. (See fig. 3.)

Additionally, state officials reported the belief that African American children are more likely to be diagnosed as having medical and other special needs, which may contribute to their longer lengths of stay in foster care. In fact, African American children in foster care in 2004 were only slightly more likely to have been diagnosed as having medical conditions or other disabilities (28 percent) than White children in foster care (26 percent), according to HHS data. However, 23 percent of African American children who were adopted out of foster care had a medical condition or disability, compared to 31 percent of White children in the same category.

Some of the same factors that states view as contributing to African American children’s entry also contribute to their difficulties in exiting foster care and being reunified with their families. In our survey, nearly half of the states considered the lack of affordable housing, distrust of the child welfare system, and lack of substance abuse treatment as factors contributing to African American children’s longer lengths of stay. The lack of such supports and other services in many poor African American neighborhoods contributes to children’s longer stays in foster care because these services can influence parents’ ability to reunify with their children in a timely manner, according to our survey, interviews, and research.

States also reported that the use of kinship care was a factor contributing to longer lengths of stay in foster care for African American children. African American children are more likely than White and Asian children to enter into the care of relatives, which is associated with longer lengths of stay.
Relatives may be unwilling to adopt the child because it would require termination of their relative’s parental rights or because they might lose needed financial support they receive as foster parents. However, despite the longer lengths of stay, child welfare researchers and officials we interviewed consider these placements to be positive options for African American children because they are less stressful to the child and maintain familial ties.

Researchers and child welfare administrators we interviewed stressed that no single strategy could fully address disproportionality in foster care, partly because so many interrelated factors contribute to it. According to our survey, the strategies that states implemented tended to focus on addressing racial and cultural bias in decision making, families’ problems in accessing support services, and agencies’ challenges in finding permanent homes so that children can exit foster care more quickly. In addition, data collection and analysis were considered essential for identifying problems and devising strategies to address them, but states reported needing additional assistance in this area.

To help mitigate bias and cultural misunderstanding among decision makers, states reported implementing a range of strategies, such as including family members in case planning; providing training to strengthen caseworkers’ competency in working with families from various cultures; reaching out to ensure that public officials are not inappropriately referring families for abuse and neglect through mandated reporting; and implementing the use of certain tools to help caseworkers make more systematic decisions regarding the level of a child’s risk. (See fig. 4.) According to an evaluation in Texas, for example, for African American families who participated in case planning that included family group decision making, 32 percent of the children returned home, more than twice the proportion for families who received traditional services.

To improve families’ access to services, states reported collaborating with neighborhood-based support organizations, establishing interagency agreements to improve access to these services, and implementing an alternative approach to the assessment process that emphasizes helping families obtain needed supports and services instead of removing children from their families. For example, in Los Angeles County, child welfare officials went door to door in minority neighborhoods to find service providers beyond those with whom they had historically contracted. This collaboration helped build trust between the community and the child welfare agency and increased families’ use of the services provided.

For African American children who cannot ultimately be reunified with their parents, states also reported devising strategies to increase the number of permanent homes available to them. To increase the options for African American children, 46 states reported making diligent searches for fathers and other paternal kin who can care for these children, which was not a routine practice until recently. Additionally, a federal law passed in 1994 and amended in 1996 requires states to diligently recruit potential foster and adoptive families that reflect the ethnic and racial diversity of children in the state who need foster and adoptive homes.
Likely in response to these laws, states have adopted various strategies to recruit greater numbers of African American adoptive parents, such as contracting with faith-based organizations and convening adoption support teams. However, despite these efforts, the number of African American children adopted by African American parents has not increased in recent years. In addition, HHS’s 2001 to 2004 review found that only 21 of 52 states were sufficiently recruiting minority families, and one report found that the recruitment of minority families was one of the greatest challenges for nearly all states.

Using subsidized guardianship as an alternative to adoption may hold particular promise for reducing disproportionality, and more than half of the states surveyed reported using this strategy. African American children are more likely than White children to be placed with relatives for foster care, which is generally a longer-term placement, and these relative caregivers are also more likely than nonrelative foster parents to be low-income. They may be unwilling to adopt because they may find it difficult financially to forgo foster care payments or because adoption entails terminating the parental rights of their kin. However, subsidized guardianship programs provide financial support for foster parents (often relatives) who agree to become legally responsible for children but are unable or unwilling to adopt. When Illinois and California implemented two of the largest of such programs, they subsequently saw an increase in permanent placements for all children. After instituting their subsidized guardianship programs, more than 40 percent of children who had been in long-term relative foster care in both states found permanency. In Illinois, this increase in permanency also coincided with a reduction in the disproportionate number of African American children in foster care.

In addition to these types of strategies, child welfare administrators and researchers told us that data collection, analysis, and dissemination are needed to inform attempts to address disproportionality. These data can include not only disproportionality rates but also information that identifies the extent to which disproportionality occurs among different age groups, at different stages in the child welfare process, and in different locations. For example, a California researcher used state data to show that African American infants enter foster care at a much higher rate than infants of other races or ethnicities and that this disproportionality grows as children get older because African American children are also less likely to exit foster care. Such data analyses help states and localities devise strategies to address the issue and can also be useful for building consensus among community leaders and policymakers for action. However, some state and local agencies have limited capacity to do this. In responding to our survey, 25 states reported that receiving technical assistance from HHS in calculating disproportionality rates and tracking them over time would be useful. California state child welfare officials told us that without the aid of a university researcher, they would not have the ability to help counties that lack the capacity to collect and analyze their data. Despite the importance of data analysis, 18 states reported that they were not regularly analyzing or using data in their efforts to address disproportionality.
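Because several states asked for help specifically with calculating disproportionality rates, a worked example may be useful. The following is a minimal illustrative sketch, not a method prescribed by HHS or the states: it assumes the common definition of a disproportionality index as a group's share of the foster care population divided by its share of the general child population, and it uses the national figures cited earlier in this statement.

def disproportionality_index(share_in_care, share_in_population):
    # A ratio above 1.0 indicates the group is overrepresented in foster care.
    return share_in_care / share_in_population

# National figures cited earlier: African American children were less than
# 15 percent of the child population (2000 Census) but 26 percent of children
# entering foster care and 32 percent of children in care at the end of
# fiscal year 2006.
population_share = 0.15
entry_share = 0.26
in_care_share = 0.32

print(round(disproportionality_index(entry_share, population_share), 2))    # 1.73 at entry
print(round(disproportionality_index(in_care_share, population_share), 2))  # 2.13 in care

Computed separately by age group, county, or decision point, the same ratio supports the kind of analysis described above, in which disproportionality was shown to grow as children age because African American children are less likely to exit care.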
HHS has made technical assistance and information on disproportionality available to states at conferences and through various HHS Web sites. In addition, the agency is compiling an inventory of tools and best practices for addressing disproportionality. Despite these efforts, states report that they need further information and technical assistance to strengthen their current efforts in addressing disproportionality. Accordingly, in our July 2007 report, we recommended that HHS take certain actions to further assist states in understanding and addressing the nature and extent of racial disproportionality in their child welfare systems. In its comments, HHS noted that our recommendation was consistent with its efforts to provide technical assistance to states for addressing disproportionality, but the department did not address the specific actions we recommended. We continue to believe that it is important for HHS to take these actions to help states address this complex issue.

While states viewed some federal policies as helpful for reducing the proportion of African American children in foster care, they also expressed concerns regarding policies that limit the use of federal funds to provide preventive services and support legal guardianship arrangements. As an alternative to adoption, states considered subsidized guardianship particularly helpful in enabling African American children to exit foster care but noted that while they can use federal child welfare funds to pay subsidies to adoptive parents, they cannot do so for guardians.

At least half the states we surveyed noted that the structure of federal child welfare funding may contribute to disproportionality by favoring foster care placements over services to prevent the removal of children from their homes in the first place. Of particular concern to 28 states in our survey were the caps on funding for preventive and family support services under Title IV-B, and 25 states expressed concern about their inability to use foster care funds under Title IV-E for purposes other than making payments to foster care families. A recent GAO report similarly found that preventive and family support services were the services most in need of greater federal, state, or local resources. According to California and Minnesota officials, because the majority of federal child welfare funds are used for foster care payments instead of preventive services, federal funding policies did not align with states’ efforts to reduce the number of children entering foster care by serving at-risk children safely in their homes. However, states do have the freedom to use other federal funds, particularly TANF block grants, to provide preventive and supportive services to families, and 23 states reported that the ability to use these funds contributes to a reduction in the proportion of African American children in foster care. Still, states face competing priorities for the use of their TANF block grant funds, and not all states use them for child welfare activities.

Once children are removed, states reported that federal policies promoting adoption were generally helpful; however, states’ views were mixed on certain requirements specifically intended to eliminate race-related barriers to adoption.
Policies that promote adoption of African American children were generally viewed as helpful, such as allowing states to classify African American children as having “special needs,” which allows the states to provide subsidies to adoptive parents, according to our survey results. However, views of other requirements were mixed. Although 22 states reported that the federal policies requiring states to diligently recruit ethnically and racially diverse adoptive families would help reduce disproportionality, 9 states reported the federal requirements had no effect, and 15 states reported that they were unable to tell. States continue to face challenges in recruiting adoptive families, such as a shortage of willing and qualified parents, especially for older African American children, or a lack of resources for recruiting initiatives, and more than half of the states are not meeting HHS performance goals in this area. Over the last 5 years, African American children and Native American children have consistently experienced lower rates of adoption than children of other races and ethnicities, and since 2000, adoption rates have reached a plateau, according to HHS data and other research.

As an alternative to adoption, many child welfare officials and researchers we interviewed considered subsidizing legal guardianship a particularly important way to help African American children exit foster care. However, there are no federal subsidies for guardianship similar to those available for adoption, which constrains states’ ability to place children in these arrangements. Seven states have a federal demonstration waiver, which allows them to use Title IV-E funds for subsidized guardianship; all of them did so in a cost-neutral manner, as required by the waivers. In California and Illinois, subsidizing these legal guardianships has been found to reduce the number of children in foster care, including African American children. In addition, guardianship and adoption have both been found to provide comparable levels of stability for children and show similar outcomes in terms of emotional and physical health, according to an evaluation of Illinois’s guardianship program.

Because of the challenges states face finding adoptive homes for many African American children and because legal guardianship may offer a more suitable alternative for families who want to permanently care for related children without necessarily adopting them, we recommended, in our 2007 draft report, that HHS pursue specific measures to allow adoption assistance payments to be used for subsidizing legal guardianship. In its comments, HHS disagreed with our recommendation, stating that its proposal for restructuring child welfare funding, known as the Child Welfare Program Option, would give states the option to do this. However, HHS has presented this option in its budget proposal each year since 2004, and no legislation has been offered to date to authorize it. Moreover, even if enacted, it is unknown how many states would choose to implement this funding structure. Because the viability of HHS’s proposal is uncertain, in our final July 2007 report, we suggested that Congress consider amending current law to allow adoption assistance payments to be used for legal guardianship. To date, the House of Representatives has passed a bill with a provision to allow states to use federal funds to subsidize legal guardianship for relatives, and the Senate has introduced a bill with a similar provision. Mr. Chairman, this concludes my statement.
I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-7215 or brownke@gao.gov. Individuals making key contributions to this testimony include Kim Siegal, Theresa Lo, Deborah A. Signer, Gale Harris, and Charlie Willson.

Disconnected Youth: Federal Action Could Address Some of the Challenges Faced by Local Programs That Reconnect Youth to Education and Employment. GAO-08-313. Washington, D.C.: February 28, 2008.

African American Children in Foster Care: Additional HHS Assistance Needed to Help States Reduce the Proportion in Care. GAO-07-816. Washington, D.C.: July 11, 2007.

Child Welfare: Improving Social Service Program, Training, and Technical Assistance Information Would Help Address Long-standing Service-level and Workforce Challenges. GAO-07-75. Washington, D.C.: October 6, 2006.

Foster Youth: HHS Actions Could Improve Coordination of Services and Monitoring of States’ Independent Living Programs. GAO-05-25. Washington, D.C.: November 18, 2004.

Child and Family Services Reviews: Better Use of Data and Improved Guidance Could Enhance HHS’s Oversight of State Performance. GAO-04-333. Washington, D.C.: April 20, 2004.

Child Welfare: Enhanced Federal Oversight of Title IV-B Could Provide States Additional Information to Improve Services. GAO-03-956. Washington, D.C.: September 12, 2003.

Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 23, 2003 (reissued on August 11, 2003).

Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.

Foster Care: Kinship Care Quality and Permanency Issues. GAO-99-32. Washington, D.C.: May 6, 1999.

Foster Care: Implementation of the Multiethnic Placement Act Poses Difficult Challenges. GAO-98-204. Washington, D.C.: September 14, 1998.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A significantly greater proportion of African American children are in foster care than children of other races and ethnicities, relative to their share of the general population. Given this situation, GAO was asked to analyze (1) the major factors influencing their proportion in foster care, (2) strategies states and localities have implemented that appear promising, and (3) ways in which federal policies may have influenced the proportion of African American children in foster care. This testimony is based on a GAO report issued in July 2007 (GAO-07-816), which included a nationwide survey; a review of research and policies; state site visits; analyses of child welfare data; and interviews with researchers, HHS officials, and other experts. It includes updates where possible.

According to our survey results, key factors contributing to the proportion of African American children in foster care included a higher rate of poverty, challenges in accessing support services, racial bias and distrust, and difficulties in finding appropriate adoptive homes. Families living in poverty have greater difficulty accessing housing, mental health, and other support services needed to keep families stable and children safely at home. Bias or cultural misunderstandings and distrust between child welfare decision makers and the families they serve also contribute to children's removal from their homes into foster care. African American children also stay in foster care longer because of difficulties in recruiting adoptive parents, the lack of services for parents trying to reunify with their children, and a greater reliance on foster care provided by relatives, who may be unwilling to terminate the parental rights of the child's parent, as adoption requires, or who need the financial subsidy they receive while the child is in foster care.

Most states we surveyed reported using various strategies intended to address these issues, such as building community supports, providing cultural competency training for caseworkers, and broadening the search for relatives to care for children. Researchers and officials also stressed the importance of analyzing data on the proportion of African American children in care in order to better understand the issue and devise strategies to address it. HHS provides information and technical assistance, but states reported that they had limited capacity to analyze their own data and formulate strategies to address disproportionality.

According to our survey, states viewed some federal policies, such as those that promote adoption, as helpful for reducing the proportion of African American children in foster care. However, they also expressed concerns regarding policies that limit the use of federal funds to provide preventive services and support legal guardianship arrangements. As an alternative to adoption, subsidized guardianship is considered particularly promising for helping African American children exit from foster care.
The Agricultural Credit Act of 1987 (the 1987 Act) authorized Farmer Mac to promote the development of a secondary market for agricultural real estate and rural housing loans. As a GSE, Farmer Mac is a federally chartered, privately owned and operated special-purpose corporation. Farmer Mac is also an independent entity within FCS, which is another GSE. When Congress passed the 1987 Act, some observers stated that a Farmer Mac-sponsored nationwide secondary market would develop quickly and be widely used. Others stated that Farmer Mac would serve more as a safety valve for the agricultural sector if FCS encountered difficulties.

A secondary market is a financial market for buying and selling loans, either individually or in the form of securities backed by cash flows from groups or “pools” of loans. By authorizing Farmer Mac to promote the development of an agricultural secondary market, Congress intended to (1) increase the availability of long-term financing to creditworthy farmers and ranchers at stable interest rates and (2) provide greater liquidity in agricultural financing. Ideally, such a market would provide agricultural lenders with access to national capital markets, which, by returning cash to such lenders in exchange for the mortgages, would generate additional funds for them to lend and enhance their ability to manage credit and interest-rate risks.

Under Farmer Mac’s originating statute, the 1987 Act, it was only to certify certain agricultural lenders and other financial institutions to act as third-party “poolers,” that is, financial institutions that would buy qualified loans from other lenders or “originators,” assemble or “pool” the loans, and issue and sell securities that are backed by these pools to investors. Farmer Mac guarantees the timely payment of principal and interest to investors who purchase these mortgage-backed securities. The original statute did not permit Farmer Mac to buy and hold agricultural loans.

The 1987 Act also required either originators/poolers to maintain a cash reserve to cover at least the first 10 percent of losses arising from defaults on the pools of loans backing Farmer Mac-guaranteed securities or holders of subordinated participation interests (SPI) to absorb these losses before Farmer Mac’s guarantee could be exercised. The purpose of the reserve requirement was to minimize risks to Farmer Mac and the federal government by requiring originators, poolers, and investors to hold most of the loan’s credit risk. However, risk-based capital requirements for banks and FCS institutions required them to hold capital against the full amount of the sold loan, not just the 10 percent retained by the lender. Regulators of primary market lenders (e.g., banks and FCS institutions) viewed the retained SPI as the source of substantially all of the loan’s credit risk, and, therefore, obtaining Farmer Mac’s guarantee did not reduce the amount of capital the lender was required to hold. As a result, banks and FCS institutions, the major agricultural mortgage lenders, had a reduced incentive to sell loans into Farmer Mac-guaranteed loan pools. Further, the 1987 Act required certain diversification standards to be met: each pool was to be made up of loans secured by properties from different geographic locations that produce different agricultural commodities.
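To make the 10-percent first-loss requirement concrete, the following is a minimal sketch, with hypothetical numbers of our own choosing, of a loss waterfall consistent with the structure described above: the originator/pooler cash reserve (or SPI holders) absorbs pool losses first, and Farmer Mac's guarantee covers only losses beyond that layer. The function and variable names are ours, not Farmer Mac's.

def allocate_pool_losses(pool_principal, losses, first_loss_share=0.10):
    # The first-loss layer (cash reserve or SPI) absorbs losses up to
    # 10 percent of pool principal; the guarantee covers any excess.
    first_loss_layer = first_loss_share * pool_principal
    absorbed_by_reserve = min(losses, first_loss_layer)
    absorbed_by_guarantee = max(0.0, losses - first_loss_layer)
    return absorbed_by_reserve, absorbed_by_guarantee

# A hypothetical $50 million pool: $3 million in default losses falls entirely
# on the 10-percent ($5 million) first-loss layer, so the guarantee pays nothing.
print(allocate_pool_losses(50_000_000, 3_000_000))   # (3000000, 0.0)

# With $8 million in losses, the layer is exhausted and the guarantee
# covers the remaining $3 million.
print(allocate_pool_losses(50_000_000, 8_000_000))   # (5000000.0, 3000000.0)

Because bank and FCS regulators required capital against the full loan amount regardless of this allocation, selling loans into guaranteed pools offered lenders little capital relief, which is the disincentive described above.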
The necessity to operate through third-party poolers and establish the mandatory cash reserve or SPI increased the complexity and expense of secondary market transactions for both Farmer Mac and lenders. Under its original operating structure, Farmer Mac was unable to achieve a profit, and its prospects for survival were dim. Eight years after its creation, Farmer Mac faced possible financial failure and had been ineffective in creating a successful agricultural secondary market. Farmer Mac requested, and Congress granted, new statutory authorities in the 1996 Act to improve Farmer Mac’s ability to fulfill its statutory mission. Among other things, the revised charter (1) allowed Farmer Mac to purchase agricultural mortgage loans directly from lenders and serve as a pooler, (2) eliminated the mandatory 10-percent minimum cash reserve and SPI required with each loan pool as well as the loan diversification standards, (3) accorded Farmer Mac’s securities full “agency status” in the financial markets, and (4) relaxed and delayed the implementation of regulatory capital standards for Farmer Mac. The first three revisions made Farmer Mac’s operating structure essentially the same as that of Freddie Mac and Fannie Mae, the GSE facilitators of the secondary markets for residential mortgage loans.

Primary market lenders, secondary market entities, and investors in securities backed by cash flows from loan pools face credit, interest-rate, prepayment, management, and business risks. Farmer Mac faces credit risk, that is, the possibility of financial loss resulting from default by borrowers on farming assets that have lost value and/or other parties’ failing to meet their obligations. This risk occurs when Farmer Mac holds mortgages in portfolio and when it guarantees principal and interest payments to investors in the agricultural mortgage-backed securities (AMBS) it issues. Farmer Mac’s interest-rate risk can result from the possibility of an increase in interest rates in the national economy that is not matched by an increase in interest rates paid by borrowers to Farmer Mac for loans that are held in portfolio by Farmer Mac. Farmer Mac’s prepayment risk can result from the possibility of a decline in interest rates, which can cause borrowers to prepay their mortgages. Farmer Mac faces management risk from the possibility of financial loss resulting from a management mistake that can threaten the company’s viability. Finally, Farmer Mac faces business risk from the possibility of financial loss due to conditions within the agricultural sector that affect loan performance.

The risk characteristics of agricultural mortgage loans are different from those of conventional single-family residential mortgage loans. Agricultural mortgages are commercial loans that fund a wide variety of agricultural activities (e.g., poultry farms or orange groves), while single-family mortgages fund a fairly homogeneous asset. As a result, in the event of loan foreclosure, farm properties can be harder to appraise and more difficult to liquidate than a single-family residence. In addition, the financial and business skills of farm operators can affect the value of their collateral, since their income comes largely from the mortgaged property rather than from independent employment or investment income. As a result, assessing the risks of cash flows from agricultural loan pools can be more difficult than such an assessment for single-family residential mortgages.
To some extent, agricultural mortgage loans are more like multifamily loans than single-family loans because multifamily loans are commercial loans in which income is derived largely from rental of the mortgaged property.

Farmer Mac strives to fulfill its statutory mission mainly by purchasing agricultural mortgages from lenders. Lenders who participate in the primary market for such agricultural mortgages include federally insured depository institutions, insurance companies, and FCS institutions. Once purchased by Farmer Mac, the mortgages can be held directly in portfolio or pooled to back newly issued AMBS. Farmer Mac, in turn, can hold some AMBS in portfolio and sell some AMBS to investors in national financial markets. About $1.1 billion in total AMBS were outstanding as of year-end 1998; slightly more than half of that value was held by investors other than Farmer Mac. Farmer Mac guarantees timely payment of principal and interest to investors in its AMBS.

Farmer Mac conducts its operations through two broadly defined programs. The Farmer Mac I Program consists of agricultural and rural housing mortgage loans that do not carry federally provided primary mortgage insurance. The Farmer Mac II Program consists of agricultural mortgage loans containing primary mortgage insurance provided by the U.S. Department of Agriculture (USDA). Farmer Mac was authorized in the Food, Agriculture, Conservation, and Trade Act of 1990 (the 1990 Act) to facilitate the creation of a secondary market for USDA-guaranteed agricultural loans. Under Farmer Mac II, Farmer Mac can purchase or have others purchase the guaranteed portions of USDA loans, assemble them into pools, and hold them in portfolio or sell them as securities to investors. At year-end 1998, Farmer Mac held $306.8 million of Farmer Mac II AMBS in portfolio, and other investors held $30.1 million.

We focused our attention on the secondary market in agricultural mortgages under the Farmer Mac I Program because it is the primary program through which Farmer Mac conducts its secondary market activity. However, we included Farmer Mac II Program activity in our analysis of Farmer Mac’s future viability.

To address our objectives overall, we reviewed relevant literature, congressional testimony, Securities and Exchange Commission public filings, and Internet sites. We also held numerous discussions with Farmer Mac executives and interviewed representatives of the American Bankers Association, Independent Bankers Association of America, and the Farm Credit Council. To gain a better understanding of the agricultural mortgage market and its prospects for future growth, we met with financial and agricultural economists from USDA’s Economic Research Service. Additionally, to obtain a regulatory perspective on Farmer Mac activities, we met with officials from the Farm Credit Administration (FCA), the Director of FCA’s Office of Secondary Market Oversight, and the Department of the Treasury’s Director of GSE Policy. In our analysis of the agricultural mortgage market, we did not undertake detailed analyses of competing FCS or FHLBank System products. We also did not analyze the USDA loan guarantee programs.
To determine Farmer Mac’s risk management practices and exposure to each type of risk, we (1) obtained Farmer Mac’s written and oral responses to questions on interest-rate, prepayment, credit, business, and management risks; (2) reviewed corporate policies and standards, including Farmer Mac’s Seller/Servicer Guide (Farmer Mac guide), which specifies lender requirements for participation in Farmer Mac programs; (3) obtained data on Farmer Mac’s current financial condition and operating results, such as delinquency rates and profit margins; (4) reviewed methodologies for determining capital adequacy, pricing, sensitivity to interest-rate changes, sensitivity to economic stress, and management information systems; and (5) examined copies of external auditors’ reports and management letters. We also reviewed FCA’s March 1998 regulatory examination report and discussed the report with FCA officials.

To help determine the potential market benefits from a government-sponsored secondary market for agricultural loans, we conducted a mail survey of approved Farmer Mac sellers and nonparticipants. An approved seller is a financial institution that has applied to participate in Farmer Mac’s programs and had its application approved by Farmer Mac; a nonparticipant is a financial institution that has not been approved by Farmer Mac to participate in Farmer Mac’s programs. Using two mail questionnaires, we conducted the survey in late 1998 and early 1999 and telephoned selected nonrespondents in 1999. We obtained information on the background of the financial institutions, questioned their knowledge of and participation in Farmer Mac programs, and sought their views on Farmer Mac and the secondary agricultural mortgage market. Survey participants were chosen from lists provided by Farmer Mac. The 263 institutions (commercial banks, thrifts, mortgage bankers, trust companies, and FCS institutions) on Farmer Mac’s approved sellers list (as of Oct. 1998) and the 331 financial institutions with over $100 million in assets on Farmer Mac’s nonparticipants list (as of Oct. 1998) were sent the respective surveys. To the list of 331 nonparticipants, we added 3 large insurance companies that are agricultural mortgage lenders and were not on Farmer Mac’s list. Our survey results are not generalizable to the universe of agricultural lenders, but they are generalizable to the unique groups identified by Farmer Mac and us. We did not examine the impact of Farmer Mac on agricultural mortgage interest rates or the availability of agricultural mortgage credit. See appendixes I and II for a more detailed description of our survey methodology and survey results, respectively.

We constructed financial scenarios using various assumptions to help illustrate Farmer Mac’s ability to sustain mission viability, as described in appendix III. We defined mission viability as the ability of Farmer Mac to generate a profit from its core business of operating a secondary market in agricultural mortgages and to provide a reasonable rate of return to its equity investors. Our purpose was to construct scenarios to illustrate conditions that could affect Farmer Mac’s future viability. These scenarios do not represent forecasts of the future. We were limited by our reliance on publicly available data in presenting our scenarios.

To assess Farmer Mac’s statutory authority, we reviewed legal opinions addressing the authorities of Farmer Mac, as well as the legislative history of the Farm Credit Act of 1971 (the 1971 Act) and Farmer Mac’s mission, and we discussed the legal opinions with officials from Farmer Mac, FCA, and Treasury.
To provide a perspective on secondary market servicing guidelines and procedures, in addition to reviewing the Farmer Mac guide, we reviewed two Fannie Mae guides for servicing single-family and multifamily residential mortgages. We did not independently verify the information supplied by Farmer Mac or others. We conducted our work in Washington, D.C., between July 1998 and April 1999 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the President and Chief Executive Officer of Farmer Mac. His written comments are discussed at the end of this letter and are reprinted in appendix IV.

Farmer Mac has developed new programs and products in an attempt to provide an alternative source of funding for agricultural lenders. Farmer Mac also has used its new charter authorities to streamline the process for buying loans, including some standardization, and has developed a program to market its products to agricultural lenders. The market's reception of Farmer Mac's products thus far has been limited, and Farmer Mac's loan purchase volumes have remained small in relation to the primary market. For example, the share of agricultural mortgages sold to Farmer Mac has shown some growth since the 1996 restructuring, but its market share represented only about 1.2 percent of the agricultural mortgage debt outstanding as of the third quarter of 1998. In addition to its loan purchase programs, Farmer Mac initiated its AgVantage Program in 1998, through which Farmer Mac in effect provides loans to agricultural lenders, with the lenders using agricultural mortgages as collateral. Activity under this program has been of relatively small volume to date. In an attempt to facilitate an efficient secondary market, Farmer Mac has streamlined the process for buying loans and standardized some aspects of a secondary market transaction, including underwriting guidelines, but it believes that standardized loan documents, such as those used in the secondary market for residential mortgages, would be cost prohibitive. To mitigate its exposure to risks, Farmer Mac uses risk management techniques to help it conduct secondary market activities in a safe and sound manner.

In its effort to stimulate greater secondary market activity, since 1996 Farmer Mac has developed several new programs and loan products that were designed to increase participation by traditional (e.g., rural banks) as well as nontraditional (e.g., mortgage banks) agricultural mortgage lenders. Through workshops and various marketing initiatives, Farmer Mac has increased the number and types of sellers approved to sell loans to Farmer Mac, established new programs, and expanded its product line. Farmer Mac expected these initiatives to enhance market reception to Farmer Mac, thereby increasing the volume of agricultural mortgages sold in the secondary market.

Secondary market activity is likely to be greater when the secondary market creates products that help lenders and investors manage various risks at low cost. For example, interest-rate risk associated with long-term, fixed-rate loans can often be managed at lower cost by secondary market investors (e.g., AMBS investors, including Farmer Mac) with access to long-term bond financing than by primary market financial institutions that rely on deposit bases. Secondary market entities, such as Farmer Mac, can also use their nationwide operations to obtain geographic diversification of their loan purchases to help manage credit risk.
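The value of geographic diversification can be illustrated with a small simulation. The sketch below is purely illustrative and is not Farmer Mac's methodology; the shock probability, default rates, and pool sizes are hypothetical values chosen only to show the effect.

```python
import random
import statistics

def simulate_pool_loss(regions, loans_per_region, rng):
    """One year's loss rate for a pool spread across `regions` regions.
    Each region draws a single good-year/bad-year shock, and every loan
    in that region shares it, so concentration raises loss volatility."""
    defaults = 0
    for _ in range(regions):
        bad_year = rng.random() < 0.20           # hypothetical shock probability
        p_default = 0.08 if bad_year else 0.01   # hypothetical default rates
        defaults += sum(rng.random() < p_default
                        for _ in range(loans_per_region))
    return defaults / (regions * loans_per_region)

rng = random.Random(7)
concentrated = [simulate_pool_loss(1, 500, rng) for _ in range(2000)]
diversified = [simulate_pool_loss(5, 100, rng) for _ in range(2000)]

# Expected losses are similar, but the diversified pool's year-to-year
# variability is much lower.
print(f"mean loss rate: {statistics.mean(concentrated):.3f} vs "
      f"{statistics.mean(diversified):.3f}")
print(f"std deviation:  {statistics.stdev(concentrated):.3f} vs "
      f"{statistics.stdev(diversified):.3f}")
```

With the same expected default rate per loan, spreading a pool across several regions with independent shocks leaves the mean loss roughly unchanged while substantially lowering its year-to-year variability.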
While standards for underwriting, appraisal, and loan servicing are used to help manage credit risk, secondary market entities have relatively less ability than lenders to rely on borrower relationships to assess credit risk. Thus, even among loans that meet their underwriting standards, secondary market entities run a greater risk than originating lenders of purchasing loans from less creditworthy borrowers.

By establishing a training program, Farmer Mac sought to educate and attract lenders by increasing their interest in and improving their understanding of Farmer Mac and the secondary market for agricultural mortgages. In 1997, over 800 lenders attended the more than 20 seller/servicer workshops that Farmer Mac conducted across the nation to inform lenders of Farmer Mac's new authorities and programs and of the benefits of participating in the agricultural secondary market. Marketing initiatives have resulted in Farmer Mac's approving several nationally known, large commercial banks and mortgage banks as sellers from which it could buy loans, which has increased the potential for lender diversity. The initiatives also expanded the number of outlets through which Farmer Mac products can be marketed to customers. Lenders approved to submit loans for possible sale to the Farmer Mac I Program totaled 286 as of December 1998. At its peak under the old charter, Farmer Mac had nine approved sellers (at that time known as poolers).

The mechanism that Farmer Mac established to purchase mortgages directly from lenders for cash is called its Cash Window Program; a later extension of this mechanism, the AgVantage Program, in effect provides loans to agricultural lenders. The Cash Window Program grew out of the 1996 legislation that granted Farmer Mac greater flexibility in its business dealings with agricultural lenders. The 1996 Act authorized Farmer Mac to purchase loans directly from originating lenders. Before this act, lenders could only participate in the secondary market by selling agricultural real estate loans to qualified Farmer Mac poolers. Additionally, the Cash Window Program was designed to (1) provide lenders with new product terms and competitive interest rates for agricultural real estate loans and (2) provide a responsive process for better servicing the credit needs of lenders' borrowers. The Cash Window Program began in July 1996, and by December 1998, $732 million in loans had been sold to Farmer Mac.

In late 1997, Farmer Mac introduced its Part-Time Farm Program, which covers farms with substantial off-farm income. This program offers a fixed-rate, 30-year home mortgage product for farms on at least five acres of land or farms generating at least $5,000 in gross farm sales from agricultural crops or livestock. The value of the home must represent at least 30 percent of the total appraised value of the property. Farmer Mac sought to facilitate the use of this program by making the origination and servicing requirements simple and using familiar documents and procedures. For example, standard conforming residential secondary market origination forms are used in this program.

In February 1998, as an expansion of the Cash Window Program, Farmer Mac established the AgVantage Program, which allows Farmer Mac to fund eligible lenders by providing cash advances. The primary difference between the AgVantage Program and existing Farmer Mac programs is that, in this program, the lender does not sell its loans to Farmer Mac but instead issues a bond backed by eligible loans and other collateral.
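The AgVantage structure just described can be sketched in a few lines. The function name and the $60 million collateral figure below are hypothetical; the 120- to 150-percent over-collateralization range, discussed later in this report, depends on the issuer's financial status.

```python
def agvantage_bond_capacity(eligible_collateral, collateral_ratio):
    """Maximum bond principal an issuer could raise against its eligible
    collateral, given a required over-collateralization ratio (described
    later in this report as 120 to 150 percent, depending on the
    issuer's financial status)."""
    return eligible_collateral / collateral_ratio

# A lender with $60 million of eligible loans and securities (hypothetical):
print(agvantage_bond_capacity(60_000_000, 1.20))  # 50,000,000.0 for a stronger issuer
print(agvantage_bond_capacity(60_000_000, 1.50))  # 40,000,000.0 for a weaker issuer
```

A higher required ratio for financially weaker issuers reduces the advance obtainable from a given pool of collateral.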
To facilitate access to this program, Farmer Mac has provided standard documentation, including a standard form for the bond. Farmer Mac guarantees the bond and purchases it. This transaction allows the lender to keep the loans and associated credit risk while increasing its debt and liquidity positions. Farmer Mac is to receive low-risk income from the bond and guarantee fees. This program was designed to meet the demand for long-term loans by being attractive to lenders with excess collateral but inadequate liquidity. From the program's February 1998 inception through December 1998, 16 AgVantage bond transactions had been consummated with 10 AgVantage issuers, resulting in Farmer Mac guarantees for $143.6 million of AgVantage bonds. Due to the short-term nature of the obligations that had been issued, only $10.8 million of the $143.6 million remained outstanding at year-end 1998.

In addition to the Cash Window, Part-Time Farm, and AgVantage Programs, Farmer Mac can purchase loans through a Swap Program (introduced in early 1997). A swap is a transaction in which lenders exchange one or more eligible loans for Farmer Mac-guaranteed securities, rather than cash. Unlike Cash Window transactions, which generally involve loans with Farmer Mac-specified terms, Farmer Mac is to negotiate these swap transactions with the lender and is to acquire loans with payment, maturity, and interest-rate characteristics that Farmer Mac would not purchase through its Cash Window Program. In January 1999, Farmer Mac reported that it had committed to enter into a $408 million long-term, standby purchase commitment in agricultural mortgages that operates similarly to a swap. As of January 1999, Farmer Mac had consummated four Farmer Mac I swap transactions totaling approximately $493 million (including the previously mentioned $408 million transaction).

In 1997 and early 1998, to complement its existing product lines, Farmer Mac developed new loan products that included a refined 1-year adjustable rate mortgage (ARM) and a new 3-year ARM with flexible borrower prepayment terms. These two loan products can be converted to a long-term, fixed-rate loan after a certain time period has elapsed. Farmer Mac also developed a 10-year, fixed-rate mortgage. See table 1 for a list of Farmer Mac's programs, their descriptions and features, and the various loan products offered.

The increased authority granted to Farmer Mac by the 1996 Act, which allowed its operating structure to parallel that of Fannie Mae and Freddie Mac, has provided it with flexibility to develop programs. However, this expanded operational authority does not guarantee program success or the achievement of expected outcomes. Although Farmer Mac has expanded its seller base and provided lenders with streamlined procedures, including some standardization, to access the secondary market, Farmer Mac acknowledges that it cannot be certain whether the new products it offers will generate a sufficient volume of loans to allow Farmer Mac to continuously function as a profitable corporation.

One key factor that could hinder lenders' use of Farmer Mac programs would be their lack of knowledge about these programs. Our survey results showed that familiarity with Farmer Mac programs varied widely, with the majority of the nonparticipating respondents being unfamiliar with them: 52 percent and 62 percent were unfamiliar with the Farmer Mac I and Farmer Mac II Cash Window Programs, respectively.
With other programs, familiarity was even lower: 70 percent of the respondents were unfamiliar with the AgVantage Program, and over 87 percent were unfamiliar with the Swap Programs under Farmer Mac I and Farmer Mac II. Even among approved sellers, familiarity with some Farmer Mac programs was low. For example, 31 percent of the respondents were unfamiliar with the AgVantage Program; 75 percent and 78 percent were unfamiliar with the Swap Programs under Farmer Mac I and Farmer Mac II, respectively.

The AgVantage Program's volume has been relatively small to date, but Farmer Mac officials nevertheless consider the program to be beneficial because it encourages lenders to do business with Farmer Mac and is considered to be competitive with advances offered by the FHLBank System. Furthermore, Farmer Mac officials believe that the AgVantage Program is more advantageous to lenders than FHLBank System advances because AgVantage loans can be sold to Farmer Mac at a later time without any additional paperwork requirements. On the basis of our survey, 23 percent of the approved sellers surveyed said they are "likely" or "very likely" to participate in the AgVantage Program in the next 3 years; among nonparticipants, this proportion was 12 percent. The extent to which these lender inclinations result in increased secondary market activity for Farmer Mac has yet to be determined.

Standardization, such as the development of standardized loan documents, can help streamline the process for buying loans. Thus, standardization has the potential to help lower transaction costs and increase the efficiency of the secondary market. Farmer Mac has standardized some aspects of the secondary market transaction by requiring its agricultural mortgage lenders to make representations and warranties that the loans they are selling meet Farmer Mac underwriting standards. Farmer Mac officials told us that standardized loan documents have not been developed because of the prohibitive cost of standardization, given that state laws governing agricultural mortgage loans and agricultural lending practices vary from state to state. Farmer Mac has established lender requirements in its Farmer Mac guide that provide various levels of standardization for different lender practices. Although its statute is silent on loan document standardization, Farmer Mac has, to some extent, taken steps to standardize loan documentation for the agricultural secondary market.

Standardization of loan origination documents is common in the secondary market for residential mortgages, where Fannie Mae and Freddie Mac have increased efficiency through greater standardization of mortgage products and processes. Standardized documents can reduce the cost and effort necessary to evaluate the quality of asset pools because inspection or review of each lending arrangement can be replaced with verification that predetermined industrywide standards for loan origination have been followed. Farmer Mac officials stated that early in Farmer Mac's existence, it sought to standardize loan documents but was not able to achieve a level of standardization approaching that achieved by Fannie Mae and Freddie Mac. These officials attributed this inability to agricultural real estate laws that differ greatly from state to state; residential real estate laws are more uniform.
These officials also stated that, with the diversity of agricultural lender practices and the heterogeneous characteristics of agricultural loans, developing nationwide documents would be difficult and costly. Farmer Mac generally allows agricultural lenders the option of using Farmer Mac forms, their own forms, or an off-the-shelf commercial loan package. According to Farmer Mac's Chief Executive Officer and General Counsel, the vendor of the off-the-shelf package is to guarantee Farmer Mac that each loan package meets the legal requirements of all states where the loans were originated. These officials also stated that Farmer Mac forms do not meet the legal requirements of all states. Farmer Mac provides loan origination forms on a disk for use by lenders, but it only requires that participants use the loan summary and environmental survey forms. As long as other forms used by the lenders present information in substantially the same format as Farmer Mac forms, the use of Farmer Mac's forms is not required. These Farmer Mac officials noted that small lenders are more apt than large lenders to use the off-the-shelf commercial product because small lenders often lack in-house legal departments.

The Chief Executive Officer and General Counsel of Farmer Mac stated that regardless of whether the loan documents used are the lender's own or those of the commercial vendor, the documents must include legally enforceable standard Farmer Mac representations, warranties, and provisions to ensure that the loans conform to Farmer Mac's loan underwriting and appraisal standards. These officials also stated that since the initial use of representations and warranties in 1991, they have encountered no problems enforcing the terms of the loan agreements. The Farmer Mac guide states that Farmer Mac is to verify, via examination of loan files, that the documents submitted by lenders conform to Farmer Mac's underwriting standards and other loan origination requirements. Also, as stated in the Farmer Mac guide, Farmer Mac's verification of loan files does not relieve lenders of their obligations under the representations and warranties provided to Farmer Mac.

For its part-time farm loans, Farmer Mac uses the standard, conforming residential mortgage loan application and documentation forms used in the residential mortgage secondary market. Also, to improve the consistency of information included in the closing/settlement statements for all loans sold to Farmer Mac, lenders are required to use the standard Department of Housing and Urban Development closing/settlement statement, which is commonly known as the HUD-1. This document provides an itemized listing of the funds that are payable at closing, such as loan fees and real estate commissions.

When submitting a loan for sale to Farmer Mac, sellers are to follow these general steps:
1. Meet with the customer and prepare a preliminary approval loan package.
2. Submit the package to Farmer Mac for preliminary approval.
3. Following approval and lock-in of the interest rate, complete the appraisal and title work.
4. Close and fund the loan.
5. Deliver executed legal documents to Farmer Mac for final approval.
6. After being notified of final approval, submit a Notice of Purchase Request at least 2 days before the desired purchase date. (Farmer Mac wires purchase funds on the basis of the seller's instructions.)
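The sequence above can be restated as a simple checklist. This is an illustrative sketch only: the step names paraphrase the guide as summarized above, and treating the 2-day notice requirement as calendar days is our assumption.

```python
from datetime import date

# The six submission steps, in order, as summarized above.
SUBMISSION_STEPS = (
    "prepare preliminary approval loan package",
    "submit package to Farmer Mac for preliminary approval",
    "complete appraisal and title work after rate lock-in",
    "close and fund the loan",
    "deliver executed legal documents for final approval",
    "submit Notice of Purchase Request",
)

def purchase_request_timely(request_date: date, desired_purchase_date: date) -> bool:
    """The Notice of Purchase Request must be submitted at least 2 days
    before the desired purchase date (treated as calendar days here)."""
    return (desired_purchase_date - request_date).days >= 2

print(purchase_request_timely(date(1999, 4, 1), date(1999, 4, 3)))  # True
print(purchase_request_timely(date(1999, 4, 2), date(1999, 4, 3)))  # False
```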
If the loan documents are properly completed and submitted in accordance with Farmer Mac guidelines, the entire loan process from submission to completion is expected to take about 8 business days. Farmer Mac officials expressed no concerns with their current approach of using lender or commercial off-the-shelf loan documents. They felt that the legal protections afforded by the inclusion of the Farmer Mac standard representations, warranties, and provisions were more important than the standardization of the forms. They also noted that the use of lender documents better supports the Swap Program, since forms do not have to be redone. Also, the officials said that the current costs to Farmer Mac of achieving further standardization of loan documents exceed the benefits.

The Farmer Mac guide specifies requirements for lender participation in Farmer Mac programs and requires participating lenders to follow certain standardized practices. For example, Farmer Mac requires inclusion of standardized representations and warranties (e.g., that the seller is authorized to do business and that loan information submitted to Farmer Mac is true and correct). In addition, the Farmer Mac guide specifies requirements for underwriting, collection of mortgage payments, administration of escrow accounts, and initiation of foreclosure proceedings. The guide includes nine underwriting standards, including those pertaining to obtaining a credit report and financial statements, the borrower's debt-to-asset ratio, and the loan-to-value (LTV) ratio for the financed agricultural property. The Farmer Mac guide provides some underwriting flexibility to recognize differences in the financial reporting of agricultural borrowers. It also provides flexibility for special loan-servicing practices associated with specific agricultural activities, such as livestock operations and properties with irrigation systems. The Farmer Mac guide includes sections on loan-making, -selling, and -servicing as well as seller requirements for participation in Farmer Mac programs. Individual chapters of the guide include credit and appraisal standards for various programs and guidelines for managing loan delinquencies.

To provide perspective on the Farmer Mac guide, we compared it with the two previously mentioned Fannie Mae guides for servicing residential mortgages: the single-family guide and the multifamily guide. The Fannie Mae guides differ from the Farmer Mac guide because of differences in the types of industries and loan programs served. For example, single-family residential mortgages generally are not commercial loans (i.e., most finance owner-occupied housing), while multifamily residential and agricultural mortgages are commercial mortgages. Fannie Mae's multifamily guide includes separate sections for its delegated underwriting program and negotiated transactions. Under its delegated underwriting program, Fannie Mae delegates its authority to underwrite and determine the creditworthiness of a loan to the originating lender and agrees to purchase the loan without prior review. In return for this autonomy, the lender is to assume a percentage of the risk of default on the loan. In contrast, Farmer Mac generally takes on the full credit risk of backing AMBS. This practice conforms most closely to Fannie Mae's negotiated transactions program.
In this program, Fannie Mae, similar to Farmer Mac, provides some underwriting flexibility to recognize differences among multifamily properties when specifying lender obligations in transaction documents. The Fannie Mae and Farmer Mac guides share some similarities in the topics that they cover. For example, Fannie Mae's single-family guide and the Farmer Mac guide have sections on lender relationships, mortgage and property insurance, special mortgage programs, delinquent mortgages, and mortgage foreclosures. However, the Fannie Mae guides address each servicer requirement and guideline in greater specificity than the Farmer Mac guide does. Farmer Mac officials told us that they continuously work on further developing Farmer Mac's guide as the corporation grows.

Farmer Mac is expected to fulfill its public policy purpose and earn a profit by taking prudent risks. Like any other private financial firm, Farmer Mac faces risks from changes in market interest rates; loan defaults and other credit problems; external business factors, such as natural disasters or industry competition; and poor management decisions that may adversely affect its profitability. Farmer Mac uses risk management procedures in its operations to help ensure that its secondary market operations are conducted in a safe and sound manner. Farmer Mac has mechanisms in place to measure, monitor, and control its exposure to these risks. On the basis of (1) unverified information provided by senior officers of Farmer Mac and its federal regulator and (2) reports and analyses done by third parties, such as external auditors and consultants, it appears that Farmer Mac generally manages its operations in ways that are consistent with industry risk management principles. For example, Farmer Mac strives to limit interest-rate risk by issuing AMBS in the capital markets and attempts to control losses from other risks, such as credit risk, through the monitoring of seller/servicer financial condition and servicing performance.

Principles of risk management that have been developed by various financial industry and regulatory bodies stress the importance of board of directors and management involvement in managing the risks undertaken by financial institutions. Under these principles, an organization's risk management strategy is to be based on a framework of responsibilities and functions, driven by the board of directors down to operating levels, that covers all aspects of risk. The basis for this principle is the belief that unless the board of directors is fully integrated into the risk management approach, the organization's managers and employees will not be fully committed to risk management. To emphasize the importance of risk management, these principles state that a risk management group made up of senior managers is to be created. Farmer Mac's risk management function is overseen by its Asset Liability Committee, which is made up of senior managers, and by its Board of Directors' Finance Committee.

Like other portfolio lenders, Farmer Mac is exposed to interest-rate risk (the possibility of an increase in interest rates in the national economy that is not matched by an increase in interest rates paid by borrowers whose loans are held in portfolio by Farmer Mac). Farmer Mac employs several techniques to control interest-rate risk: it measures its exposure, and its management and Board of Directors are to ensure compliance with its interest-rate risk policy limits.
Farmer Mac also purchases financial instruments to help manage part of its exposure to interest-rate risk. Constant monitoring and adjustment of the control techniques are necessary to avoid increases in Farmer Mac's exposure to interest-rate risk, which changes over time as the economy and its portfolio change. Farmer Mac is exposed to interest-rate risk on its portfolio of guaranteed securities and other investments and on loans purchased through the Cash Window Program.

Measurement of interest-rate risk. Farmer Mac employs two techniques to measure interest-rate risk: duration gaps and market value of equity sensitivity. Duration measures the average economic life of a whole portfolio, rather than the time to final payment for each asset or liability. The difference between a firm's asset and liability durations is called its duration gap. The duration gap measures the overall interest-rate risk exposure of Farmer Mac; the larger the gap in absolute value, the greater Farmer Mac's exposure to interest-rate risk. For example, if the average economic life of Farmer Mac's assets is 1.5 years greater than the average economic life of its liabilities, then it has a duration gap of 1.5 years. Should interest rates rise, Farmer Mac's net interest income would fall because interest expenses would rise sooner than interest income. Farmer Mac tries to manage interest-rate risk by managing its portfolio to keep the duration gap within a certain parameter.

Another technique that Farmer Mac uses in measuring interest-rate risk is to estimate the sensitivity of its market value net worth to various changes in interest rates. Market value net worth provides a measure of Farmer Mac's ability to absorb losses. Financial firms report their income statements and balance sheets according to generally accepted accounting principles (GAAP). GAAP relies primarily on the historical (book) value of financial assets and liabilities, rather than on their current market value. The market value of such assets and liabilities is affected by current interest rates, but it can also change if the likelihood of prepayment or repayment changes. The market value of assets minus the market value of liabilities provides the market value of net worth. For example, Farmer Mac carries a 10-year, 8-percent loan at the amount of unpaid principal over the life of the loan. If interest rates decrease, the market value of the loan increases because the loan earns a higher yield than the yield on a new loan. Likewise, the market value declines if interest rates rise because the yield would be greater on a new loan. One method of monitoring a firm's exposure to interest-rate risk is to regularly determine the market value of the firm's assets and liabilities (marking-to-market) and project how market value would change for assumed changes in interest rates.

Management of interest-rate risk. Once its interest-rate risk exposure is measured, Farmer Mac managers can change Farmer Mac's exposure through various actions that lengthen or shorten the expected maturity of assets and liabilities so that payment streams on assets and liabilities behave similarly. The managers may issue liabilities with variable maturity terms (callable debt allows Farmer Mac to repay its bonds after a specified time frame, a useful option if interest rates were to decline) or levy prepayment penalties on borrowers who prepay their mortgages when interest rates fall. The possibility that borrowers will prepay when rates fall is known as prepayment risk.
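Before turning to prepayment penalties, the two measurement techniques above can be combined in a small numerical sketch. The balance sheet amounts and durations below are hypothetical, and the formulas are the standard first-order duration approximations rather than Farmer Mac's own models.

```python
def duration_gap(asset_duration, liability_duration, assets, liabilities):
    """Leverage-adjusted duration gap, in years. A positive gap means the
    market value of equity falls when interest rates rise."""
    return asset_duration - liability_duration * (liabilities / assets)

def equity_change(assets, gap, rate_shock):
    """First-order (modified-duration) approximation of the change in the
    market value of equity for a parallel shift in rates."""
    return -gap * assets * rate_shock

assets, liabilities = 2_000_000_000, 1_850_000_000   # hypothetical balance sheet
gap = duration_gap(3.0, 1.5, assets, liabilities)    # 3.0 - 1.5 * 0.925 = 1.61 years
print(f"duration gap: {gap:.2f} years")
print(f"equity change for a +100 basis point shock: "
      f"${equity_change(assets, gap, 0.01):,.0f}")   # about -$32 million
```

In this hypothetical case, a 100 basis point rise in rates would reduce the market value of equity by roughly $32 million, which is the kind of mismatch that duration gap and market value sensitivity reports are designed to flag.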
Prepayment penalties allow Farmer Mac to offer competitive interest rates to farmers and ranchers and entice investors to accept lower rates of return on their investment in Farmer Mac-guaranteed securities. However, as indicated by agricultural lenders' responses to our survey, prepayment penalties reduce the competitive attractiveness of Farmer Mac products as compared with agricultural loan products offered by FCS and other agricultural lenders without prepayment penalties. Management of interest-rate risk is important at Farmer Mac because of its investment portfolio and pipeline operations (loans submitted to Farmer Mac for approval and loans approved but not yet committed for purchase, i.e., with a locked-in interest rate). Farmer Mac controls interest-rate risk associated with portfolio lending by striving to closely match the interest-rate sensitivity of its assets and liabilities and by requiring a yield maintenance provision for loans that are paid off earlier than their scheduled payoff date. In addition, interest-rate risk can be avoided by issuing and selling AMBS, thereby shifting the interest-rate risk to investors. In its role as financial guarantor of AMBS, Farmer Mac does not directly undertake interest-rate risk. Also, through the use of sophisticated hedging techniques, such as futures contracts, Farmer Mac attempts to align the duration of its assets and liabilities, thereby minimizing interest-rate risk. Farmer Mac monitors interest-rate risk exposure through duration gap and market value equity sensitivity reports that identify interest-rate mismatches. These two reports are provided to Farmer Mac's Board of Directors on a regular basis.

Credit risk is the possibility of financial loss resulting from borrowers' defaulting on loans backed by farming assets that have lost value and/or from other parties' failing to meet their obligations. Credit risk is inherent in the daily operations of all financial firms, including Farmer Mac. Like those of other financial firms, Farmer Mac's underwriting standards represent a major tool in limiting credit risk. Farmer Mac uses several techniques to measure and manage its credit risk exposure.

Measures of credit risk. Farmer Mac uses the following two basic measures of credit risk: (1) the volume of loans or bonds that are not performing according to the contractual agreement and (2) the dollar losses to Farmer Mac resulting from such nonperforming loans or bonds. Typically, when a borrower fails to make a scheduled payment, the loan is termed delinquent. Delinquency rates are an early indicator of credit problems. After a period of continuing delinquency, the loan servicer or Farmer Mac may act to recover the loan principal by foreclosing on the property and filing a claim with any party that insured or guaranteed the loan. At the time of foreclosure, the loan is said to have defaulted. Generally, only a small fraction of delinquent loans default. Farmer Mac said it monitors delinquency rates on a monthly basis, and its delinquency rates over the last 2 years have generally been less than 1 percent. The financial losses from defaults include any principal of the loan or bond not repaid, interest not paid, and expenses to foreclose or restructure, adjusted by recoveries from collateral sales and insurance. Defaults and loss rates can be predicted by Farmer Mac when it has historical default and loss data for similar types of loans in various economic circumstances.
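The loss accounting just described can be written down directly. The functions below restate the definitions in the preceding paragraph; the pool size, default rate, and loss severity in the example are hypothetical.

```python
def default_loss(principal_unpaid, interest_unpaid, workout_expenses,
                 collateral_recovery, insurance_recovery):
    """Net loss on a defaulted loan: unpaid principal and interest plus
    foreclosure/restructuring expenses, less recoveries from collateral
    sales and insurance."""
    return (principal_unpaid + interest_unpaid + workout_expenses
            - collateral_recovery - insurance_recovery)

def expected_pool_loss(pool_principal, default_rate, loss_severity):
    """Expected dollar loss on a pool: the share of principal expected to
    default times the share of defaulted principal not recovered."""
    return pool_principal * default_rate * loss_severity

# Hypothetical defaulted loan: $400,000 principal, $25,000 interest,
# $30,000 of workout expenses, and a $330,000 collateral recovery:
print(default_loss(400_000, 25_000, 30_000, 330_000, 0))  # 125000

# Hypothetical $50 million pool, 1 percent default rate, 20 percent severity:
print(expected_pool_loss(50_000_000, 0.01, 0.20))  # 100000.0
```

An estimate of this sort, scaled by historical experience, is what determines whether a given guarantee fee generates sufficient income to cover losses and maintain capital.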
With new types of products, defaults and losses are difficult to predict accurately, and product performance must be monitored carefully to control credit risk. Farmer Mac has only a limited history of loan performance data, so it uses the historical loan loss data of an FCS institution to construct hypothetical loan pools and estimate losses on loan pools. The result is used to determine the level of guarantee fees that Farmer Mac needs for a pool of loans to generate sufficient income to provide an adequate return and maintain a minimum level of capital.

Methods to manage credit risk. Farmer Mac manages credit risk by trying to control the number of defaults and minimize the losses that result from defaults. Farmer Mac controls defaults through credit underwriting, appraisal, and geographic and commodity diversification standards that provide quality control over the credit risks it takes and help it to prevent defaults. To minimize losses from any defaults that do occur, Farmer Mac uses techniques called credit enhancements (e.g., USDA guarantees or collateral requirements) that should allow Farmer Mac to recover portions of its potential losses from collateral or from third parties, such as lenders, loan insurers, or loan guarantors.

Underwriting standards. Farmer Mac has underwriting standards to determine which mortgages it will buy and then either hold as investments or place into mortgage pools. Underwriting is the process of identifying the potential risks of loss associated with financial activities to determine loan eligibility; underwriting also aids in the pricing of such risks. Underwriting is an integral part of business and financial transactions that occur daily throughout the private and public sectors of the economy and involve the transfer and pricing of risk. Underwriting standards provide guidelines that are used to (1) limit the type and amount of risk of loss permitted in a financial portfolio and (2) establish methods to control such risks. Farmer Mac's underwriting standards are discussed in appendix V.

Before Farmer Mac purchases a loan or bond or guarantees a security, certain underwriting standards are to be met. Underwriting standards cover numerous borrower and property characteristics that help Farmer Mac evaluate the likelihood of defaults and the severity of related losses. For example, as stated in the Farmer Mac guide, Farmer Mac has underwriting standards that indicate (1) whether a borrower has sufficient income to make the scheduled payments and a credit history suggesting that the borrower has met past obligations in an acceptable manner and (2) the maximum LTV ratio, which reflects the borrower's equity (down payment) in the property. Experience has shown that borrowers with low amounts of equity in the property, and thus high LTV ratios, are more likely to default than borrowers with high amounts of equity. Farmer Mac has also established appraisal standards to estimate the value of the property serving as collateral for the mortgage and geographic and commodity diversification standards to mitigate its exposure to any particular agricultural region or commodity. Because Farmer Mac does not make loans directly, standards are also used to qualify other parties to participate in its credit activities. For example, Farmer Mac has established standards for lenders, Central Servicers, and Contract Underwriters.
Such standards include measures of financial strength, past performance indicators, and management quality. Lenders, Central Servicers, and Contract Underwriters expose Farmer Mac to risks of default to the extent that they fail to follow standards adequately when making loans or fail to collect payments diligently. Farmer Mac has also established audit and quality control procedures to monitor the performance of lenders, Central Servicers, and Contract Underwriters. Farmer Mac also sets standards for firms with which it shares financial risk. For example, when Farmer Mac enters a transaction to exchange cash flows as a means to limit its interest-rate risk, there is credit risk that the other party may fail to meet its obligation (i.e., counterparty risk). To mitigate such risk, Farmer Mac sets minimum standards of financial strength for such parties.

Farmer Mac said it contracts out certain functions to take advantage of the experience and efficiency of outside resources. Two key functions that are contracted out are Farmer Mac's loan-servicing and loan-underwriting functions. As discussed below, Farmer Mac is to mitigate the risks of contracting by performing annual on-site inspections of the third parties' operations for compliance with the terms of their agreements.

Farmer Mac divides loan servicing into two functions, the Central Servicer and the Field Servicer. The Central Servicer has entered into a contract with Farmer Mac to provide general servicing for certain Farmer Mac I loans and is responsible for directing the Field Servicer in the performance of such servicer's duties. The duties of the Field Servicer include maintaining borrower relationships; servicing the loans; annually inspecting the mortgaged property to detect any adverse trend in the property's condition and preparing a related report; and monitoring for current hazard insurance policies and tax and assessment payments. The Field Servicer is also to assist the Central Servicer in resolving delinquent loans, for which the Central Servicer has primary responsibility. Farmer Mac annually performs an on-site review of the Central Servicer to make certain that it is in compliance with the terms of the contract agreement. In addition, the Central Servicer is required to have its independent public accountants review its servicing operations for compliance with Farmer Mac requirements.

Contract Underwriters are entities that have entered into contracts with Farmer Mac to underwrite loans in accordance with Farmer Mac's underwriting and appraisal standards. Contract Underwriters are required to review appraisals to ensure compliance with the requirements set forth in the Farmer Mac guide. Farmer Mac annually performs on-site due diligence of the Contract Underwriters and checks for compliance with Farmer Mac requirements.

Farmer Mac made modifications to address the increased credit risk it has borne since passage of the 1996 Act. Before the changes in Farmer Mac's operating structure authorized by the 1996 Act, Farmer Mac was responsible for a loan's credit loss only in excess of 10 percent of outstanding loan principal. The 10-percent cash reserve or subordinated participation interest (SPI) required for every loan pool formed and securitized covered the first 10 percent in losses.
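The change in Farmer Mac's loss position can be made concrete with a short sketch. The pool and loss amounts are hypothetical; the 10-percent cushion is the pre-1996 reserve/SPI requirement described above.

```python
def farmer_mac_loss_pre_1996(pool_principal, credit_losses):
    """Before the 1996 Act, a 10-percent cash reserve or SPI absorbed the
    first losses; Farmer Mac bore only losses in excess of 10 percent of
    outstanding loan principal."""
    cushion = 0.10 * pool_principal
    return max(0.0, credit_losses - cushion)

def farmer_mac_loss_post_1996(pool_principal, credit_losses):
    """After the 1996 Act, pools may be securitized without a reserve or
    SPI, leaving Farmer Mac in a first loss position."""
    return credit_losses

pool, losses = 20_000_000, 1_500_000   # hypothetical pool and lifetime losses
print(farmer_mac_loss_pre_1996(pool, losses))   # 0.0 (the cushion absorbs all)
print(farmer_mac_loss_post_1996(pool, losses))  # 1500000 (borne by Farmer Mac)
```

The same $1.5 million of lifetime losses that a reserve or SPI would have fully absorbed before the 1996 Act now falls on Farmer Mac first, which is why the corporation adopted the mitigation steps described below.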
As a result of its new legislative authority granted in 1996 to purchase agricultural mortgage loans directly from lenders and to issue and guarantee 100 percent of the securities backed by such loans without a lender cash reserve or SPI requirement, Farmer Mac is now subject to a first loss position. To mitigate its increased credit risk position, Farmer Mac took the following steps:

- Farmer Mac lowered the maximum LTV ratio in its underwriting standards for a qualified loan from 75 percent to 70 percent for loans up to $2.3 million. This change requires borrowers to increase their down payment or risk sharing in the loan, thereby decreasing the chance that borrowers will default because of their larger equity stake. The LTV ratio is important in determining the probability of default and the magnitude of loss.

- Farmer Mac increased the annual pool fee rate it normally charges lenders for providing loan pool guarantees from 25 to 50 basis points (the maximum allowed by statute) of the initial principal loan amount. A portion of this fee is required by law to be set aside by Farmer Mac in a segregated account as a reserve against losses arising from its guarantee activities. Among other things, full recourse must be taken against such reserve before Farmer Mac may be authorized to draw upon its $1.5 billion line of credit with the Department of the Treasury to satisfy its guarantee obligations.

- Farmer Mac established new loan loss reserves for Farmer Mac I loans securitized after 1996 (i.e., AMBS). Loan loss reserves represent the estimated amount necessary to cover anticipated credit losses in the loan portfolio. Farmer Mac I loans securitized before 1996 had to be supported by the 10-percent cash reserve or SPI requirement.

To mitigate the credit risk from new products, Farmer Mac requires the following:

- AgVantage bonds, which are general obligations of the issuer, are to be continuously over-collateralized by eligible collateral in an amount ranging from 120 percent to 150 percent of the bonds' outstanding principal amount, depending on the financial status of the borrower. Eligible collateral includes qualified loans, cash, U.S. Treasury securities, or securities guaranteed by an agency or instrumentality of the federal government.

- The eligibility of part-time farm loans, which are generally residential loans, is to be determined on the basis of lenders' compliance with the underwriting standards used for conforming residential mortgages. Part-time farm loans are underwritten to conforming residential ratios: a 28-percent inside ratio (monthly housing expense to gross monthly income) and a 36-percent outside ratio (total monthly debt expense to gross monthly income). Income may come from farming or nonfarming sources.

Farmer Mac currently uses a credit-scoring model to monitor the credit quality of loans in pools, to determine the financial performance of approved sellers, and to determine if loan loss reserves are adequate. In addition, Farmer Mac stated that it uses credit scoring in connection with its credit approval process, but not as a determinative factor for credit approval.

We defined business risk as the possibility of financial loss due to conditions within the agricultural sector that affect loan performance. For example, Farmer Mac has business risk associated with being limited to operating in the agricultural and rural housing lines of business. Business risk cannot be easily measured, and many business risk factors are difficult to anticipate and control.
Farmer Mac is limited in its ability to manage its business risk exposure by legislation that requires it to serve a specific public mission. Farmer Mac's charter requires that its activities be concentrated in the buying and selling of agricultural and rural housing loans across the nation, in good and bad economic conditions. This requirement prohibits Farmer Mac from seeking alternative business opportunities to supplement, diversify, or replace current business when economic conditions or the promise of higher returns would lead a private firm into other lines of business. However, Farmer Mac can shift assets into new products and investments within its given line of business. Even though diversification standards were eliminated by the 1996 Act, Farmer Mac requires its loan pools to be diversified both geographically and with respect to agricultural commodities (products) to help it avoid large exposure to regional economic shocks. Farmer Mac has established a standard for the maximum percentage of the portfolio that a region or commodity can make up, which Farmer Mac said it periodically monitors for compliance.

Management and operations risk (subsequently referred to as management risk) is the possibility of financial loss resulting from a management mistake that can threaten the company's viability. In many respects, management risk encompasses all of the risks faced by Farmer Mac, including interest-rate, credit, prepayment, and business risks. For example, since Farmer Mac's management establishes loan standards and financing policies, its decisions determine Farmer Mac's exposure to credit and interest-rate risk. Generally, managers can expose Farmer Mac to losses through incompetence, inadequate planning, poor internal controls, risky business strategies, fraud, and negligence. Management risk is not easily quantified, but its control is crucial to the firm's successful operation. Farmer Mac generally controls its exposure to management risk through personnel administration, strategic and operational planning, its policymaking process, internal control systems, management information systems, and board of directors and management oversight of firm operations.

The dollar value of Farmer Mac's loan purchases and the size of the secondary market have both increased since passage of the 1996 Act. Both trends represent positive indicators of progress in fostering secondary market development. Even with this expansion, Farmer Mac-guaranteed securities and individual mortgage holdings accounted for about 1.2 percent of the agricultural mortgage debt outstanding as of the third quarter of 1998. This compares to the approximately 16 percent of multifamily residential mortgage loans accounted for by the housing enterprises (Fannie Mae and Freddie Mac) as of year-end 1997. Our analysis shows that Farmer Mac is currently viable in its agricultural mortgage mission activities. However, Farmer Mac's future viability depends on its growth potential in the secondary market for agricultural mortgages, and the prospects for realizing that potential are unclear. There are trends and events that could improve or worsen Farmer Mac's financial condition. If Farmer Mac develops new products that are attractive to lenders or if FCS institutions or other lenders increase participation in Farmer Mac programs, Farmer Mac's financial condition could improve.
However, events such as a less favorable interest-rate environment or declines in the credit quality of agricultural mortgages could reduce Farmer Mac's future profitability. Even if Farmer Mac continued to be economically viable under its current operating structure, it is difficult to determine whether the public benefits created justify continued government sponsorship. These public benefits could affect and be affected by the activities of two other GSEs: FCS and the FHLBank System. These benefits and costs are difficult to quantify.

Since the 1996 restructuring, two key measures show that Farmer Mac has made some progress in fulfilling its statutory mission by fostering secondary market development. Specifically, the dollar amounts of Farmer Mac's loan purchases and issued securities have both increased since passage of the 1996 legislation. To foster development of a secondary market, Farmer Mac must be able to sustain growth in the purchase of loans over time. Loan purchase data provided by Farmer Mac are shown in table 2. Loan purchases have continued to grow in both the Farmer Mac I and Farmer Mac II Programs. Of particular importance are (1) the sustained growth in the Farmer Mac I Program and (2) data showing that the upward trend in this program has been greater than the growth of the Farmer Mac II Program. Total loan purchase dollar volumes have increased since the 1996 Act.

The dollar value of Farmer Mac-guaranteed securities and loans held for securitization are also key indicators of secondary market development. After loans are purchased by Farmer Mac, they are grouped into packages, or pooled, and issued as Farmer Mac-guaranteed securities (i.e., AMBS). Farmer Mac either sells the securities to others or holds them in portfolio. Decisions to hold securities in portfolio or offer them for sale depend upon prevailing market conditions and are influenced by factors such as the market liquidity of the securities, the ability of investors to estimate the risks of holding the securities, and general market knowledge about and acceptance of the securities. As shown in table 3, the amount of Farmer Mac-guaranteed securities outstanding has more than doubled since year-end 1995. Additionally, the amount held in portfolio has been fairly stable while the amount held by others has grown, which indicates a growing acceptance of AMBS, leading to a broader secondary market.

Even though Farmer Mac's operating results have been positive since its 1996 statutory changes, its secondary market penetration rate (i.e., percentage share of the agricultural mortgage market) remains small and is low compared to the penetration rate of the housing enterprises in the residential secondary markets. In 1991, we reported that Farmer Mac's authorizing legislation indicated that Congress expected Farmer Mac would be able to develop a large, nationwide secondary market quickly and that it would be widely used. We also reported that secondary market development to that point had been slow and that the future was uncertain. While Farmer Mac's penetration of the agricultural mortgage market has been growing, it remains relatively small. As shown in table 4, at year-end 1995 there were about $517 million in Farmer Mac I and II securities outstanding. Agricultural mortgage debt outstanding at that time was about $84.8 billion; Farmer Mac's market penetration was about 0.6 percent of this total market.
Farmer Mac estimated that about half of the agricultural mortgage loans outstanding at that time met its underwriting standards. Thus, Farmer Mac's penetration would have been about 1.2 percent of the agricultural mortgages meeting Farmer Mac underwriting standards. As of the third quarter of 1998, Farmer Mac's market penetration rate was about 1.2 percent of the agricultural mortgage loans outstanding and about 2.4 percent of those estimated by Farmer Mac as meeting its underwriting standards. Of the approximately $94 billion in total agricultural mortgage debt outstanding as of the third quarter of 1998, about 31 percent was accounted for by FCS holdings, 29 percent by commercial banks, and 39 percent by other lenders, such as life insurance companies. The remaining approximately 1 percent was accounted for by Farmer Mac AMBS.

Farmer Mac's market penetration is low when compared to that of the housing enterprises. To provide perspective, we compared Farmer Mac's penetration to the housing enterprises' penetration of the conventional single-family (one- to four-unit) and conventional multifamily residential mortgage markets. As shown in figure 1, mortgage pools of the housing enterprises accounted for about 43 percent of conventional single-family and 16 percent of conventional multifamily residential mortgages outstanding as of year-end 1997. As of year-end 1980, about one decade after the housing enterprises were chartered as GSEs, the single-family market penetration rate was about 9 percent. The enterprises did not enter the conventional multifamily market until 1983. The multifamily penetration rate may provide a better comparison with Farmer Mac's penetration rate because multifamily mortgages, like agricultural mortgages, are supported by income flows from commercial properties. During its first decade of operations, Farmer Mac had the disadvantage of a more limited charter in relation to the housing enterprises. However, compared to the housing enterprises in their early years, Farmer Mac also had the advantage that securitization of a wide variety of financial assets had already been achieved. We recognize differences such as these in making our comparisons. We are not suggesting that Farmer Mac should mirror the market penetration levels achieved by the housing enterprises in their first decade of operations. Nor are we suggesting that Farmer Mac should be expected to reach the market penetration levels reached by the housing enterprises in the long term. For example, the possibly greater heterogeneity of borrowing farm operators and of farm properties serving as collateral for agricultural mortgages, even in comparison to multifamily residential mortgages, could lead to a different long-term outcome.

According to Farmer Mac's 1996 annual report, Farmer Mac had achieved limited penetration into the agricultural mortgage credit market because of the (1) historical preference of lenders, particularly FCS institutions, to retain loans in their own portfolios; (2) excess liquidity of many agricultural lenders; (3) disinclination of lenders to offer intermediate-term adjustable or long-term, fixed-rate loans as a result of higher profitability on short-term loans; and (4) lack of borrower demand for intermediate- and long-term loans due to the lower interest rates associated with short-term loans. Many of these factors are largely beyond the control of Farmer Mac.
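The penetration rates discussed above reduce to a single ratio, computed against either total agricultural mortgage debt or the portion estimated to meet Farmer Mac's underwriting standards. The sketch below uses the report's rounded third-quarter 1998 figures, so the second result differs slightly from the 2.4 percent cited above.

```python
def penetration(securities_outstanding, market_debt):
    """Share of agricultural mortgage debt represented by Farmer
    Mac-guaranteed securities."""
    return securities_outstanding / market_debt

total_debt = 94_000_000_000   # approx. agricultural mortgage debt, 3Q 1998
farmer_mac = 1_100_000_000    # approx. Farmer Mac securities outstanding
eligible_share = 0.5          # Farmer Mac's estimate of loans meeting its standards

print(f"{penetration(farmer_mac, total_debt):.1%}")
# about 1.2% of total agricultural mortgage debt
print(f"{penetration(farmer_mac, total_debt * eligible_share):.1%}")
# about 2.3% with these rounded inputs (the report cites about 2.4)
```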
Opinions from our survey support the first two reasons cited for limited penetration into the agricultural mortgage market. For example, 66 percent of nonparticipant lenders said that choosing to hold loans in portfolio was a reason that contributed to a "very great" or "great" extent to their decision not to sell loans to Farmer Mac. Having adequate funding to meet agricultural mortgage loan demand was also cited by 64 percent of the nonparticipant lenders. According to our survey, other reasons were considered to be significantly lower in importance.

One particular change to Farmer Mac's program, cited as encouraging participation by both approved sellers and nonparticipating lenders, was the elimination or phasing out of the prepayment penalty. Among the approved sellers we surveyed, 80 percent said that eliminating the penalty, and 54 percent said that phasing out the penalty, would encourage them to sell more loans to a "very great" or "great" extent. Among nonparticipating lenders, the corresponding figures for encouragement to participate in Farmer Mac programs were 47 percent and 35 percent, respectively. For both groups of respondents, eliminating or phasing out the penalty led the list of proposed changes that would encourage lenders to sell agricultural mortgage loans to Farmer Mac. However, phasing out or eliminating the penalty would increase the prepayment risks faced by investors and, therefore, could lead them to demand higher rates of return, raising the interest rates charged to borrowers whose loans are sold to Farmer Mac. In our survey, we did not ask lenders how much they thought borrowers would be willing to pay for eliminating or phasing out the prepayment penalty.

In addition to its rate of market penetration, another indicator of Farmer Mac's mission fulfillment would be a declining percentage of its nonmortgage investments compared to its agricultural mortgage-servicing portfolio. Such a decline would show that Farmer Mac was depending more on agricultural mortgages for viability and less on nonmortgage investments. In a previous report, we identified profits from nonmortgage investments (i.e., investments other than those in agricultural mortgages) as a primary source of income at Farmer Mac. These investments were part of Farmer Mac's debt issuance strategy. According to Farmer Mac officials, this strategy has the stated purpose of increasing Farmer Mac's presence in the capital markets and improving the pricing of its AMBS, thereby enhancing the attractiveness of the loan products offered through its programs for the benefit of agricultural lenders and borrowers. Farmer Mac officials told us that the strategy's contribution to mission achievement should develop over a reasonable period of time. In doing our previous work, we voiced a concern that Farmer Mac's temporary approach could become a permanent strategy to enhance profits even if it does not enhance Farmer Mac's ability to purchase agricultural mortgages. Farmer Mac held about $1.2 billion in nonmortgage investments as of December 31, 1998. These investments were about 60 percent of Farmer Mac's balance sheet assets and slightly more than the approximately $1.1 billion in Farmer Mac's agricultural mortgage-servicing portfolio. Interest income from nonmortgage investments is a significant source of income at Farmer Mac.
Although it is difficult to measure the overall benefits and costs associated with government sponsorship of Farmer Mac, a necessary condition for its overall benefits to exceed its costs is that Farmer Mac's direct economic benefits be positive. That is, Farmer Mac would have to be profitable, or economically viable, in carrying out its mission. If Farmer Mac cannot be profitable in its mission-related activities even with the implicit subsidy it receives from government sponsorship, it is not likely to be providing enough public benefit under its existing charter to justify the potential cost the implicit financial subsidy may impose on the federal government.

We constructed financial scenarios using various assumptions to help illustrate the relationship between Farmer Mac's secondary market penetration and its long-term ability to sustain mission viability. We considered the possibility of unfavorable economic conditions leading to no growth as well as favorable economic conditions leading to substantial growth in Farmer Mac's secondary market penetration. To take into account the uncertainties regarding Farmer Mac's future growth, we constructed two economic scenarios to help illustrate Farmer Mac's ability to sustain mission viability. We define mission viability as the ability of Farmer Mac to generate a profit from its core business of operating a secondary market in agricultural mortgages and to provide a reasonable return to its investors. Farmer Mac is owned by its shareholders, and its stock is publicly traded. In our analysis, viability is a long-term concept; the time horizon extends to a future point at which Farmer Mac could be characterized as a mature, rather than a newly created and growing, institution. Just as growth is uncertain, the number of years necessary for Farmer Mac to become a mature institution is uncertain. Farmer Mac has yet to pay dividends to its shareholders, but returns to shareholders have been generated by increases in Farmer Mac's stock price. For long-term viability, our scenarios require cash flows that eventually compensate shareholders for the opportunity costs of their financial capital investments and associated risks. This requires shareholders to receive a rate of return that is competitive with other investments. In our scenarios, we assume that the average required return on equity equals FCS' average return of 11.25 percent as of June 1998.

The first scenario holds the outstanding amount of Farmer Mac AMBS constant near its current level of about $1.5 billion, and the second scenario doubles Farmer Mac AMBS to $3 billion. The first scenario was constructed to illustrate whether Farmer Mac could be viable in the event that its mortgage-servicing portfolio did not substantially grow. The second scenario was constructed to illustrate Farmer Mac's viability if AMBS backed by agricultural mortgages experienced a substantial increase, that is, if they doubled. The calculations are presented in appendix III. The scenarios do not represent forecasts of the future. In presenting these scenarios, we rely on publicly available data and make a number of simplifying assumptions. Our results were sensitive to alternative assumptions and to our reliance on annual 1998 Farmer Mac financial performance data.
For example, the shares of Farmer Mac business accounted for by pre-1996 Act guarantee activity, post-1996 Act guarantee and purchase activity, and Farmer Mac II activity were affected by choosing annual 1998, rather than fourth-quarter 1998, financial performance data. Specifically, the post-1996 Act Farmer Mac I activity involves relatively higher guarantee fees and greater credit risk than the other activities. Although the fourth-quarter 1998 statistics may provide a more accurate basis than annual statistics for estimating future trends in some variables, such as guarantee fees, the fourth-quarter statistics may reflect temporary rather than sustainable levels of some other variables, such as loan loss provisions. For future nonmortgage investment holdings, we distinguished between (1) investment securities and (2) cash and cash equivalents. As of December 31, 1998, Farmer Mac’s nonmortgage investment holdings were about $1.2 billion, with $644 million accounted for by investment securities. On the basis of Farmer Mac’s view that the debt issuance strategy’s contribution to mission achievement should develop over a reasonable period of time, we arbitrarily reduced holdings of investment securities by half, to $322 million. Even with this reduction, Farmer Mac’s investment securities would account for larger shares of balance sheet assets and of the mortgage-servicing portfolio than such investments do for the housing enterprises. Cash and cash equivalents, which are short-term investments that can help Farmer Mac facilitate liquidity in the agricultural mortgage market, were kept constant at current levels. To calculate revenues for each scenario, we assumed that the split between AMBS held in portfolio and AMBS sold to investors would equal the percentage split as of December 31, 1998. We also used the year-end 1998 levels to specify average AMBS guarantee fees, average gain on AMBS issuance, and the average interest-rate spread between retained portfolio holdings (i.e., mortgage and nonmortgage investments combined) and debt costs. To determine expenses and opportunity costs for each scenario, we calculated Farmer Mac’s capital requirements on the basis of the current statutory minimum capital standards. We specified the required return on equity on the basis of the annual 1998 return on equity of 11.25 percent for FCS. We assumed the average provision for loan losses to equal the year-end 1998 average. Some expenses were treated as variable (depending on size); we calculated these expenses using average operating costs. Other expenses that we assumed to be subject to economies of scale were held constant. We recognize that by assuming fixed operating costs, we may have understated Farmer Mac’s costs, particularly in scenario 2, which anticipates a substantial expansion in Farmer Mac’s agricultural mortgage purchases. Our scenarios also did not incorporate Farmer Mac corporate income tax liabilities, which would have the effect of reducing after-tax corporate income. Results from the first scenario showed that Farmer Mac would have estimated revenues of $16.6 million and expenses of $15.6 million, or an economic profit of about $1 million. In the second scenario, Farmer Mac would have estimated revenues of $26.7 million and expenses of $20.4 million, or an economic profit of about $6.3 million. If we had developed scenarios with larger specified increases in Farmer Mac AMBS, estimated economic profits would have been greater than $6.3 million.
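To make the scenario arithmetic concrete, the following sketch reproduces the broad shape of the appendix III calculations. It is illustrative only: the inputs are the year-end 1998 figures cited in this report and in appendix III, but the split of the $9.3 million in other expenses between fixed operating costs and loan loss provisions, the size of the extra provision for the January 1999 swap transaction, and the use of fully phased-in statutory minimum capital rates (2.75 percent of on-balance-sheet assets plus 0.75 percent of off-balance-sheet obligations) are our assumptions, so the outputs approximate rather than reproduce the report’s estimates.

```python
# Illustrative sketch of the appendix III viability scenarios (dollar
# amounts in millions). The expense split, the swap loan-loss provision,
# and the minimum capital rates below are assumptions.

NET_YIELD = 0.0063               # 63-basis-point net yield on earning assets
ROE_REQUIRED = 0.1125            # FCS average return on equity
GUARANTEE_FEES = 3.727 + 1.346   # 1998 fees plus estimated Jan. 1999 swap fee
GAIN_ON_AMBS = 1.400             # calendar year 1998 gain on AMBS sales
MISC_INCOME = 0.142
FIXED_OPS = 7.6                  # assumed fixed operating expenses
LOAN_LOSSES = 2.2                # assumed loan loss provisions (incl. swap)

def economic_profit(ambs_on_balance, ambs_off_balance, scale=1,
                    investment_securities=322.0):
    """Revenues less expenses, counting required return on equity as a cost."""
    # Cash ($541M) and loans held for securitization ($168M) are held fixed.
    earning_assets = ambs_on_balance + investment_securities + 541.0 + 168.0
    revenue = (earning_assets * NET_YIELD
               + scale * (GUARANTEE_FEES + GAIN_ON_AMBS + MISC_INCOME))
    # Assumed statutory minimum capital: 2.75% of on-balance-sheet assets
    # plus 0.75% of off-balance-sheet obligations; no excess capital held.
    min_capital = 0.0275 * earning_assets + 0.0075 * ambs_off_balance
    expenses = FIXED_OPS + scale * LOAN_LOSSES + ROE_REQUIRED * min_capital
    return revenue, expenses, revenue - expenses

# Scenario 1: AMBS held constant ($552M retained; $598M plus the $408M swap
# off balance sheet). Scenario 2: outstanding AMBS doubled.
for label, args in (("Scenario 1", (552, 598 + 408, 1)),
                    ("Scenario 2", (1104, 2 * (598 + 408), 2))):
    rev, exp, profit = economic_profit(*args)
    print(f"{label}: revenue {rev:.1f}, expenses {exp:.1f}, profit {profit:.1f}")
    # Variant discussed below: remove the investment securities entirely.
    _, _, stripped = economic_profit(*args, investment_securities=0.0)
    print(f"  without investment securities: profit {stripped:.1f}")
```

Under these assumptions, the sketch also reproduces the effect, discussed below, of removing investment securities: scenario 1 falls to about breakeven and scenario 2 to roughly $5.3 million.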
However, our assumption of fixed operating costs would become more unrealistic in such a scenario. Farmer Mac’s nonmortgage investments affected the level of profitability in both scenarios. If we removed Farmer Mac’s investment securities from the scenarios, annual revenues would be reduced by about $2 million and the required return on equity would be reduced by about $1 million in each scenario. With these reductions, economic profit would fall to about zero (i.e., a breakeven level) in our first scenario and to about $5.3 million in our second scenario. In our 1998 report, we questioned the need for a mature GSE to hold long-term nonmortgage investments to fulfill its statutory mission. The Department of the Treasury agreed with our assessment when it commented on our 1998 report. Farmer Mac’s potential for growth will be affected by its ability to provide benefits to commercial banks, FCS institutions, and other agricultural mortgage lenders. In addition, a number of other factors could have a major impact on Farmer Mac’s viability. These factors include the following: (1) changing economic conditions in the national and agricultural economies, (2) potential changes affecting participation by FCS institutions in Farmer Mac programs, and (3) the risk-based capital standards to be promulgated by FCA. An important element in Farmer Mac’s growth potential is whether it continues to take actions intended to provide benefits to agricultural lenders. Our survey suggests that continued growth is possible. Seventy-three percent of the approved sellers who participated in our survey said that they are likely to increase sales to Farmer Mac in the next 3 years. In addition, about one-fourth of the nonparticipants responding to our lender survey said that they expect to begin participating in Farmer Mac programs in the next 3 years. To the extent that these lenders’ inclinations are carried out, they could enhance agricultural secondary market activity. Economic conditions in the national and agricultural economies can affect the size of the overall agricultural mortgage debt market, and Farmer Mac’s rate of growth will be affected to some extent by the size of this overall market. While the residential mortgage market has grown, agricultural mortgage debt has declined; the 1997 constant dollar value of agricultural mortgage debt outstanding was slightly more than half of its 1980 value. If the decline in the constant dollar value of agricultural mortgage debt continues, it could directly affect Farmer Mac’s growth potential. Economic conditions in the agricultural and aggregate national economies can affect participants in the primary and secondary mortgage markets in other ways. For example, over the last 5 years, the economic environment faced by financial institutions has generally been favorable, and interest rates have generally declined to relatively low levels. The agricultural economy has also been fairly strong. However, recent adverse trends in agricultural economic conditions, such as low commodity prices, reduced export demand, and weather-related problems in certain areas of the United States, have placed stress on the agricultural economy that could lessen the credit quality of agricultural mortgages. Over the past 2 years, delinquency rates on Farmer Mac I AMBS have generally been under 1 percent (except for the first quarter of 1998, when the rate was 1.15 percent).
During the fourth quarter of 1998, FCS experienced an increase in loan losses, and its provision for loan losses expanded dramatically. Farmer Mac’s ability to serve as a safety valve for the agricultural sector if FCS encountered difficulties has yet to be tested. However, one financial industry group we interviewed suggested that the next few years may test whether Farmer Mac could help in a situation similar to that of the 1980s, in which agricultural real estate prices plummeted, the credit quality of agricultural mortgages declined, and FCS essentially stopped making loans. Before Farmer Mac’s recent long-term standby purchase commitment transaction, FCS institutions had not been active participants in Farmer Mac programs. An expansion of FCS’ participation in Farmer Mac programs as a means to manage credit and interest-rate risks could help Farmer Mac’s business expand. Farmer Mac’s January 1999 transaction of $408 million with an FCS institution illustrates the use of a Farmer Mac program to manage credit risk, because Farmer Mac is providing a guarantee to the FCS institution in the event of borrower defaults. In providing this service, Farmer Mac has the ability to diversify its credit risk by purchasing agricultural mortgages throughout the nation. However, in evaluating credit risk, Farmer Mac is currently at a disadvantage compared to primary market lenders, who have personal relationships with borrowers and knowledge of their local economies. In the future, credit scoring, which has recently been introduced to evaluate credit risk associated with commercial lending, could help Farmer Mac evaluate credit risk. Farmer Mac also has the ability to help lenders manage interest-rate risk. Increased demand by agricultural borrowers for long-term, fixed-rate agricultural mortgages could help facilitate growth in Farmer Mac’s business as FCS lenders seek to manage the potentially higher level of interest-rate risk. Conditions for such expansion differ somewhat from those for participation by commercial banks. Specifically, FCS institutions have access to national capital markets; therefore, they may be better able than lenders who rely on their deposit bases to manage interest-rate risk without secondary market sales. Increased AMBS issuance could itself facilitate two other factors that could in turn promote greater expansion. First, increased AMBS issuance would expand the historical information available to investors on AMBS cash flow performance, which could improve investors’ ability to evaluate the risks of secondary market activity, lower the yields they demand, and thus promote a further increase in secondary market sales. Second, increased AMBS issuance could help Farmer Mac realize economies of scale, a condition in which average costs decline as output (in this case, loan purchases and other secondary market activity) increases. Farmer Mac is a relatively small corporation operating in a secondary market activity often characterized as exhibiting economies of scale. Therefore, in the presence of economies of scale, an expansion in Farmer Mac purchases could indirectly cause further expansion. In the future, the relative importance of FCS institutions and commercial banks in making agricultural mortgage loans could have an effect on Farmer Mac’s expansion. Commercial banks compete with FCS institutions in the primary mortgage market.
However, because they are GSEs, FCS institutions (1) are less likely than commercial banks to rely on Farmer Mac to help them manage interest-rate risk and (2) have been less likely than commercial banks to participate in Farmer Mac programs. If commercial banks continue to be more likely than FCS institutions to sell their agricultural mortgages to Farmer Mac, Farmer Mac’s expansion could be better served by an expansion in the share of agricultural mortgages originated by commercial banks. FCA has announced changes in its regulatory policies and practices that are intended to increase competition among FCS institutions. If this increased competition improves efficiency, the ability of FCS institutions to compete with commercial banks could improve. If this, in turn, led to an increase in FCS’ share of the primary market for agricultural mortgages, Farmer Mac’s growth potential could be constrained. The purpose of establishing a risk-based capital standard for Farmer Mac is to help ensure that its capital is aligned with the risks of its financial activities, including potential risks to taxpayers. Congress has recognized the role of risk-based capital standards in mitigating the risk to taxpayers from GSE financial activities. FCA has a congressional mandate to establish risk-based capital standards for Farmer Mac no sooner than February 1999. FCA must develop a stress test that exposes Farmer Mac to statutorily specified interest-rate and credit stresses. For example, the credit stress must be based on the worst credit conditions experienced by a region of the country accounting for at least 5 percent of the nation’s population. FCA issued an advance notice of proposed rulemaking in 1998 seeking comments on its possible use of loan credit performance data from the Farm Credit Bank of Texas in developing the capital standard. FCA plans to issue a notice of proposed rulemaking in 1999 seeking comments on its proposed risk-based capital standards for Farmer Mac. As of December 31, 1998, Farmer Mac held regulatory capital of $80.7 million, $30.5 million in excess of its regulatory minimum capital requirement of $50.2 million. Farmer Mac has stated that it does not expect that the risk-based capital standards will require it to raise additional capital. However, over the long term, the risk-based requirements could become more difficult to meet, and, under such circumstances, Farmer Mac might need to adjust its book of business or raise more capital to meet the standard. In such a situation, shareholders would likely require compensation for any additional equity investments. In turn, Farmer Mac’s funding costs could rise and its growth could be reduced. This possibility, in which Farmer Mac may be called upon to raise capital to mitigate risk to taxpayers from its financial activities, illustrates that Farmer Mac’s viability under current capital standards is not necessarily the proper basis for judging the benefits and costs of government sponsorship. Government sponsorship of a financial institution can generate a number of public benefits and costs, which are difficult to quantify. The benefits Farmer Mac can generate in the agricultural mortgage market depend on whether its new loan programs and products help agricultural lenders manage risks in ways that improve the loan terms offered to borrowers. Its potential costs depend on the likelihood that taxpayers may be called upon if Farmer Mac is unable to meet its obligations.
The net benefits and costs also depend on how Farmer Mac’s activities interact with those of the two other GSEs, FCS and the FHLBank System. Government sponsorship of a financial institution can generate a number of benefits. To the degree that lower funding costs and other benefits are passed on to borrowers in the affected financial sector, public benefits are generated. Special purpose charters can also give GSEs the motivation to make investments that enhance efficiency in the affected financial sector. For example, GSEs that create secondary markets have incentives to make investments that facilitate standardization. In a 1996 report, we found that government sponsorship of the housing enterprises was associated with lower interest rates on single-family residential mortgages and that the enterprises increased efficiency through greater standardization of mortgage products and processes. Government sponsorship of Farmer Mac has the potential, if Farmer Mac remains viable and continues to grow, to generate benefits through loan programs and products that help agricultural lenders manage risks. Government sponsorship also generates potential public costs. One potential cost is that taxpayers could be called upon if a GSE is unable to meet its financial obligations. Such a situation occurred in the late 1980s, when FCS encountered financial difficulties. Opportunity costs can also be generated when the implied backing of certain financial institutions diverts funding from other financial institutions that may be able to serve the sector more efficiently. For example, government sponsorship of Farmer Mac could reduce the incentives of other financial institutions to develop secondary market products of value to agricultural lenders. In addition, opportunity costs can be generated when the implied backing of financial institutions serving a specific sector diverts funding from other sectors. As previously discussed, one limitation on Farmer Mac’s growth could be its inability to reach a size sufficient to generate economies of scale. One approach to improving its growth potential could be to expand Farmer Mac’s charter beyond agricultural mortgages, for example, to other rural and agricultural loans. While such an expansion could increase the scope of potential benefits generated by Farmer Mac, it could also increase potential costs and could affect both FCS and the FHLBank System. The financial performance and benefits provided by FCS and the FHLBank System affect Farmer Mac and are affected by Farmer Mac’s charter authorities and activities. For example, Farmer Mac’s current programs and products provide an alternative funding source for agricultural mortgage lenders, such as commercial banks, that compete with FCS institutions in the primary mortgage market. The AgVantage Program competes with FHLBank advances to rural lending institutions. In July 1998, the FHLBank System’s regulator, the Federal Housing Finance Board, authorized mortgages on farm properties on which a residence is located and constitutes an integral part of the property as collateral for advances received by FHLBank member institutions with total assets of $500 million or less. Given the current degree of overlap between Farmer Mac’s activities and those of the other GSEs, any expansion of Farmer Mac’s charter would probably have effects on these other entities that would need to be taken into consideration.
Removing Farmer Mac’s charter would eliminate the potential benefits and costs resulting from its activities as a GSE; it could also affect the public benefits and costs associated with FCS and FHLBank System activities. Likewise, expansions in FCS lending to a wider variety of companies participating in the agricultural economy could create benefits. However, such expansion could increase the potential costs from government sponsorship of FCS, reduce agricultural loans made by depository institutions, and reduce agricultural mortgage loans sold to Farmer Mac. Expansion in the FHLBank System, such as the recent expansion to include certain agricultural mortgages as eligible collateral for obtaining FHLBank advances, may also limit Farmer Mac’s growth potential. In summary, charter revisions, regulatory changes, or other actions affecting the activities of each GSE in relation to agricultural and rural finance could in turn affect the financial performance and benefits generated by the other GSEs. The share of loans in a primary market that are sold by lenders in a secondary market depends on the benefits generated by the secondary market. Farmer Mac has used its post-1996 charter authorities to streamline the process for buying loans and to develop new programs and products that have provided an alternative funding source for some agricultural lenders. Farmer Mac has also standardized some aspects of the secondary market transaction by requiring participating agricultural mortgage lenders to make representations and warranties that their loans meet Farmer Mac underwriting standards, but it has not standardized loan documents because state laws governing agricultural mortgage loans and agricultural lending practices vary. In addition, Farmer Mac employs risk management techniques to measure and manage its various risks and to help ensure that it conducts its secondary market operations in a safe and sound manner. We noted that elements of Farmer Mac’s risk management techniques appeared to be generally consistent with industry risk management principles. Since its 1996 restructuring, Farmer Mac has made some progress in developing a secondary market in agricultural mortgages, but it currently has a relatively small market presence; Farmer Mac is a niche player as a secondary market entity in the agricultural mortgage market. It appears that Farmer Mac can be viable if it continues to expand, if it experiences returns comparable to current levels, and if economic conditions in the overall and agricultural economies of the nation remain stable. Even if Farmer Mac continued to be economically viable under its current operating structure, it is difficult to determine whether the public benefits created justify continued government sponsorship. The future benefits from government sponsorship of Farmer Mac are potentially limited by possible expansions of competing FHLBank funding alternatives and increased competitive pressures from FCS institutions. Therefore, the potential public benefits created from government sponsorship of Farmer Mac could be affected by legislative, regulatory, and other developments affecting the FHLBank System and FCS as well as Farmer Mac. Farmer Mac, FCS, and the FHLBanks now offer programs that compete directly and indirectly with one another. Therefore, the public benefits and costs of these three GSEs are interrelated.
Congressional committees with jurisdiction may want to consider interactions among the activities and the charters of these three GSEs as part of their ongoing oversight. We received comments on a draft of this report from Farmer Mac; these written comments are provided in appendix IV. Farmer Mac said that, in general, it did not disagree with our statements on the background, history, and progress of Farmer Mac’s development. However, Farmer Mac disagreed with our (1) conclusion that it is difficult to determine whether the public benefits created justify continued government sponsorship of Farmer Mac, (2) comparison of Farmer Mac’s secondary market penetration to that of the housing enterprises, and (3) characterization that Farmer Mac continues to rely on nonmortgage investments as a primary source of income. Farmer Mac stated, “The Report’s ultimate conclusion, that it is difficult to determine whether the public benefits provided by Farmer Mac justify continued government sponsorship, is inconsistent with the GAO’s findings and analyses regarding Farmer Mac’s economic viability and program development.” Farmer Mac agreed with the positive findings about its development and economic viability and said that our ultimate conclusion conflicted with these findings. Farmer Mac also took issue with our conclusion that the activities of FCS and the FHLBank System affect the net public benefits provided by Farmer Mac. In addition, Farmer Mac stated that we should have, but did not, take into account its contribution to a more efficient agricultural credit market and to the availability of a competitive supply of mortgage credit for agricultural borrowers. In this report, a number of factors contribute to our conclusion that it is difficult to determine whether the public benefits created justify continued government sponsorship of Farmer Mac. First, although our analysis shows that Farmer Mac is currently viable in its agricultural mortgage mission activities, its growth potential in the secondary market for agricultural mortgages and the prospects for realizing that potential are unclear. Since its restructuring resulting from the 1996 Act, Farmer Mac has experienced a favorable interest-rate environment that has contributed to profitability for financial institutions in general. Perhaps of greater importance, its agricultural mortgage-servicing portfolio has not been subject to major credit stress, such as a prolonged increase in default rates. Therefore, its ability to manage interest-rate and credit risks under stressful conditions has not been tested since the 1996 Act. In addition, while Farmer Mac competes in various ways with FCS institutions and the FHLBank System, interaction between Farmer Mac and FCS institutions is subject to countervailing forces. On the one hand, as we explain in this report, FCS institutions have access to GSE-issued debt and, therefore, may not have the same incentives as banks to sell mortgages to Farmer Mac to manage interest-rate risk. If banks remain more likely than FCS institutions to use Farmer Mac’s products, the possibility of an expanded presence in agricultural lending (i.e., directly or indirectly) by FCS or the FHLBank System, as explained in this report, could lessen the potential benefits to be generated by Farmer Mac. On the other hand, Farmer Mac has completed transactions with FCS institutions, but its limited experience to date is not sufficient to establish the likelihood of any future trend for its business with FCS.
In addition, a more expansive definition of eligible mortgages on farm properties as collateral for FHLBank advances has increased the potential for competition between Farmer Mac’s AgVantage Program and FHLBank advances. Since 64 percent of the Farmer Mac approved sellers responding to our survey indicated that they were also members of the FHLBank System, the potential for overlap between GSE programs is significant. Because there is overlap in the three GSEs’ activities, it is not clear how much value is added by a given GSE’s existence that would not be generated by the others in its absence. Most importantly, viability is not necessarily the only proper measure of the benefits and costs of government sponsorship. At some places in its comments, Farmer Mac appears to imply that viability is sufficient to indicate that its public benefits are greater than its public costs, although elsewhere it suggests that broader benefits might result from increased lender competition and wider availability of credit. There are a number of public costs and benefits that are not included in the viability measure. For example, due to potential liabilities and opportunity costs associated with government sponsorship, it is possible that the public costs generated by Farmer Mac’s activities may exceed its private costs. One potential public cost is that taxpayers could be called upon if a GSE is unable to meet its financial obligations. Opportunity costs can also be generated when the implied backing of certain financial institutions diverts funding from other financial institutions that may be able to serve the sector more efficiently. These potential costs resulting from government sponsorship of Farmer Mac cannot be statistically estimated. Farmer Mac also stated that a broader standard than economic results, focusing on the secondary market’s contribution to increased lender competition and wider availability of agricultural mortgage credit, would be a more appropriate measure of public benefits than viability. To the extent that Farmer Mac develops unique programs and processes that improve the efficiency of agricultural mortgage markets, the public benefits from such functions can exceed the economic returns to Farmer Mac (i.e., spillover public benefits can be created). However, in activities where the GSEs provide similar or overlapping functions, market shifts among the GSEs are less likely to generate such spillover benefits. In the absence of statistical measures of lender competition and agricultural mortgage availability, these potential benefits also cannot be statistically estimated. In light of the difficulty of measuring these potential costs and benefits and of predicting Farmer Mac’s growth potential, we concluded that it is difficult to determine whether the net public benefits resulting from government sponsorship of Farmer Mac justify continued government sponsorship. Farmer Mac said that our draft report misleadingly compared Farmer Mac’s market penetration to that of the housing enterprises during different time frames. Farmer Mac noted that the comparison in the draft report did not account for differences in operating charters, stage of development when the respective GSEs were created, and available resources to foster secondary market development.
For these reasons, Farmer Mac stated that our report “contains no valid foundation for the finding that Farmer Mac’s market penetration at its early stage of development is low compared to the housing enterprises and all references to that effect should be deleted from the report.” In contrast to our conclusion, Farmer Mac stated, “…we believe the correct finding is that Farmer Mac’s 2% market penetration during its first three years of operations compares very favorably to the housing GSEs’ progress in the multifamily market, which is the more appropriate market for comparison with agricultural mortgages.” A major point of the section of the report containing these comparisons is that Farmer Mac’s penetration of the agricultural mortgage market is relatively small. As Farmer Mac stated in its 1998 annual report, its $1.3 billion of secondary market activity at December 31, 1998, represented only 1.5 percent of all outstanding agricultural mortgages. In comparing Farmer Mac’s penetration to that of the housing enterprises over the first decade of their operations as GSEs, we recognize differences in operating charters, stage of development when the respective GSEs were created, and available resources to foster secondary market development. However, we believe that these market penetration comparisons, especially with multifamily residential mortgages, provide useful perspective in analyzing Farmer Mac’s development. In making these comparisons, we were not suggesting that Farmer Mac should mirror the market penetration levels achieved by the housing enterprises in their first decade of operations. Nor were we suggesting that Farmer Mac should be expected to reach the market penetration levels reached by the housing enterprises in the long term. For example, the possibly greater heterogeneity of borrowing farm operators and of farm properties serving as collateral for agricultural mortgages, even in comparison to multifamily residential mortgages, could lead to a different long-term outcome. We have revised the report to clarify our purpose in making these market penetration comparisons. Farmer Mac was established as a GSE in 1988, and the 1996 Act made Farmer Mac’s operating structure essentially the same as Freddie Mac’s and Fannie Mae’s. Fannie Mae and Freddie Mac became GSEs in 1968 and 1970, respectively; prior to 1968, Fannie Mae was a government corporation. Freddie Mac was one of the first financial institutions in the nation to develop the ability to buy loans, form loan pools, and issue securities backed by loan pools. Fannie Mae began to issue securities backed by loan pools in the 1980s. As stated in this report, the housing enterprises did not enter the conventional multifamily market until 1983. In relation to differences in operating charters, Farmer Mac’s original charter was more limited than the housing enterprises’ charters in that it required Farmer Mac to operate through third-party poolers and to establish a mandatory reserve or subordinated interest in its guarantee function. The housing enterprises did not have these constraints. However, Farmer Mac also had advantages during its first decade of operations compared to the housing enterprises during their first decade of operations as GSEs. Farmer Mac had the benefit of learning from the experiences of the housing enterprises, because it began operations in 1988, after the housing enterprises had developed the securitization concept for residential mortgages.
In the 1990s, Farmer Mac also had the benefit of observing and learning from the dramatic expansion in securitization of residential mortgages and other financial assets. While Fannie Mae had been a government corporation before it was established as a GSE, this advantage was limited because Fannie Mae operated as a GSE for over a decade before it securitized residential mortgages. Regarding available resources to foster secondary market development, the housing enterprises initially focused their available resources on establishing a secondary market in single-family rather than multifamily residential mortgage loans. In contrast to the housing enterprises’ lack of focus on multifamily mortgages, Farmer Mac, consistent with its statutory authority, has focused its available resources on establishing a secondary market in agricultural mortgages. Farmer Mac said that our draft report incorrectly stated that interest income from nonmortgage investments continues to be a primary source of income at Farmer Mac. Farmer Mac said that net interest income from investments, including interest on cash and cash equivalents, was about one-quarter of Farmer Mac’s total revenues in 1998. Farmer Mac also stated that its nonmortgage investments are part of its debt issuance strategy, which in turn is part of a broader business strategy to achieve increased market presence for Farmer Mac securities. Farmer Mac stated that the draft report should be revised to reflect more accurately the purposes of its debt issuance strategy and to characterize nonmortgage interest income as a minor rather than a primary source of income. We relied on a number of statistical indicators for our analysis of interest income from nonmortgage investments. None of these indicators provided a precise measure of the percentage of Farmer Mac’s net income accounted for by nonmortgage investments, because data are not publicly available on the allocation of Farmer Mac’s interest and operating expenses among its various financial activities. In our 1998 report, we indicated that as of June 30, 1997, Farmer Mac’s nonmortgage investments of $931 million represented about 66 percent of Farmer Mac’s assets. In its comment letter on our 1998 report, Farmer Mac stated that its income from nonprogram investments represented about 38 percent of total net income. As of December 31, 1998, Farmer Mac held $1.18 billion in nonmortgage investments (including cash and cash equivalents), which accounted for about 61 percent of total assets. As indicated in our report, Farmer Mac AMBS held by other investors have grown dramatically, which would logically lessen the relative importance of nonmortgage investment income compared to earlier periods. Because of the difficulty in precisely determining the importance of interest income from nonmortgage investments, we now characterize it as a significant rather than a primary source of income at Farmer Mac. Farmer Mac has stated that the purpose of its investment policy is to increase its presence in the capital markets. In its comments, Farmer Mac stated that the number of investors purchasing Farmer Mac’s debt and mortgage-backed securities has increased significantly as the market acceptance and liquidity of the securities have improved. As we stated in our 1998 report, these developments could be beneficial to achieving Farmer Mac’s mission if they lead to benefits to Farmer Mac that are then passed on to borrowers in the form of more favorable loan terms.
As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 20 days after its issue date. At that time, we will send copies of this report to Representative Paul E. Kanjorski, Ranking Minority Member, of your Subcommittee; Senators Richard G. Lugar, Chairman, and Tom Harkin, Ranking Minority Member, of the Senate Committee on Agriculture, Nutrition and Forestry; Representatives Larry Combest, Chairman, and Charles W. Stenholm, Ranking Minority Member, of the House Committee on Agriculture; Senators Phil Gramm, Chairman, and Paul S. Sarbanes, Ranking Minority Member, of the Senate Committee on Banking, Housing and Urban Affairs; Representatives Jim Leach, Chairman, and John J. LaFalce, Ranking Minority Member, of the House Committee on Banking and Financial Services; Henry Edelman, President and Chief Executive Officer of Farmer Mac; and Marsha Pyle Martin, Chairman and Chief Executive Officer of FCA. We will also make copies available to others on request. Major contributors to this report are listed in appendix VI. Please contact me or William Shear, Assistant Director, at (202) 512-8678 if you or your staff have any questions. To help determine the potential market benefits from a government-sponsored secondary market for agricultural loans, we surveyed all 263 financial institutions that were currently approved to sell loans to Farmer Mac (as of Oct. 1998) and a sample of 334 commercial banks and insurance companies (as of Oct. 1998) that were not currently approved sellers but had been targeted by Farmer Mac as program candidates. We asked officials at these financial institutions for their views on Farmer Mac programs and the secondary market for agricultural loans, their use of Farmer Mac services, and their behavior in the agricultural lending market. We conducted this mail questionnaire survey beginning in November 1998 and had received 200 usable responses from approved sellers and 189 responses from nonparticipants by mid-February 1999. Our ideal target populations were current participants in Farmer Mac programs and comparable institutions not currently approved to participate in any Farmer Mac programs. The actual study populations we were able to survey were limited to those defined by available Farmer Mac records. We obtained a list of 263 financial institutions that had been approved to originate or pool agricultural loans and then sell them to Farmer Mac. Farmer Mac also provided us with a list of 331 nonparticipating (not approved to sell loans to Farmer Mac) commercial banks that met the marketing criteria developed by Farmer Mac. These banks had been designated by Farmer Mac as banks with significant potential for becoming Farmer Mac approved sellers. To this list of nonparticipants, we added three large insurance companies that were active in agricultural mortgage lending but were not Farmer Mac members. We chose to survey all 263 approved sellers and all 334 nonparticipating institutions. No stratification or random probability sampling was used to select elements from the study populations. We created two self-administered mail questionnaires, one for approved sellers and another for nonparticipants. See appendix II for reproductions of the questionnaires and the results of the survey. To develop the questionnaires, we consulted officials from Farmer Mac and experts in the field of agricultural finance and asked them to review the draft questionnaires.
We also conducted six pretest interviews by telephone with a variety of institutions selected from both survey populations. The information gathered from these sources was used to improve the structure of the questionnaires and the wording of individual questions and response choices. We mailed questionnaires to our samples on November 12, 1998. For the approved seller survey, we addressed the questionnaires to the individuals identified as contacts in Farmer Mac’s records. For the nonparticipant survey, questionnaires were addressed to the president or chief executive officer of the institution. Respondents were instructed to mail or fax their completed questionnaires. On December 3, 1998, we mailed replacement questionnaires to those who had not yet responded, and on December 23, 1998, we sent an additional follow-up mailing to the remaining nonrespondents. In early February 1999, we selected 6 approved sellers from the 57 who had not yet responded and 11 nonparticipants from the 105 who had not yet responded, and we telephoned them to determine their reasons for nonresponse and to prompt them to return their questionnaires. During our fieldwork, we discovered that 37 institutions on the nonparticipant sample frame were already represented in the approved seller sample and were in fact approved sellers. These duplicate cases were removed from further consideration as nonparticipant sample elements. When our fieldwork concluded in mid-February 1999, we had received 200 usable approved seller questionnaires and 167 usable nonparticipant questionnaires. In addition, some of the nonparticipating institutions that did not return questionnaires told us that they did no agricultural lending or reported that their answers were included in a questionnaire returned by another surveyed institution in the same bank holding company; we counted these as substantive responses. A total of 189 responses from nonparticipants were received. The final response rate was 77 percent for the approved seller survey and 66 percent for the nonparticipant survey. See table I.1 for a more complete description of the dispositions of our survey samples. Although we did not use random probability sampling techniques to select our samples, and therefore our survey results are not subject to sampling error (imprecision in survey estimates caused by the natural variation that can occur among different possible samples of the same size), the practical difficulties of conducting any survey may introduce other types of errors. As discussed in the remaining text of this appendix, we took steps to minimize the extent of such errors. Surveys may be subject to coverage error, which occurs when the sampling frame does not fully represent the target population of interest. For our seller survey, Farmer Mac gave us a list of approved sellers as of October 1998; we did not verify this list, nor did we necessarily capture all sellers approved after that date but before our survey ended. Because the nonparticipant sample was uniquely defined as those institutions that Farmer Mac was targeting as possible candidates for membership, it would not be subject to coverage error as commonly defined. Measurement errors are defined as differences between the reported and true values of a characteristic under study.
Such errors can arise from differences in how questions are interpreted by respondents, deficiencies in the sources of information available to respondents, misreporting by respondents, or poorly designed questions. We received expert review of our survey questionnaires from a nationally recognized survey firm retained by Farmer Mac. We also conducted pretests with sampled respondents to minimize such measurement errors. Nonresponse error arises when surveys are unsuccessful in obtaining any information from eligible elements or fail to get valid answers to individual questions on returned questionnaires. To the extent that those not providing information would have provided significantly different information from those who did respond, bias from nonresponse can result. Because the seriousness of this type of error is often proportional to the level of missing data, response rates are commonly used as indirect measures of nonresponse error. We took steps to maximize response rates, such as multiple mailings and telephone calls to convert nonrespondents. In addition, during telephone follow-up with 17 nonrespondents, we asked why they had not yet responded; none of their answers suggested views that would differ substantially from those of the institutions that did respond. Finally, surveys may be subject to processing error in data entry, processing, and analysis. We verified the accuracy of a small sample of keypunched records by comparing them to their corresponding questionnaires, and we corrected the errors found. Less than 1 percent of the data elements we checked had random keypunch errors that would not have been corrected during data processing. In addition, we performed diagnostics to check the reliability of results during the processing and tabulation of survey data. Analysis programs were also independently verified. We did not, however, verify the substantive answers given by survey respondents. Scenario I addresses the following question: would Farmer Mac have been viable in 1998 (profitable, including a reasonable return to shareholders) at its current agricultural mortgage market level but with a 50-percent reduction in its investment security portfolio? The scenario’s assumptions are as follows. The agricultural mortgage debt market remains the same size. The secondary market for agricultural mortgage securities outstanding remains constant at the December 31, 1998, levels. This includes $552 million held in Farmer Mac’s portfolio (on balance sheet) and $598 million held by others (off balance sheet). However, we have included the $408 million swap transaction announced in January 1999 and made adjustments to guarantee fee income and loan loss provisions for this transaction. Thus, the size of the secondary market for scenario I is $1.558 billion. The net yield on balance sheet interest-earning assets is 63 basis points for calendar year 1998. (Net interest income for calendar year 1998 was $10.569 million, and the average balance of interest-earning assets was $1,682 million; the average net yield on interest-earning assets was therefore $10.569 million / $1,682 million, or 63 basis points.) Investment securities are reduced by 50 percent from the December 31, 1998, total of $644 million to $322 million. Cash and cash equivalents and loans held for securitization remain at the December 31, 1998, totals of $541 million and $168 million, respectively. Guarantee fees remain constant at the calendar year 1998 total of $3.727 million.
Gains on the sale of Farmer Mac agricultural mortgage-backed securities (AMBS) remain constant at the calendar year 1998 total of $1.400 million. Miscellaneous income remains constant at the calendar year 1998 total of $0.142 million. Other expenses/loan loss reserves remain constant at the calendar year 1998 total of $9.323 million, except for an extra provision for the swap transaction. The average required return on equity is assumed to equal the Farm Credit System’s average return of 11.25 percent at June 1998. No capital above the minimum capital standards is retained. [Table: scenario I annual revenues, consisting of the net spread on balance sheet assets (63 basis points), guarantee fees ($3.727 million), the estimated guarantee fee from the January 1999 swap ($1.346 million), gain on sale of AMBS ($1.400 million), and miscellaneous income ($0.142 million); annual expenses exclude the interest expense netted out above.] Scenario II addresses the following question: what would Farmer Mac’s situation be if it were able to double its December 31, 1998, market share from $1.558 billion (the amount in scenario I) to $3.116 billion in outstanding Farmer Mac securities? The scenario’s assumptions are as follows. The agricultural mortgage debt market remains the same size. The $3.116 billion in outstanding Farmer Mac securities consists of $1.104 billion on balance sheet and $2.012 billion off balance sheet. This is the same distribution between on and off balance sheet as used in scenario I but doubles the amounts of agricultural mortgage securities outstanding. All other assets are fixed at the same levels as in scenario I. The net yield on balance sheet interest-earning assets is 63 basis points, as in scenario I. Revenues from guarantee fees, gains on AMBS issuance, and miscellaneous sources are doubled to reflect the doubling of the outstanding securities. Other expenses are considered fixed except for loan losses; loan loss expense is doubled to reflect the doubling of the outstanding securities. The average return on equity is computed at 11.25 percent, as in scenario I. [Table: scenario II annual revenues double the scenario I amounts: guarantee fees ($3.727 million annually x 2), estimated swap guarantee fee ($1.346 million x 2), gain on sale of AMBS ($1.400 million annually x 2), and miscellaneous income ($0.142 million annually x 2); annual expenses exclude the interest expense netted out above.] The following are GAO’s comments on the Farmer Mac letter dated May 10, 1999. 1. We discussed Farmer Mac’s accomplishments and growth throughout this report. 2. Farmer Mac suggested that instead of total assets, we use earning assets, a slightly smaller number, in constructing our scenarios on its future viability. Additionally, Farmer Mac suggested that we use 63 basis points to calculate net interest income, since that was the 1998 yield on the average balance of earning assets. These suggested changes have been incorporated into the scenario calculations. Farmer Mac also disagreed with the approach used in the economic scenarios, in which we include return on equity as an opportunity cost; Farmer Mac referred to our approach as a novel concept not supported by generally accepted accounting principles. We measured long-term viability using an economic rather than an accounting definition of profit. This approach requires that equity investors receive a rate of return that compensates them for the opportunity cost of their equity investment. The approach is based on accepted principles used in economics and finance. 3. In its comments, Farmer Mac stated that under both of our scenarios, Farmer Mac meets our viability test. Further, Farmer Mac said its track record is that of a growing and innovative company, and it reiterated the positive findings of our survey results.
Among those who participated, 73 percent of the approved sellers said they are likely to increase sales to Farmer Mac in the next 3 years, and about one-fourth of nonparticipants expect to begin participating in Farmer Mac programs in the next 3 years. To put these findings in perspective, we have clarified our report to show that these findings represent lenders’ inclinations about future actions, which could enhance agricultural secondary market activity, but only to the extent that they are carried out. 4. Farmer Mac stated that a significant portion of the draft report was written with the phrase “is to,” as if to suggest that the matters under discussion are to be, but have not yet been, implemented. We use “is to” in cases where we have not verified the actions. In cases where we reviewed documents verifying that action had been taken, the text of the report has been changed to reflect that the respective action was taken. 5. Farmer Mac took issue with a statement that secondary market entities have relatively less ability than lenders to rely on borrower relationships to assess credit risk. The comment cited standards Farmer Mac has in place for underwriting, appraisal, and field servicers. We added qualifying language based on Farmer Mac’s comment. 6. Farmer Mac asked that we clarify its position concerning the standardization of loan documents. The President and Chief Executive Officer of Farmer Mac stated that it is not Farmer Mac’s position that further standardization lacks merit, but that the costs to Farmer Mac of achieving further standardization of loan documents exceed the benefits of doing so at this time. We have incorporated this position into the report. 7. Farmer Mac stated that it monitors delinquency rates on a monthly rather than a quarterly basis; the report now reflects this change. Additionally, in commenting on our report, Farmer Mac referred to our statement that “Farmer Mac does not have a history of loan performance data, so it uses the historical loan loss data of a FCS institution” for loss estimation purposes and noted that it does maintain an extensive loan information database on all loans it purchases. We added qualifying language to our report to address this comment. 8. Farmer Mac took issue with our statement that it contracts out certain functions where it lacks in-house expertise. In its response, Farmer Mac stated that it was a business decision to contract out certain functions to take advantage of the experience and efficiency of outside resources. The report has been revised accordingly. 9. In its comment letter, Farmer Mac noted that the draft report incorrectly stated that Farmer Mac does not incorporate credit scoring into its loan approval process. We revised the text to note that Farmer Mac does use credit scoring in connection with the credit approval process, but not as a determinative factor for credit approval. 10. Farmer Mac took issue with our statement that if the decline in the constant dollar value of agricultural mortgage debt continues, it could directly affect Farmer Mac’s growth potential. Farmer Mac stated the opinion that it would have significant opportunity even if a decline in agricultural mortgage debt occurred. Because Farmer Mac’s rate of growth could be affected to some extent by the size of the overall market, we have not changed this language.
Underwriting standards are to be used by Farmer Mac to determine which mortgages it will buy, which it could then choose to hold as investments or place into mortgage pools. Generally, eligible loans must meet each of the standards. The standards are meant to limit the risk that the mortgages will create losses for the pools or Farmer Mac by ensuring that the borrower has the ability to pay; that the borrower is creditworthy and is likely to meet scheduled payments; and that, in the event of default, the value of the agricultural real estate limits any losses. Farmer Mac requires lenders to provide representations and warranties to help ensure that the qualified loans conform to these standards and other requirements of Farmer Mac. Farmer Mac’s underwriting standards have elements (e.g., factors such as past credit history and current and projected income and expenses that reflect the potential borrower’s willingness and ability to repay the loan) that are similar to the standards in the housing secondary markets. The underwriting standards are based on credit ratios, other quantitative measures, and qualitative terms. Farmer Mac’s underwriting standards, by law, may not discriminate against small agricultural lenders or small loans of at least $50,000. Farmer Mac has nine underwriting standards for newly originated loans, each of which is summarized below. A newly originated loan is one that was originated less than a year earlier. Standard 1: Creditworthiness of the Borrowers. Standard 1 requires consideration of the five Cs of credit (character, capital, capacity, collateral, and conditions) for each loan and requires loan originators to obtain complete and current credit reports for each borrower. The credit report must include historical experience, identification of all debts, and other pertinent information. All sellers are required to verify all information contained in the credit report. Standard 2: Balance Sheets and Income Statements. This standard requires the loan applicant to provide fair market value balance sheets and income statements for at least the last 3 years. Standard 3: Debt-to-Asset (or Leverage) Ratio. The entity being financed should have a pro forma debt-to-asset ratio of 50 percent or less on a market value basis. The debt-to-asset ratio is calculated by dividing pro forma liabilities by pro forma assets; a pro forma ratio shows the impact of the amount borrowed on assets and liabilities. Standard 4: Liquidity and Earnings. The entity being financed should be able to generate sufficient liquidity and net earnings, after family living expenses and taxes, to meet all debt obligations as they come due over the term of the loan and to provide a reasonable margin for capital replacement and contingencies. This standard is met by having a pro forma current ratio of not less than 1.0 and a pro forma total debt service coverage ratio of not less than 1.25, after family living expenses and taxes. The current ratio is calculated by dividing pro forma current assets by pro forma current liabilities. The total debt service coverage ratio is calculated by dividing net operating income by annual debt service; net income from farm and nonfarm sources may be included. Standard 5: Loan-to-Value (LTV) and Cash Flow/Debt Service Coverage Ratio.
The LTV ratio should not exceed 70 percent in the case of a typical Farmer Mac loan secured by agricultural real estate, 75 percent in the case of qualified facility loans, or 85 percent in the case of part-time farm loans, with private mortgage insurance coverage required for amounts above 70 percent. A cash flow/debt service coverage ratio of not less than 1.0 from the subject real estate securing the loan is required, except for loans in which the borrower’s principal residence is on the property securing the loan. The pro forma total debt service coverage ratio of the entity to be financed must not have been less than 1.50 for the last 3 years. The LTV ratio is important in determining the probability of default and the magnitude of loss. Standard 6: Minimum Acreage and Annual Receipts Requirement. To be eligible to secure a qualified loan, agricultural real estate must consist of at least five acres or be used to produce annual receipts of at least $5,000. Standard 7: Loan Conditions. The loan (1) must be at a fixed payment level and either fully amortize the principal over a term not to exceed 30 years or amortize the principal according to a schedule not to exceed 30 years and (2) must mature no earlier than the time at which the remaining principal balance (i.e., balloon payment) of the loan equals 50 percent of the original appraised value of the property securing the loan. The amortization is expected to match the useful life of the mortgaged asset, and payments should match the earnings cycle of the farm operations. For facilities, the amortization schedule should not extend beyond the useful agricultural economic life of the facility. Standard 8: Rural Housing Loan Standards. Farmer Mac has adopted the credit underwriting standards applicable to Fannie Mae, adjusted to reflect the usual and customary characteristics of rural housing. These standards include, among other things, allowing loans secured by properties that are subject to unusual easements, that have larger sites than normal residential properties in the area, and that are located in areas that are less than 25 percent developed. Standard 9: Nonconforming Loans. On a loan-by-loan basis, Farmer Mac may decide to accept loans that do not conform to one or more of the underwriting standards or conditions, with the exception of standard 5. Farmer Mac may accept loans that have compensating strengths that outweigh their inability to meet all of the standards; examples of compensating strengths include substantial borrower net worth and a large borrower down payment. The granting of standard 9 exceptions is not intended to provide a basis for waiving or lessening in any way Farmer Mac’s focus on buying only high-quality loans. According to a Farmer Mac official, nonconforming loans currently comprise about 10 percent of the loans approved for sale to Farmer Mac. In addition to the previously listed underwriting requirements, the 1999 maximum loan size to a single borrower is limited to $3.5 million for loans secured by more than 1,000 acres and $6 million for loans secured by 1,000 acres or less. The maximum size of an individual loan is indexed to the rate of inflation and is adjusted annually by Farmer Mac. Farmer Mac views the history of loan repayment as an indicator of the operation’s profitability and the borrower’s willingness to repay the loan on time. As a result, Farmer Mac has developed loan criteria for seasoned loans.
Farmer Mac views the history of loan repayment as an indicator of the operation's profitability and the borrower's willingness to repay the loan on time. As a result, Farmer Mac has developed loan criteria for seasoned loans. A seasoned loan is a loan that was originated at least 1 year before purchase and has completed at least one full installment of principal and interest payments. The degree of re-underwriting required depends on the age of the loan and its updated LTV. If a loan is less than 5 years old, with an updated LTV of less than 60 percent, and the borrower has paid on time since origination, the loan is eligible for sale to Farmer Mac if it met Farmer Mac standards at the time of origination. If a loan is over 5 years old, with a current LTV equal to or less than 60 percent, and the borrower has paid on time for each of the last 3 years, no underwriting analysis is required and the loan is eligible for sale to Farmer Mac. Seasoned loans with an updated LTV of greater than 60 percent must be re-underwritten to meet all of Farmer Mac's standards. Farmer Mac reserves the right to verify the credit quality and performance characteristics of seasoned loans.

Facility loans are loans made to specialized facilities such as dairies, feedlots, packing facilities, storage units, grow-out facilities (poultry and hog), and processing buildings. To qualify as a specialized agricultural facility, the currently appraised value of the buildings must exceed 60 percent of the total appraised value of the property. All facility loans must comply with the previously listed credit standards for newly originated loans. In addition, they must meet certain requirements depending on the type of facility loan and on whether the borrower has a contractual relationship with product users. For example, the maximum LTV for hog and poultry facilities is 75 percent, whereas the maximum LTV for agribusiness facilities is 65 percent. As another example, if a poultry facility has a production contract or other credit enhancement with a financially strong product user, the Farmer Mac underwriting thresholds are a maximum LTV ratio of 75 percent, a maximum debt-to-asset ratio of 65 percent, and a minimum total debt service coverage ratio of 1.25 to 1. Without a credit enhancement, the thresholds would be a maximum LTV ratio of 65 percent, a maximum debt-to-asset ratio of 50 percent, and a minimum total debt service coverage ratio of 1.35 to 1. The difference in the underwriting requirements reflects the expectation that the loan can be repaid from a financial source not tied to the mortgaged property.

For part-time farm loans (loans designed for borrowers who live on agricultural properties but derive a significant portion of their income from nonfarm employment), the requirements concerning acreage and annual agricultural income are the same as for the full-time program. The property must contain a single-family detached residence that should constitute at least 30 percent of the total appraised value of the property. Because part-time farmers and part-time farms have much in common with conventional residential lending, these loans are underwritten according to conforming residential housing loan standards (i.e., monthly housing expense of no more than 28 percent of gross monthly income and total monthly debt expense of no more than 36 percent of gross monthly income). The maximum LTV for a part-time farm loan is 85 percent; private mortgage insurance is required on any part-time farm loan with an LTV greater than 70 percent. The maximum loan size is limited to $2.3 million, but there is no minimum loan size and no maximum acreage.
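Because the part-time farm program borrows its expense ratios from conforming residential lending, its screens reduce to a few comparisons. The sketch below is a hypothetical illustration of the 28/36 expense ratios, the 85 percent LTV cap, and the private mortgage insurance trigger described above; the names and dollar figures are invented:

    # Hypothetical sketch of the part-time farm loan screens described above:
    # 28/36 expense ratios, 85 percent maximum LTV, and private mortgage
    # insurance required above 70 percent LTV.

    def screen_part_time_farm_loan(monthly_housing_expense,
                                   total_monthly_debt_expense,
                                   gross_monthly_income,
                                   loan_amount, appraised_value):
        ltv = loan_amount / appraised_value
        eligible = (monthly_housing_expense / gross_monthly_income <= 0.28
                    and total_monthly_debt_expense / gross_monthly_income <= 0.36
                    and ltv <= 0.85)
        needs_pmi = eligible and ltv > 0.70
        return eligible, needs_pmi

    # A $250,000 loan on a $320,000 part-time farm has an LTV of about 0.78,
    # so it passes the screens only with private mortgage insurance.
    print(screen_part_time_farm_loan(1_900, 2_400, 7_000,
                                     250_000, 320_000))  # (True, True)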
Pursuant to a congressional request, GAO reviewed the progress that the Federal Agricultural Mortgage Corporation (Farmer Mac) has made in achieving its statutory mission and examined its future viability, focusing on: (1) actions taken by Farmer Mac to promote the development of a secondary market, including the introduction of new programs and products; the standardization of loan processes, including loan documents and underwriting standards; and the use of risk management techniques to facilitate safe and sound secondary market activities; and (2) Farmer Mac's future viability and the potential benefits and costs of a government-sponsored secondary market for agricultural mortgages. GAO noted that: (1) in an attempt to make the secondary market in agricultural mortgages an attractive alternative for lenders, Farmer Mac has: (a) used its enhanced charter authorities to develop new programs and products and streamlined the process for buying loans; (b) standardized certain aspects of the loan processes, such as underwriting; and (c) developed risk management techniques to facilitate safe and sound secondary market activities; (2) while these efforts have increased secondary market activity, Farmer Mac's share of the overall agricultural mortgage market remains small, about 1.2 percent; (3) since its 1996 restructuring, Farmer Mac has introduced programs to directly purchase agricultural mortgages from lenders and to exchange agricultural mortgage-backed securities for mortgage loans held by lenders; (4) Farmer Mac also recently introduced a program (called AgVantage) through which it provides to agricultural lenders loans that are based on agricultural mortgage collateral; (5) Farmer Mac has standardized some aspects of secondary market transactions by requiring participating lenders to attest that their loans meet Farmer Mac underwriting standards; (6) Farmer Mac has not developed standardized loan documents because it believes the cost would be prohibitive given the state-by-state variability of laws governing agricultural mortgages; (7) Farmer Mac purchased futures and options to help manage the interest-rate risk of those loans it held in its portfolio, and its risk management techniques appeared to be generally consistent with industry risk management principles; (8) it appears that Farmer Mac could continue to be viable if: (a) its recent rate of expansion is maintained; (b) it continues to experience rates of return that are comparable to current levels; and (c) economic conditions in the national and agricultural economies remain stable; (9) events such as a less favorable interest-rate environment or declines in the credit quality of agricultural mortgages could reduce Farmer Mac's future profitability; (10) one important determinant of the net benefits generated by Farmer Mac is the extent to which its activities compete with or complement those of other government sponsored enterprises (GSE); and (11) because there is potential for mission overlap among Farmer Mac, Farm Credit System (FCS), and the Federal Home Loan Bank (FHLBank) System, new or expanded activities by one of these entities can affect the benefits generated by the other two.
PPACA established minimum MLR standards for insurers offering group or individual health insurance coverage using a new MLR formula that differs from the way MLRs have traditionally been calculated. To implement the PPACA MLR provisions, HHS issued an interim final rule that provided specific definitions and methodologies to be used in calculating the new MLRs and that addressed other areas, including adjustments to the MLRs to address the circumstances of certain types of plans, and oversight and enforcement. Insurers will begin reporting PPACA MLRs to HHS in June 2012.

In the private health insurance industry, the MLR is a commonly used indicator, measuring the proportion of premium dollars an insurer used for medical claims, as opposed to other functions, such as marketing, actuarial activities, or profit. While many states have minimum MLR standards or MLR reporting requirements, PPACA established federally required minimum MLRs for insurers operating in the individual and group insurance markets. The MLR formula specified in PPACA differs from the way MLRs have traditionally been defined. The traditional MLR is generally calculated by dividing an insurer's medical care claims by premiums. In the PPACA MLR formula, the numerator includes insurers' expenses for activities that improve health care quality—such as patient-centered education and counseling, care coordination, and wellness assessments—in addition to claims. Further, the denominator of the PPACA MLR subtracts from insurers' premiums all federal and state taxes and licensing or regulatory fees (see fig. 1).

In addition to establishing the new MLR formula, PPACA directed NAIC to establish recommended definitions and methodologies for calculating MLRs, subject to certification by the Secretary of HHS. NAIC submitted its recommendations to HHS on October 27, 2010, and HHS issued its interim final rule implementing the MLR requirements in PPACA on December 1, 2010, with an effective date of January 1, 2011. According to HHS, the interim final rule adopted NAIC's recommendations in full and included the following key areas.

• Activities that improve health care quality. These include activities designed to increase the likelihood of desired health outcomes in ways that can be objectively measured. The activities must be primarily designed to (1) improve health outcomes; (2) prevent hospital readmissions; (3) improve patient safety; (4) implement, promote, and increase wellness and health activities; and (5) enhance the use of health care data to improve quality, transparency, and outcomes. Insurers are also allowed to include health information technology (IT) expenses needed to accomplish activities that improve health care quality. Also specified were certain activities that do not qualify as those that improve health care quality, such as provider credentialing.

• Federal and state taxes and licensing or regulatory fees. These include all federal taxes and assessments, excluding taxes on investment income and capital gains.

• Levels of aggregation for MLR reporting. Insurance companies are required to report MLRs separately for their individual, small group, and large group markets for each state in which they are licensed to operate.

• Credibility adjustments. All insurers experience some random variability in their claims, where actual claims experience varies from expected experience. The impact of these deviations is less for health plans with a larger customer base. To help address the disproportionate impact of claims variability on small health plans, adjustments to MLRs are permitted for these plans. Specifically, MLRs for plans with less than 1,000 life years will be considered “noncredible” and will be presumed to meet the MLR requirements; plans with 1,000 to less than 75,000 life years will be considered “partially credible” and may receive an upward adjustment ranging from 1.2 to 8.3 percentage points, depending on size, and a further adjustment if they have high deductibles; and plans with 75,000 life years or more will be considered “fully credible” and will not receive an adjustment. HHS estimates show that for 2011 a small fraction of insurers that offer plans in the individual, small group, or large group markets would be considered fully credible, but these insurers account for the majority of the total life years covered by these types of plans. About half of insurers that offer plans in the small and large group markets and a little less than a third of insurers that offer plans in the individual market would be partially credible and could apply a credibility adjustment.

• Years of data to include in calculating the MLR. Beginning in 2013, insurers' MLRs will be calculated based on a 3-year period of the accumulated experience for the current reporting year and the 2 preceding years. Because insurers will not have 3 years of MLR data for 2011 and 2012, MLRs for these years will be calculated as follows: (1) MLRs for 2011 will be calculated based on insurers' experience for 2011, (2) MLRs for plans that are fully credible in 2012 will be calculated based on their experience for 2012, and (3) MLRs for plans that are partially or noncredible in 2012 will be calculated based on accumulated experience from 2011 and 2012.
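The credibility classification turns only on a plan's life years, so it can be expressed as a simple threshold test. The following sketch is a hypothetical illustration of the cutoffs summarized above, not an implementation from the interim final rule; the function name and labels are invented:

    # Hypothetical sketch of the credibility classification described above.
    # The life-year cutoffs come from the interim final rule as summarized here.

    def credibility_category(life_years):
        if life_years < 1_000:
            return "noncredible"         # presumed to meet the MLR standard
        elif life_years < 75_000:
            return "partially credible"  # upward adjustment of 1.2 to 8.3 points
        else:
            return "fully credible"      # no adjustment

    print(credibility_category(500))     # noncredible
    print(credibility_category(20_000))  # partially credible
    print(credibility_category(80_000))  # fully credible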
HHS's interim final rule also addressed areas that NAIC did not specifically include in its recommendations, which focused primarily on the definitions and methodologies used for calculating the MLR. Some key areas are summarized below.

• Treatment of agents' and brokers' commissions and fees. HHS explicitly listed agents' and brokers' commissions and fees as nonclaims expenses. NAIC did not include any special treatment of these expenses in its recommendation to HHS, but raised concerns about the potential impact of the MLR requirements on the ability of these professionals to continue assisting consumers. HHS officials have continued to discuss this issue with NAIC. Legislation has also since been introduced to deduct agents' and brokers' fees from premiums in the MLR calculation.

• Adjustment to the standard for a state's individual market. In addition to providing for credibility adjustments, PPACA provided HHS with the authority to adjust the MLR standard for the individual market in a state if it determines that the application of the standard may destabilize the individual market in that state. Although NAIC's recommendations to HHS did not specifically address adjustments to the MLR standard for the individual market, NAIC did raise concerns about the ability of many insurers to readily achieve an MLR of 80 percent. In the interim final rule, HHS established a process for states to apply for an adjustment to the MLR standard for the individual market in that state that included the information states must provide in their applications and the criteria HHS would use to assess the applications.
As of July 25, 2011, 12 states and 1 territory had applied to HHS for an adjustment; HHS had granted an adjustment of the MLR in 5 states, did not grant an adjustment in 1 state, and was in the process of reviewing the remaining applications.

• Oversight. HHS is responsible for direct enforcement of the reporting and rebate provisions of the MLR requirements, including that the reports are submitted timely, that the data comply with the definitions in the regulations, and that rebates are paid timely and accurately. The interim final rule provides a framework through which HHS may conduct audits to determine insurers' compliance with the provisions and provides that HHS may, in its discretion, accept the findings of audits that state regulators may conduct of an insurer's MLR reporting and rebate obligations, as long as specified conditions are met. The interim final rule also provides for the imposition of civil monetary penalties if insurers fail to comply with the requirements.

HHS received public comments on the interim final rule from representatives of the insurance industry, consumers, state regulators, and others, covering a wide range of topics, such as the treatment of agents' and brokers' fees, the methodology for determining credibility adjustments, and the treatment of taxes. According to HHS officials, HHS has not determined when it will issue a final rule.

PPACA MLRs will be reported to HHS every June and will reflect insurers' experiences from the previous calendar year. The first set of these data will be submitted to HHS in June 2012, reflecting insurers' experiences from 2011. In April 2011, insurers reported MLRs to NAIC using the PPACA MLR definition based on their 2010 experience. These data are not subject to the PPACA MLR provisions and will not be adjusted to account for credibility or other issues addressed in the provisions.

Traditional MLR averages generally exceeded the PPACA MLR standards in each market, even without the changes in the new PPACA MLR formula that will generally further increase MLRs. However, traditional MLRs also varied among insurers, particularly among those in the individual market and smaller insurers. Since traditional MLRs were calculated differently than they will be under the PPACA requirements, it is difficult to predict, based on these data, what insurers' MLRs would have been using the PPACA formula, or to predict the MLRs that insurers will report in the future.

From 2006 through 2009, insurers' traditional MLR averages generally exceeded the PPACA MLR standards—80 percent for the individual and small group markets and 85 percent for the large group market. This is even without the new PPACA MLR formula definitions or credibility adjustments that will generally further increase MLRs reported under the PPACA requirements. The average traditional MLRs reported for 2006 through 2009 were also relatively stable for all markets (see table 1).

While traditional MLRs on average generally exceeded the PPACA MLR standards from 2006 through 2009, they varied, particularly in the individual market. For example, figure 2 shows that in 2009 traditional MLRs in the individual market were more widely distributed than those in the small and large group markets.
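The difficulty of comparing the two formulas can be seen in a short worked example. The sketch below, using invented dollar amounts, contrasts the traditional calculation with the PPACA calculation described earlier; it shows why adding quality improvement expenses to the numerator and deducting taxes and fees from the denominator generally raises the reported ratio:

    # Hypothetical comparison of the traditional and PPACA MLR formulas
    # described earlier. All dollar amounts are invented for illustration.

    claims = 820_000           # medical claims paid
    quality_expenses = 15_000  # activities that improve health care quality
    premiums = 1_000_000       # premium revenue
    taxes_and_fees = 40_000    # federal/state taxes and licensing or regulatory fees

    traditional_mlr = claims / premiums
    ppaca_mlr = (claims + quality_expenses) / (premiums - taxes_and_fees)

    print(f"traditional MLR: {traditional_mlr:.1%}")  # 82.0%
    print(f"PPACA MLR:       {ppaca_mlr:.1%}")        # 87.0%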
Within the variation in the individual market shown in figure 2, a larger proportion of insurers generally had lower MLRs; that is, they spent a lower percentage of their premiums on medical claims, as compared to insurers in the small and large group markets. Under PPACA, states may request adjustments to the MLR standard for the individual market if application of the standard may destabilize that market, for example, by causing insurers to exit the market such that insurance options are limited in the state.

Annual fluctuations in insurers' traditional MLRs were also greater for insurers in the individual market. For example, 70 percent of insurers in the individual market experienced an average annual change in their traditional MLRs of more than 5 percentage points from 2006 through 2009, compared to 46 percent in the small group market and 39 percent in the large group market. Almost 12 percent of insurers in the individual market averaged annual changes greater than 20 percentage points, compared with about 4 percent of insurers in both the small group and large group markets. Beginning in 2013, insurers will calculate their PPACA MLRs based on 3 years of data, which could partially mitigate the impact of variations often experienced by insurers from year to year.

Traditional MLRs were also more varied for smaller insurers in all three markets from 2006 through 2009. For example, figure 3 shows that in 2009 traditional MLRs for smaller insurers were more widely distributed than those for larger insurers, with a higher percentage of smaller insurers generally reporting lower MLRs. The credibility adjustments in PPACA allow smaller insurers to upwardly adjust their MLRs. Data for figure 3 are aggregated across all states in which an insurer operates. However, since insurers are required to report their PPACA MLRs at the state level, it is likely that in 2011 and beyond more insurers will have less than 75,000 life years in a market at the state level and will be eligible for a credibility adjustment.

The insurers we interviewed said their PPACA MLRs will be affected by changes in the MLR formula, primarily due to the deduction of taxes and fees in the denominator and, to a lesser extent, the addition of expenses for activities to improve health care quality in the numerator. Insurers also said that the PPACA MLR requirement to report MLRs by state will affect their PPACA MLRs. Insurers said they expect the precision of their PPACA MLR data to improve in 2011 and beyond, in part because their 2010 MLRs were based on best estimates.

Most of the insurers we interviewed reported that the deduction of taxes and fees in the denominator of the PPACA MLR formula would contribute to the largest change in 2010 MLRs compared to the traditional MLR formula, but some insurers said the effects of the deductions vary by state and may vary in 2011 and beyond. One insurer told us that the effect of deducting taxes and fees for their 2010 MLRs was more than double the effect of including their expenses for activities to improve health care quality in the numerator. Another insurer told us that the effect of taxes and fees would vary by state because state taxes, such as premium taxes and other state assessments, can vary.
Further, one insurer said that although the deduction of taxes and fees was the largest component affecting its 2010 MLRs, and resulted in increased MLRs, the effect could reverse: if the insurer were to experience a loss in profits in a future year, and therefore a reduction in its income taxes, the deduction could result in a decrease in MLRs. Regulators from several state insurance commissioners' offices also told us that they believed the deduction of taxes and fees in the PPACA MLR formula would likely have the largest impact on MLRs reported by insurers in 2010.

Most of the insurers we interviewed also said that including expenses in the numerator of the PPACA formula for activities to improve health care quality contributed to changes in the 2010 MLRs compared to what they would have been under the traditional formula. However, including these expenses had less of an effect on their MLRs than the deduction of taxes and fees. One insurer estimated that the inclusion of these expenses in the PPACA MLR formula would increase their MLRs by 0.5 percentage points, but this was a fraction of the total estimated 2.0–2.5 percentage point increase in their MLR overall, which the insurer said was primarily due to the deduction of taxes and fees. Another insurer estimated that the impact of including their expenses for quality improvement activities would be less than 2 percentage points, but the deduction of taxes represented the largest component driving the increase in their 2010 MLRs. In addition, two insurers said that including quality improvement expenses would have very little impact on their PPACA MLRs. Examples of activities that improve health care quality that insurers included in their PPACA MLRs were disease management programs, wellness activities, 24-hour nurse phone lines, and care coordination.

Insurers that issue insurance plans in more than one state said that disaggregating MLRs by state will likely result in some variation in their MLRs across states. For example, one insurer said that a higher proportion of their premium dollars is spent on administrative expenses in one of their states because they tend to sell lower-benefit plans in that state, which they said have high administrative costs relative to premiums. While this insurer historically reported a single MLR combining data across two states, they said the disaggregation by state required for the PPACA MLR resulted in lower MLRs in the state with lower-benefit plans compared to the other state. For example, MLRs in this state were 1.5 percentage points lower in the individual market and 4.5 percentage points lower in the small group market than the MLRs in the other state. Another insurer said that prior to the PPACA MLR requirements they priced insurance plans for the small group market to employers located in two states as a single market. When they calculated the PPACA MLRs separately for each state, they noted variations between the two MLRs because medical costs were different in each state.

All of the insurers we spoke with said that their PPACA MLRs for 2011 and beyond will be more precise than the 2010 MLRs reported to NAIC, for several reasons. Because HHS's interim final rule on PPACA MLRs was published in late 2010, insurers told us that they used their best estimates to apply the PPACA definition to experiences incurred earlier in the year.
They said their PPACA MLRs for 2011 and beyond will be more precise because they will not be based on estimates and they will have a full year of data collected according to the new PPACA MLR categories. A regulator from one state insurance commissioner's office described 2010 as a “test” year and said it will help insurers better prepare to report their 2011 MLRs. The regulators also agreed that the 2010 MLR data would not be a clear indicator of insurers' expenses for quality improvement activities because insurers may vary in how precisely they report these expenses.

In addition, some insurers told us they had never reported MLRs both by state and by insurance market prior to the PPACA MLR requirements and were having challenges developing reasonable bases for allocating expenses across states and insurance markets for their 2010 reporting. However, they expected these issues to be resolved for 2011. For example, one insurer told us that their medical quality activities are centralized and apply to all markets, but they must now apportion their expenses for these activities by market, then to each of their insurance companies, and then by state. This insurer said that they implemented a new timekeeping system late in 2010 to better account for the time their staff spend on these activities and to address these allocation issues; they expect to produce more precise data for their 2011 MLRs and beyond.

Most of the insurers we interviewed also told us that their 2010 MLR data may be less precise than data reported in future years because of challenges they had in identifying and allocating health IT expenses. For example, one insurer told us that their health IT is a centralized function that is also used for other lines of insurance business, such as Medicare and Medicaid, which they said are not subject to PPACA MLR requirements. Another insurer said that determining their health IT expenses was less clear relative to the other subcategories of activities to improve health care quality in that it was hard to identify how much of their internal IT system infrastructure uniquely supported the other eligible quality activities. In addition, one insurer that operates in only a single state said that identifying expenses for health IT was challenging when factors such as facilities and employees' salaries had to be considered. However, all of these insurers anticipated that these issues would be largely resolved when they report their 2011 PPACA MLRs.

Almost all of the insurers we interviewed were reducing brokers' commissions and making adjustments to premiums in response to the PPACA MLR requirements. These insurers said that they have decreased or plan to decrease commissions to brokers in an effort to increase their MLRs. One insurer said they started making reductions to their brokers' commissions in the fourth quarter of 2010 for their individual and small group plans to increase their 2011 PPACA MLRs in these markets and, as a result, premiums were not as high as they otherwise would have been. This insurer said these reductions will take effect gradually because they are only being applied to new sales or when groups renew annually. Another insurer lowered commissions to their brokers in the individual market in the first quarter of 2011, such that premiums were increased less than they otherwise would have been, which they expect to result in an increase in their PPACA MLRs for 2011.
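The arithmetic behind this strategy is straightforward: commissions are a nonclaims expense built into premiums, so passing a commission cut through to premiums raises the ratio of claims to premiums. The following sketch uses invented figures to illustrate the effect:

    # Hypothetical illustration of how reducing brokers' commissions can raise
    # an MLR: if lower commissions allow lower premiums while claims are
    # unchanged, the claims-to-premiums ratio rises. All figures are invented.

    claims = 800_000
    premiums_before = 1_000_000
    commission_cut = 20_000  # premium reduction passed through from commissions

    mlr_before = claims / premiums_before
    mlr_after = claims / (premiums_before - commission_cut)

    print(f"MLR before: {mlr_before:.1%}")  # 80.0%
    print(f"MLR after:  {mlr_after:.1%}")   # 81.6%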
In addition, one insurer said they are considering reducing premiums in 2012, partly in response to the PPACA MLR requirements and also in conjunction with a reduction in the number of in-network physicians—the combined strategy would help to lower enrollee premiums and increase their MLRs. A regulator from one state insurance commissioner's office said that some insurers in that state have not applied for premium increases and are making adjustments to lower premiums as a strategy to increase their MLRs, and commented that reducing premiums is the best strategy for insurers to improve value for consumers.

Insurers we interviewed varied on how the PPACA MLR requirements might affect their decisions on activities to improve health care quality. One insurer said that they may reduce their expenses on activities that HHS does not consider quality improvement activities in the PPACA MLR formula, such as retrospective utilization review (a review of a patient's records after the medical treatment has occurred), and increase expenses for activities that qualify, such as prospective utilization review. Another insurer said that they are no longer focusing as much on preauthorization for inpatient admissions because this is not an eligible quality improvement activity in the PPACA MLR formula. This insurer also said the PPACA MLR requirements provide an incentive to spend more money on quality improvement activities, which will affect their decisions on implementing new activities in the future. Conversely, five other insurers told us that the PPACA MLR requirements are not a factor in decisions about their activities to improve health care quality.

Insurers we interviewed also varied on how the PPACA MLR requirements may affect where they do business. For example, one large insurer that operates in multiple states said that they have exited the individual market in one state where they did not have a large market share, in part because of the MLR requirements, and they are evaluating whether to exit this market in other states where it might be difficult to meet the PPACA MLR requirements. One for-profit insurer told us that they plan to exit or stop issuing new business in the individual market in multiple states and to consolidate some of their insurance companies in states in which they did not think they would meet the PPACA MLR requirements. Several other insurers said that the PPACA MLR requirements will not affect decisions on where they do business. For example, one not-for-profit insurer said that serving the communities where they operate is part of their mission and, therefore, they will not be exiting any markets in the states they serve. Another insurer is considering eliminating some of their high- and mid-level deductible plans, but not exiting any markets.

We obtained written comments from HHS, which are reprinted in appendix I. HHS commented that the PPACA MLR provision will increase transparency in the health insurance marketplace and the value consumers receive for their premium dollar. HHS also provided technical comments, which we incorporated as appropriate. Additionally, we provided a draft of this report to NAIC for comment. NAIC responded that the report was fair, factual, and helpful and provided technical comments, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Gerardine Brennan, Assistant Director; George Bogart; Julianne Flowers; Drew Long; Lisa A. Lusk; Linda McIver; Jessica C. Smith; and Janet L. Sparks made key contributions to this report.
To help ensure that Americans receive value for their premium dollars, the Patient Protection and Affordable Care Act (PPACA) established minimum "medical loss ratio" (MLR) standards for health insurers. The MLR is a basic financial indicator, traditionally referring to the percentage of premiums spent on medical claims. The PPACA MLR is defined differently from the traditional MLR. Beginning in 2011, insurers must meet minimum MLR requirements or pay rebates to enrollees. While insurers' first set of data subject to the MLR requirements will be for 2011, and is not due until June 2012, insurers prepared preliminary PPACA MLR data for 2010. GAO examined: (1) what can be learned from the traditional MLR data reported by health insurers prior to PPACA; (2) what factors might affect the MLRs that insurers will report under PPACA; and (3) what changes in business practices, if any, have insurers made or planned to make in response to the PPACA MLR requirements. GAO analyzed premiums, claims, and traditional MLR data for nearly all insurers for 2006-2009 and interviewed a judgmental sample of seven insurers--selected to provide a range based on their size, profit status, and the number of states in which they operated--about their experiences using the PPACA MLR definition. From 2006 through 2009, traditional MLRs on average generally exceeded PPACA MLR standards. This is even without the additional components in the new PPACA MLR that will generally increase MLRs. However, traditional MLRs also varied among insurers. Traditional MLRs within the individual market varied more than those within the small and large group markets, and a larger proportion of individual market insurers generally had lower MLRs. Additionally, traditional MLRs varied more among smaller insurers than among larger insurers in all three markets. Some components of the PPACA MLR requirements may mitigate the implications of some of these variations. The insurers GAO interviewed said their PPACA MLRs will be affected by changes in the MLR formula and their ability to provide more precise data in 2011 and beyond. Most of these insurers reported that the deduction of taxes and fees in the PPACA MLR formula would contribute to the largest change in their 2010 MLRs. Including expenses for activities to improve health care quality was also cited as a factor affecting insurers' MLRs but to a lesser extent. In addition, because insurers had limited time to respond to HHS's interim final rule on PPACA MLRs, which was published in late 2010, they said that their 2010 MLRs were based in part on best estimates. Insurers said they expect their ability to provide more precise PPACA MLR data will improve in 2011 and beyond. Most of the insurers GAO interviewed were reducing brokers' commissions and making adjustments to premiums, as well as making changes to other business practices, in response to the PPACA MLR requirements. Almost all of the insurers said they had decreased or planned to decrease commissions to brokers in an effort to increase their MLRs. Insurers varied on how the PPACA MLR requirements might affect their decisions to implement activities to improve health care quality. While one insurer said that their decision to implement new activities would be affected by whether or not an activity could be included as a quality improvement activity in the PPACA MLR formula, other insurers said that the PPACA MLR requirements are not a factor in such decisions.
Insurers also differed on how the PPACA MLR requirements may affect where they do business. One insurer said that they have considered exiting the individual market in some states in which they did not expect to meet the PPACA MLR requirements, while several other insurers said that the PPACA MLR requirements will not affect where they do business. In commenting on a draft of this report, the Department of Health and Human Services (HHS) said that the MLR provision will increase transparency in the insurance market and value for consumers' premiums.
A reverse mortgage is a loan against the borrower's home that the borrower does not need to repay for as long as the borrower meets certain conditions. These conditions, among others, require that borrowers live in the home, pay property taxes and homeowners' insurance, maintain the property, and retain the title in their name. Reverse mortgages typically are "rising debt, falling equity" loans, in which the loan balance increases and the home equity decreases over time. As the borrower receives payments from the lender, the lender adds the principal and interest to the loan balance, reducing the homeowner's equity. This is the opposite of what happens in forward mortgages, which are characterized as "falling debt, rising equity" loans. With forward mortgages, monthly loan payments made to the lender add to the borrower's home equity and decrease the loan balance (see fig. 1).

There are two primary types of reverse mortgages, HECMs and proprietary reverse mortgages. The Housing and Community Development Act of 1987 (P.L. 100-242) authorized HUD to insure reverse mortgages and established the HECM program. According to industry officials, HECMs account for more than 90 percent of the market for reverse mortgages. Homeowners aged 62 or older with a significant amount of home equity are eligible, as long as they occupy the house as their principal residence, the house is a single-family residence, and they are not delinquent on any federal debt. If the borrower has any remaining balance on a forward mortgage, this generally must be paid off first (typically with an up-front draw from the reverse mortgage). In addition, the condition of the house must meet HUD's minimum property standards, but a portion of the HECM can be set aside for required repairs. The borrower makes no monthly payments, and there are no income or credit requirements to qualify for the mortgage. Lenders have offered non-HECM, or proprietary, reverse mortgages in the past, but these products have largely disappeared from the marketplace due, in part, to the lack of a secondary market for these mortgages. Typically, proprietary reverse mortgages have had higher loan limits than HECMs but paid out a lower percentage of the home value to borrowers.

The volume of HECMs made annually has grown from 157 loans in fiscal year 1990 to more than 112,000 loans in fiscal year 2008. The HECM program has experienced substantial growth, as the number of HECMs insured by FHA has nearly tripled since 2005 (see fig. 2). Additionally, the potential liability of loans insured by FHA has doubled in the last 2 years (see fig. 3). The potential liability is the sum of the maximum claim amounts for all active HECMs since the program's inception. Finally, recent years have seen a rapid increase in the number of lenders participating in the HECM program (see fig. 4). However, the bulk of HECM business is concentrated among a relatively small percentage of lenders. In fiscal year 2008, roughly 80 percent of all HECMs were originated by fewer than 300 lenders, or about 10 percent of HECM lenders.

Lenders can participate in the HECM market through wholesale or retail channels. Wholesale lenders fund loans originated by other entities, including mortgage brokers and loan correspondents. Retail lenders originate, underwrite, and close loans without reliance on brokers or loan correspondents.
Most lenders participate in the HECM market through retail lending, although some participate through the wholesale process, and a few have both a retail and a wholesale HECM business. There is a secondary market for HECMs, as most lenders prefer not to hold the loans on their balance sheets. Fannie Mae has purchased 90 percent of HECM loans and holds them in its portfolio. In 2007, Ginnie Mae developed and implemented a HECM Mortgage Backed Security product, in which Ginnie Mae-approved issuers pool and securitize a small proportion of HECMs. Fannie Mae's and Ginnie Mae's involvement in the HECM secondary market helps to provide liquidity so that lenders can continue offering HECM loans to seniors.

The amount of loan funds available to the borrower is determined by several factors (see fig. 5). First, the loan amount is based on the "maximum claim amount," which is the highest sum that HUD will pay to a lender for an insurance claim on a particular property. It is determined by the lesser of the appraised home value or the HECM loan limit. In the past year, Congress has raised the HUD loan limit for HECMs twice: HERA established for the first time a national limit for HECMs, which was set at $417,000, and as a result of ARRA, the national limit was raised again to $625,500 through December 31, 2009. Prior to HERA, the loan limit for HECMs varied by location and generally was set at 95 percent of the local area median house price.

Second, to manage its insurance risk, HUD limits the loan funds available to the borrower by applying a "principal limit factor" to the maximum claim amount. HUD developed a principal limit factor table using assumptions about loan termination rates—which are influenced by borrower mortality and move-out rates—and long-term house price appreciation rates, and indexed the table by (1) the borrower's age and (2) the expected interest rate—the 10-Year Treasury rate plus the lender's margin. The lender determines which factor to use by inputting the borrower's current age and the current interest rate information. The older the borrower, the higher the loan amount; the greater the expected interest rate of the loan, the smaller the loan amount.

Third, the funds available to the borrower are further reduced by a required servicing fee set-aside and by the up-front costs (which include a mortgage insurance premium and the origination fee) if the borrower chooses to finance them. HUD allows lenders to charge up to $35 as a monthly HECM servicing fee. The lender calculates the servicing fee set-aside by determining the net present value of the monthly servicing fees that the borrower would pay between loan origination and when the borrower reaches age 100. The set-aside limits the loan funds available but is not added to the loan balance at origination. If borrowers choose to finance up-front costs as part of the loan, the loan funds available are reduced by these costs.

Borrowers incur various costs when obtaining a HECM. HUD allows borrowers to finance both up-front and long-term costs through the loan, which means they are added to the loan balance.
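These steps can be illustrated with a simple calculation. The sketch below is a hypothetical example only: the principal limit factor, set-aside, and cost figures are invented, since actual factors come from HUD's tables (indexed by borrower age and expected interest rate) and actual fees depend on the loan:

    # Hypothetical sketch of how a HECM borrower's available loan funds are
    # determined, as described above. The principal limit factor of 0.60 is
    # an invented value, not from HUD's tables.

    home_value = 250_000
    loan_limit = 417_000           # national HECM limit under HERA
    principal_limit_factor = 0.60  # assumed for illustration only

    max_claim_amount = min(home_value, loan_limit)
    principal_limit = principal_limit_factor * max_claim_amount

    servicing_set_aside = 4_500    # assumed NPV of monthly servicing fees
    up_front_costs = 9_500         # e.g., $4,500 origination fee plus
                                   # $5,000 up-front premium, if financed

    available_funds = principal_limit - servicing_set_aside - up_front_costs
    print(f"maximum claim amount: ${max_claim_amount:,}")    # $250,000
    print(f"funds available:      ${available_funds:,.0f}")  # $136,000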
Origination fee: Prior to HERA, HECM borrowers were charged an origination fee equal to 2 percent of the maximum claim amount, with a minimum fee of $2,000. Since the implementation of HERA, HECM borrowers are charged an origination fee calculated as 2 percent of the maximum claim amount up to $200,000 plus 1 percent of the maximum claim amount over $200,000, with a maximum fee of $6,000 and a minimum fee of $2,500.

Mortgage insurance premium: Borrowers are charged an up-front mortgage insurance premium equal to 2 percent of the maximum claim amount. While the maximum claim amount is always higher than the initial amount a borrower can receive in HECM payments from the lender, FHA charges the mortgage insurance premium based on this amount because the loan balance (with accumulated interest and fees) could exceed the amount a borrower receives in payments and potentially reach the maximum claim amount. Additionally, borrowers are charged a monthly mortgage insurance premium on their loan balance at an annual rate of 0.5 percent.

Interest: Borrowers are charged interest, which generally includes a base interest rate plus a fixed lender margin rate, on the loan balance. Lenders can offer HECMs with fixed, annually adjustable, or monthly adjustable base interest rates. The adjustable rates can be tied to either the 1-Year Constant Maturity Treasury rate or the 1-Year London Interbank Offered Rate index. Most HECMs have adjustable interest rates.

HECM counseling fee: The HECM program requires prospective borrowers to receive counseling to ensure an understanding of the loan. HUD allows counseling providers to charge borrowers up to $125 for HECM counseling.

Loan servicing fee: Borrowers pay a monthly servicing fee of up to $35.

Closing costs: HECMs also have other up-front closing costs, such as appraisal and title search fees.

FHA's insurance for HECMs protects borrowers and lenders in four ways. First, lenders can provide borrowers with higher loan amounts than they could without the insurance. Second, when the borrower is required to repay the loan to the lender, if the proceeds from the sale of the home do not cover the loan balance, FHA will pay the lender the difference. Third, if the lender is unable to make payments to the borrower, FHA will assume responsibility for making these payments. Fourth, if the loan balance reaches 98 percent of the maximum claim amount, the lender may assign the loan to FHA, and FHA will continue making payments to the borrower if the borrower has remaining funds in a line of credit or still is receiving monthly payments.

To cover expected insurance claims, FHA charges borrowers insurance premiums, which go into an insurance fund. HECM loans originated from the inception of the program through 2008 are supported by FHA's General Insurance and Special Risk Insurance Fund, which includes a number of FHA mortgage insurance programs for single-family and multifamily housing and hospitals. Pursuant to HERA, FHA moved the HECM program and other insurance programs for single-family housing into FHA's Mutual Mortgage Insurance Fund.

FCRA requires federal agencies that provide loan guarantees to estimate the expected cost of these programs by estimating their future performance and reporting the costs to the government in their annual budgets. Under credit reform procedures, the cost of loan guarantees, such as mortgage insurance, is the net present value of all expected cash flows, excluding administrative costs. This is known as the credit subsidy cost.
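A credit subsidy rate can be illustrated with a short calculation. The following sketch is a hypothetical example, not an FHA or OMB model: it discounts an invented stream of expected net cash flows for a cohort of guarantees and expresses the resulting net present value as a share of the loans guaranteed (the 5 percent discount rate is also an assumption):

    # Hypothetical sketch of a credit subsidy rate under credit reform
    # procedures: the net present value of expected cash flows as a share
    # of loans guaranteed. All values are invented for illustration.

    discount_rate = 0.05
    # Expected net cash flows (inflows positive) per year for one cohort:
    # premiums early in the cohort's life, claim payments later.
    net_cash_flows = [20_000, 10_000, -5_000, -25_000, -20_000]

    npv = sum(cf / (1 + discount_rate) ** t
              for t, cf in enumerate(net_cash_flows, start=1))

    loans_guaranteed = 1_000_000
    subsidy_rate = -npv / loans_guaranteed  # positive rate means expected cost

    print(f"credit subsidy rate: {subsidy_rate:.2%}")  # about 1.24% here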
For loan guarantees, cash inflows consist primarily of fees and premiums charged to insured borrowers and recoveries on assets, and cash outflows consist mostly of payments to lenders to cover the cost of claims. Annually, agencies estimate credit subsidy costs by cohort, or all the loans the agency is committing to guarantee in a given fiscal year. The credit subsidy cost can be expressed as a rate. For example, if an agency commits to guarantee loans totaling $1 million and has estimated that the present value of cash outflows will exceed the present value of cash inflows by $15,000, the estimated credit subsidy rate is 1.5 percent. When estimated cash inflows exceed estimated cash outflows, the program is said to have a negative credit subsidy rate. When estimated cash outflows exceed estimated cash inflows, the program is said to have a positive credit subsidy rate—and therefore requires appropriations. Generally, agencies are required to produce annual updates of their subsidy estimates—known as re-estimates—of each cohort based on information about the actual performance and estimated changes in future loan performance. This requirement reflects the fact that estimates of subsidy costs can change over time. Beyond changes in estimation methodology, each additional year provides more historical data on loan performance that may influence estimates of the amount and timing of future claims. Economic assumptions also can change from one year to the next, including assumptions on home prices and interest rates. FCRA recognized the difficulty of making subsidy cost estimates that mirrored actual loan performance and provides permanent and indefinite budget authority for re-estimates that reflect increased program costs. In combination, HERA’s changes to the HECM loan limit and origination fee calculation have had a positive to neutral influence on most lenders’ plans to start or continue offering HECMs. Other factors have had varying influences on lenders’ planned participation. Current economic conditions have had a moderate upward influence on lenders’ plans; however, secondary market conditions have had a downward influence on about one-third of lenders’ plans to start or continue offering HECMs. Finally, the HERA changes have not influenced most lenders’ plans to offer proprietary—non-HECM—products. HERA’s changes to the HECM program have had varying effects on HECM lenders’ planned participation in the HECM market. On the basis of questionnaire responses from a random sample of HECM lenders, we estimate that for 50 percent of lenders, the combined effect of these changes has had an upward influence on their plans to start or continue to offer HECMs (see fig. 6). For 42 percent of lenders, the combination of HERA’s changes to the origination fee and loan limits for the HECM program have had little to no influence on their plans to offer HECMs, while for 8 percent of lenders, HERA’s changes have had a downward influence. Some industry participants we interviewed stated that the changes were a good compromise that benefited borrowers by limiting the origination fee and increasing the loan limit, thereby increasing the money borrowers could receive from a HECM. Additionally, officials at NRMLA and MBA said the changes benefited lenders by making the product more attractive to individuals with higher-value homes. Taken separately, the two HERA provisions have had differing effects on lenders’ plans to offer HECMs. 
We estimate that for about 70 percent of lenders, HERA's increase in HECM loan limits has had an upward influence on the likelihood of offering HECMs. The loan limit increase has had little to no influence on almost all of the remaining lenders' plans to offer HECMs. We estimate that 86 percent of lenders expect that HERA's creation of a single national loan limit of $417,000 will somewhat or greatly increase consumer demand for HECMs. Although the increase in the loan limit has generally had an upward influence on lenders' plans, the change to the calculation of the origination fee has had a different effect. We estimate that changing how the fee is calculated has had a downward influence on plans to offer HECMs for 22 percent of HECM lenders, little to no influence for 65 percent of lenders, and an upward influence for 11 percent of lenders. Consistent with these views, 65 percent of lenders expect the change in origination fee to have no effect on consumer demand for HECMs. An estimated 26 percent of lenders expect the change in the origination fee to increase consumer demand, while only a few lenders expect the change to decrease consumer demand.

We estimate that only 2 percent of HECM lenders do not plan to continue to offer HECMs. Of the respondents in our sample, three lenders indicated that they did not plan to continue offering HECMs. None of these were large HECM lenders, as they each originated from 40 to 160 HECMs in fiscal year 2008. Each of these lenders participated in the HECM market solely through their retail business. These three lenders varied in the amount of time that they had offered the HECM product. A representative of one lender indicated that HERA's changes to the loan limits and origination fee had a great upward influence on the likelihood that it would offer HECMs, but the lender nonetheless planned to discontinue offering HECMs. The other two lenders indicated that HERA and other economic factors had little to no influence on their decision to discontinue offering HECMs, and one of these lenders noted on the survey that it had discontinued offering HECMs before the enactment of HERA.

As part of our survey, we asked lenders how various economic and legislative factors influenced their plans to start or continue offering HECMs. Two factors had an upward influence on most lenders' plans to offer HECMs in 2009. For an estimated 67 percent of HECM lenders, the implementation of the HECM for Purchase program (authorized by HERA) has had an upward influence on their plans to offer HECMs, and it has had little to no influence on almost all of the remaining lenders' HECM origination plans. Some industry participants told us that the HECM for Purchase program likely will make HECMs attractive to a broader range of seniors. Additionally, current economic conditions have had an upward influence on the plans to offer HECMs for about 52 percent of lenders. NRMLA officials explained that seniors are seeking additional revenue because they have less available income from traditional sources, such as interest and dividend payments and retirement accounts, which is partially attributable to poor economic and financial market conditions. Additionally, two other factors have had an upward influence on some lenders' plans to offer HECMs.
For about one-third of lenders, both (1) reduced opportunities in the forward mortgage market and (2) HERA's prohibition on the participation of non-FHA-approved entities in the origination of HECMs have had a moderate or great upward influence on their plans to offer HECMs.

In contrast, three factors had more of a downward influence on some lenders' planned participation in the HECM market. First, we estimate from our survey that house price trends have had a downward influence on the HECM origination plans of 38 percent of lenders; however, house price trends had little or no influence on plans for about 50 percent of lenders. Some industry participants told us that the recent decline in house prices has prevented some seniors from obtaining a HECM either because they lack the equity in their home to qualify for the loan or because they would not receive enough funds from the HECM to have any cash remaining after they deduct HECM fees and pay off any existing mortgage debt. Second, we estimate that the availability of secondary market options has had a downward influence on the plans of about one-third of lenders to offer HECMs. The secondary market for HECMs plays an important role in maintaining the availability of loans because lenders prefer not to hold HECMs on their balance sheets. There are currently two primary options in the secondary market—Fannie Mae and Ginnie Mae. Fannie Mae officials stated that Fannie Mae bought and held more than 90 percent of HECMs in its portfolio in 2008 and was the principal secondary market purchaser of HECM loans. However, Fannie Mae's regulator—the Federal Housing Finance Agency—recently required it to reduce the mortgage assets it holds in portfolio. Fannie Mae officials told us that, as a result, they are making changes to their HECM business to decrease their share of the market, which they expect will attract other investors to the secondary market for HECMs. Recently, Fannie Mae lowered the price it pays lenders for HECMs and implemented a "live pricing" system that requires lenders to commit to the volume of HECMs they will sell to Fannie Mae. We estimate that approximately 90 percent of lenders viewed secondary market pricing requirements and the transition to live pricing as important factors in recent margin rate increases on HECMs. Fannie Mae officials explained that as the price they pay lenders for HECMs falls, the margin rate the lenders charge consumers generally increases. Some lenders we surveyed noted that margin rate increases stemming from pricing changes could make HECMs less attractive to borrowers because they would not be able to obtain as much cash from their HECM. Some lenders noted that live pricing complicates their relationship with borrowers because the interest rate can change between loan application and closing, which may result in the senior being able to receive less money from their HECM than originally quoted.

Ginnie Mae developed and guarantees a HECM Mortgage Backed Security (HMBS) that aims to expand the availability of HECMs from multiple lenders, reduce borrowing costs, and create a broader secondary market for HECM loans. Ginnie Mae officials stated that they were poised to take on extra volume in the HECM secondary market by guaranteeing securities issued by lenders. AARP officials noted that Ginnie Mae's HMBS product could help introduce competition into the secondary market for reverse mortgages, lowering margin rates for seniors.
However, industry participants point to several issues with the Ginnie Mae product that could limit its appeal to lenders. First, Ginnie Mae requires HMBS issuers to buy back the HECM when the loan balance reaches 98 percent of the loan’s maximum claim amount. Second, issuers are required to pay interest shortfalls to investors when the loan is terminated mid-month. Some HECM lenders have noted that both of these provisions expose them to additional risk on the loan, compared with selling the HECM outright, as they did when selling to Fannie Mae. The third factor with a downward influence is HERA’s prohibition on lender-funded counseling, which an estimated 29 percent of lenders said has had a downward influence on their plans to offer HECMs. Industry participants said that this prohibition is a problem for the HECM industry because counseling is required for borrowers to obtain a HECM, but borrower-paid counseling can be a deterrent for seniors who are still deciding whether they want a HECM or who have limited financial means to pay for counseling. In contrast to these comments, we estimate that the prohibition on lender-funded counseling had little or no influence on the plans of 60 percent of lenders. Our survey of HECM lenders asked about two other factors—HERA’s restrictions on selling other financial products in conjunction with HECMs and the current availability of wholesale lending partners—that could influence lenders’ plans to start or continue to offer HECMs. In general, these factors had little or no influence on lenders’ plans (see fig. 6). In 2008, several non-HECM reverse mortgages—referred to as jumbo or proprietary reverse mortgages—were available in the marketplace. Proprietary reverse mortgages offered loan limits that were greater than the HECM loan limit. For example, Financial Freedom, a large reverse mortgage lender, offered a product called the Cash Account Advantage Plan, which was not subject to the HECM loan limits and in some cases provided more cash than a HECM to borrowers with higher-value homes. Based on our survey results, we estimate that approximately 43 percent of HECM lenders made non-HECM reverse mortgages in 2008. However, toward the end of 2008, almost all of the non-HECM reverse mortgage products were withdrawn from the market because of the lack of a secondary market to support them. Nonetheless, from our survey results, we estimate that 36 percent of HECM lenders plan to offer a non-HECM reverse mortgage in 2009. We estimate that HERA’s changes to the calculation of the origination fee and the loan limit have had little or no influence on 68 percent of lenders’ plans to originate non-HECM reverse mortgages (see fig. 7). However, for an estimated 29 percent of HECM lenders, HERA’s change to the loan limits has had an upward influence on their plans to offer non-HECM reverse mortgages. Additionally, we estimate that for 32 percent of lenders, the implementation of the HECM for Purchase program had an upward influence on their plans to offer these loans. We estimate that current economic conditions have had an upward influence on plans to offer non-HECM reverse mortgages for 29 percent of lenders, little to no influence for 34 percent of lenders, and a downward influence for 17 percent of lenders. Our survey of HECM lenders asked about several other factors (see fig. 7) that could influence lenders’ plans to offer a non-HECM reverse mortgage product in 2009. Generally, these factors have had little or no influence on lenders’ plans.
Our survey results did not indicate that secondary market conditions had a downward influence on the plans of most lenders. However, several lenders we interviewed said that while they hoped to offer a non-HECM reverse mortgage in 2009, their ability to do so would depend on the availability of funding in the secondary market. HERA’s provisions will affect borrowers in varying ways, depending primarily on home value and whether HERA’s increase in the loan limit will change the maximum claim amount of the loan. HERA’s changes to HECM origination fees and loan limits are likely to change the up-front costs (origination fee and up-front mortgage insurance premium) and the loan funds available for most new borrowers. Our analysis of data on borrowers who took out HECMs in 2007 shows that had the HERA provisions been in place, most borrowers would have paid less or the same amount in up-front costs, and most would have had more or the same amount of loan funds available. Additionally, about 28 percent of HECM borrowers in 2007 would have seen an increase in maximum claim amount due to HERA’s increase in the loan limit, which would have meant more loan funds available for nearly all of these borrowers. Borrowers also may be affected by other consequences of the HERA provisions, such as margin rate increases and changes to the funding of HECM counseling. The net effect of the HERA provisions on an individual borrower’s total up-front costs depends on house value, the local loan limit prior to HERA, and the new loan limit. HECM up-front costs consist primarily of the up-front mortgage insurance premium and the origination fee, both of which are calculated as a proportion of the maximum claim amount. Most borrowers are likely to see changes in origination fees due to HERA. Generally, those with house values greater than the prior HECM loan limit in their area will see changes in the up-front mortgage insurance premium. Borrowers fall into two categories, based on whether their maximum claim amount changes:

Maximum claim amount does not change: For borrowers whose houses are valued at or less than the prior HECM loan limit in their area, the maximum claim amount does not change. Therefore, for these borrowers, the mortgage insurance premium (which is calculated based on the maximum claim amount) also does not change. However, the origination fee may change depending on the value of the house. A borrower whose house is valued at less than $125,000 should expect up to a $500 increase in up-front costs due to the increase in the minimum origination fee from $2,000 to $2,500. A borrower whose house is valued at $125,000 to $200,000 would see no change in up-front costs because the fee remains 2 percent of the maximum claim amount, as it was before HERA. A borrower whose house is valued at greater than $200,000 should expect a decrease in up-front costs due to the decreased origination fee rate for amounts greater than $200,000 and the fee cap of $6,000. For an example, see borrower D, whose house value is $300,000, in table 1.

Maximum claim amount increases: For borrowers whose maximum claim amount increases because their house values are greater than the prior local HECM loan limit, the change to up-front costs is more complex. All borrowers in this category will pay more in up-front mortgage insurance premiums because premiums are calculated based on the entire maximum claim amount. However, some borrowers will pay more in origination fees, while others will pay less.
When combining these two costs, the total up-front costs could increase, decrease, or remain the same. For example, borrowers A, B, and C in table 1 each own houses valued at $300,000 that are located in counties in which prior HECM loan limits varied from $200,000 to $290,000. Each borrower would see a different effect on up-front costs. See appendix III for a more complete explanation (and an illustrative calculation sketch) of how up-front costs will change for borrowers with different characteristics. To illustrate the potential effect of the HERA provisions on borrowers, we compared the actual maximum claim amounts, up-front costs (origination fee plus the up-front insurance premium), and loan funds available for HECM borrowers in 2007 to what those amounts would have been had the HERA provisions been in place. Overall, we found that nearly 27 percent of borrowers would have paid more in up-front costs, 46 percent would have paid less, and 27 percent would have paid the same (see fig. 8). The amount and direction of the changes to up-front costs and loan funds available primarily depended on house value and whether a borrower would have benefited from an increase in the loan limit (about 28 percent of 2007 HECM borrowers’ homes were valued at more than the prior loan limit, so these borrowers would have seen their maximum claim amounts increase because of HERA’s increase in the loan limit). Our analysis of up-front costs, broken down into their two components, is as follows:

Origination fees: About 24 percent of 2007 borrowers would have paid more in origination fees, 49 percent would have paid less, and 27 percent would have paid the same amount. Increases in origination fees were due either to the $500 increase in the minimum origination fee (about 17 percent of all borrowers) or to the increased loan limits (about 6 percent of all borrowers). Borrowers who would have paid less in origination fees had maximum claim amounts greater than $200,000, which means they would have benefited from the decrease in the origination fee rate for the portion of the maximum claim amount greater than $200,000, the $6,000 origination fee cap, or both.

Up-front mortgage insurance premium: Twenty-eight percent of 2007 HECM borrowers would have paid more in up-front mortgage insurance premiums due to increases in the loan limit, while 72 percent of borrowers would have paid the same amount, generally because the size of their loans was limited by the value of their homes and not the HECM loan limit.

Changes in the loan limits and up-front fees would have affected the loan funds available to most 2007 borrowers. Borrowers whose maximum claim amount would have increased because of an increase in the loan limit would have paid a higher up-front mortgage insurance premium, regardless of how much of their available loan funds they chose to access. Because this analysis assumed that HECM borrowers financed the up-front costs in the loan, any increase or decrease in the up-front costs affects the amount of loan funds available to them. Our analysis shows that had the HERA provisions been in place at origination for 2007 HECMs, approximately 56 percent of borrowers would have had more loan funds available, 17 percent would have had less loan funds available, and 27 percent would have had the same amount available (see fig. 8). Specifically:
• 28 percent of borrowers would have had more loan funds available, primarily due to the increase in the loan limit;
• about 28 percent of borrowers would have had more loan funds available due solely to a decrease in their up-front fees;
• 17 percent of borrowers would have had less loan funds available due solely to an increase in their up-front fees; and
• 27 percent of borrowers would have experienced no change in the amount of loan funds available because their up-front fees and loan limits remained the same.

Additionally, figure 8 shows the number of 2007 borrowers within the various categories, and figure 9 shows the average changes in up-front costs and loan funds available for each category of borrower. Borrowers with the largest increases in their maximum claim amounts on average would have the largest percentage increases in up-front costs (see fig. 9). Borrowers with no increase in their maximum claim amount who have a change in up-front costs will have a corresponding change in loan funds available that is equal in size but opposite in direction. For example, a borrower with a $200 decrease in up-front costs will have a $200 increase in loan funds available, and a borrower with a $300 increase in up-front costs will have a $300 decrease in loan funds available (the short sketch at the end of this discussion makes the offset explicit). Increased lender margin rates stemming from HERA’s change to the origination fee calculation could also reduce the loan funds available to borrowers. At loan origination, the expected interest rate HUD uses to determine the portion of the maximum claim amount that will be made available to the borrower is the 10-year Treasury rate plus the fixed lender margin rate. Our survey of HECM lenders indicates that some lenders have raised their margin rates modestly to compensate for HERA’s limitations on the origination fee; however, we did not receive a sufficient number of responses to reliably estimate the median increase in margin rate for the population. To illustrate the impact of a modest increase in the margin rate on borrowers, we applied a 0.25 percentage point increase to borrowers who took out HECMs in 2007. We found that these borrowers would have seen a 3 percent average decrease in loan funds available as a result of the higher margin rate. A comparison of HUD data on HECMs originated within the first 3 months of HERA’s implementation with data from the same 3 months of the prior year indicates that average margin rates were higher after HERA but that overall average HECM expected interest rates were essentially the same. This outcome resulted from declines in 10-year Treasury rates offsetting increases in lender margin rates. In addition, more borrowers, as well as prospective borrowers who ultimately do not obtain a HECM, may need to pay counseling fees. Provisions in HERA prohibit lenders from paying for this counseling but allow HUD to use a portion of HECM mortgage insurance premiums for this purpose. HUD officials said that they have not exercised this authority because the resulting reduction in premium income would adversely affect the subsidy rate of the program and potentially require appropriations. Because HUD did not implement this provision, more borrowers and prospective borrowers may need to pay counseling fees themselves. For borrowers who eventually obtain a HECM, the fee can be financed in the loan. Prospective borrowers who do not qualify for a HECM or who choose not to proceed with the loan after counseling may have to pay for counseling out of pocket.
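Because the analysis assumes that up-front costs are financed in the loan, the offset between cost changes and available funds is mechanical. The following minimal sketch (ours, with a hypothetical principal limit; not GAO’s or HUD’s analysis code) makes it explicit:

    # Our illustration: when up-front costs are financed in the loan, a
    # change in those costs moves loan funds available by the same amount
    # in the opposite direction, holding the principal limit fixed.
    def loan_funds_available(principal_limit, up_front_costs):
        # Funds the borrower can draw after financing the up-front costs.
        return principal_limit - up_front_costs

    PRINCIPAL_LIMIT = 150_000  # hypothetical principal limit
    base = loan_funds_available(PRINCIPAL_LIMIT, 8_000)
    lower_costs = loan_funds_available(PRINCIPAL_LIMIT, 8_000 - 200)
    higher_costs = loan_funds_available(PRINCIPAL_LIMIT, 8_000 + 300)
    print(lower_costs - base)   # 200  -> $200 more funds available
    print(higher_costs - base)  # -300 -> $300 less funds available

When the maximum claim amount also changes, the principal limit changes as well, which is why borrowers in the first category above could gain funds even while paying higher fees.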
On the counseling side, HUD’s recent announcement that it will provide approximately $8 million in grant funds for HECM counseling in 2009 may mitigate any negative impact the HERA changes may have on seniors’ ability to obtain HECM counseling. HUD has taken or planned steps to enhance its analysis of the HECM program’s financial performance. However, HUD’s recent estimates of program costs indicate weaker performance than previously estimated, primarily due to more pessimistic assumptions about long-term house price trends. Additionally, higher loan limits enacted under HERA and the American Recovery and Reinvestment Act of 2009 (ARRA) could increase HUD’s financial risk. To estimate the cost of the HECM program, HUD uses a model to project the cash inflows (such as insurance premiums paid by borrowers) and cash outflows (such as claim payments to lenders) for all loans over their expected duration. HUD’s model is a computer-based spreadsheet that incorporates assumptions based on historical and projected data to estimate the amount and timing of insurance claims, subsequent recoveries from these claims, and premiums and fees paid by borrowers. These assumptions include estimates of house price appreciation, interest rates, average loan size, and the growth of unpaid loan balances. HUD inputs its estimated cash flows into OMB’s credit subsidy calculator, which calculates the present value of the cash flows and produces the official credit subsidy rate for a particular loan cohort. A positive credit subsidy rate means that the present value of the cohort’s expected cash outflows is greater than the inflows, and a negative credit subsidy rate means that the present value of the cohort’s expected cash inflows is greater than the outflows. To budget for a positive subsidy, an agency must receive an appropriation. HUD also uses the cash flow model to annually estimate the liability for loan guarantees (LLG), which represents the net present value of future cash flows for active loans, taking into account the prior performance of those loans. HUD estimates the LLG for individual cohorts as well as for all cohorts combined. The LLG is a useful statistic because unusual fluctuations in it can alert managers to financial risks that require further attention. HUD in recent years has enhanced its cash flow model for the HECM program. In 2007, the HUD Office of Inspector General’s (OIG) annual audit of FHA’s financial statements cited a material weakness in the cash flow model FHA used to generate credit subsidy estimates for the HECM program. Among other things, the audit noted technical errors in the model, significant discrepancies between projected and actual cash flows, and a lack of supporting documentation for certain modeling decisions. Partly in response to the OIG audit, HUD made a number of improvements to both the model and its supporting documentation, and in 2008 the HUD OIG removed the material weakness designation. For example, HUD improved the methodology it uses for its cash flow model. In the past, HUD used historical averages for termination and recovery rates for projecting cash flows. In 2008, HUD began to incorporate forecasts of national house price appreciation and interest rates from IHS Global Insight, an independent source for economic and financial forecasts, into its modeling. Additionally, HUD improved the way it estimates the growth of unpaid principal balances, which HUD uses to calculate the LLG. In the past, HUD used both active and terminated loans to generate this estimate.
Since 2008, HUD has included only active loans to generate this estimate, which is more appropriate because the LLG represents the expected future cash flows of currently active loans. HUD also developed a master database of loan-level information to support the HECM cash flow model. Previously, HUD staff had to draw on data from multiple sources, which increased the chance of analytical errors. Finally, HUD made a number of enhancements to its documentation of estimation processes, including how macroeconomic projections are incorporated into the cash flow model. HUD plans to subject the HECM program to an annual actuarial review, which should provide additional insight into the program’s financial condition. Such a review would likely assess whether program reserves and funding were sufficient to cover estimated future losses, as well as the sensitivity of this analysis to different economic and policy assumptions. Historically, the HECM program has not had a routine actuarial review because it was supported by the General Insurance and Special Risk Insurance (GI/SRI) Fund, which does not have such a review requirement. However, as of fiscal year 2009, the HECM program is in the Mutual Mortgage Insurance (MMI) Fund, which is statutorily required to receive an independent actuarial review each year and includes FHA’s largest mortgage insurance program. HUD officials told us that future actuarial reviews of the MMI Fund will include a separate assessment of the HECM program. HUD also is considering producing credit subsidy re-estimates for the HECM program. As discussed later in this report, HUD has generated credit subsidy estimates for individual HECM cohorts for several years. However, HUD officials told us that, until recently, they did not have the data necessary to produce subsidy re-estimates for HECMs. Specifically, the officials noted that for HECM cohorts prior to 2009, assets for HECMs were aggregated with assets from other programs in the GI/SRI Fund and not accounted for separately. HUD officials said that they are now accounting for HECM assets separately, which will enable them to produce re-estimates for the HECM program. Re-estimates can highlight cohorts that are not expected to meet original budget estimates. This information could help inform future actions to manage HUD’s insurance risk and control program costs. HUD’s most recent estimates of two important financial indicators for the HECM program—the credit subsidy rate and the LLG—suggest weaker financial performance than previously estimated, largely due to more pessimistic house price assumptions. All other things being equal, lower house price appreciation can increase HUD’s insurance losses because it makes it less likely that the value of the home will cover the loan balance. Analyses by HUD have found that the financial performance of the HECM program is sensitive to long-term trends in house prices. HUD officials told us that HECM program performance is less sensitive to short-term price declines because borrowers with HECMs, unlike those with traditional forward mortgages, do not have an incentive to terminate (or default on) their loans when prices fall. HUD has made credit subsidy estimates for HECM cohorts from 2006 forward. Because the HECM program was relatively small prior to 2006, HUD did not produce separate subsidy estimates for the HECM program but included HECMs in its estimates of subsidy costs for the GI/SRI Fund as a whole.
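As context for the estimates that follow, the stylized sketch below shows how a credit subsidy rate relates discounted cash flows to the volume of loans insured. The cash flows and discount rate are invented for illustration; this is not HUD’s cash flow model or OMB’s credit subsidy calculator. Only the closing cross-check uses figures cited in this report.

    # Stylized credit subsidy arithmetic (our illustration). The subsidy
    # rate is the present value of expected cash outflows minus inflows,
    # expressed as a share of the amount of loans insured. A positive
    # rate requires an appropriation; a negative rate does not.
    def subsidy_rate(net_outflows, discount_rate, volume):
        npv = sum(cf / (1 + discount_rate) ** (t + 1)
                  for t, cf in enumerate(net_outflows))
        return npv / volume

    # Invented yearly net outflows (negative values are net inflows).
    flows = [-600e6, -300e6, 200e6, 500e6, 900e6, 400e6]
    print(round(subsidy_rate(flows, 0.04, 30e9) * 100, 2))  # e.g., 2.69

    # Cross-check of the fiscal year 2010 figures cited below: a 2.66
    # percent rate on about $30 billion of insured HECMs implies the
    # $798 million subsidy cost requested in the President's budget.
    print(0.0266 * 30e9)  # 798000000.0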
For the 2006 through 2009 HECM cohorts, HUD estimated negative subsidy rates ranging from -2.82 percent in 2007 to -1.37 percent in 2009 (see fig. 10). However, for the 2010 cohort, HUD estimated a positive subsidy rate of 2.66 percent. Because HUD is expecting to insure about $30 billion in HECMs in 2010, this rate corresponds to a subsidy cost of $798 million. As required by the Federal Credit Reform Act, the President’s budget for fiscal year 2010 includes a request for this amount. HUD officials told us that the positive subsidy rate for fiscal year 2010 largely was due to incorporating more conservative assumptions about long-term house price trends than had been used for prior cohorts. For budgeting purposes, the Administration decided to use more modest appreciation rates than the private sector forecasts HUD typically uses. Specifically, the house price appreciation rates used were 0.5 percentage points greater than the forecasted inflation rates. HUD officials told us that if they had used IHS Global Insight projections to develop the fiscal year 2010 credit subsidy estimate, there would be no need for an appropriation because the credit subsidy rate would be negative. HUD also has estimated the LLG for the HECM program since 2006. As shown in figure 11, HUD’s original LLG estimates grew substantially from 2007 to 2008, increasing from $326 million to $1.52 billion. According to FHA’s financial statements for fiscal years 2007 and 2008, the increase was primarily due to the lower house price appreciation projections used in the 2008 analysis. The report noted that lower appreciation rates result in lower recoveries on mortgages assigned to HUD, which in turn increases HUD’s liability. In September 2008, HUD analyzed the sensitivity of the 2008 LLG estimate for the HECM program as a whole to different assumptions, including alternative house price scenarios. HUD examined the impact of house price appreciation that was 10 percent higher and 10 percent lower than the baseline assumptions from IHS Global Insight for fiscal years 2009 through 2013. (For example, for a baseline assumption of 4 percent house price appreciation, the lower and higher scenarios would have been 3.6 percent and 4.4 percent, respectively.) HUD estimated that the more pessimistic assumption increased the LLG from $1.52 billion to $1.78 billion, while the more optimistic assumption reduced the LLG to $1.27 billion. When estimating future costs for all HECMs, HUD assumes that the property value at loan origination is equal to the maximum claim amount. For loans in which the property value is more than the HECM loan limit, this approach results in a conservative assumption about the amount of home equity available at the end of the loan to cover the loan balance. In these cases, the actual home value at the end of the loan is likely to be more than what HUD assumes and therefore more likely to exceed the loan balance. According to HUD, because of this conservative approach to estimating costs, the HECM program does not rely on loans with property values that exceed the maximum claim amount to operate on a break-even basis over the long run. Higher loan limits enacted under HERA and ARRA may make HUD’s approach less conservative by reducing the proportion of loans for which the property value exceeds the maximum claim amount.
This scenario is especially likely in locations that previously had relatively low local loan limits (reflecting their lower home values) but are now subject to the higher national limit. To illustrate, consider a 65-year-old HECM borrower with a $400,000 home whose loan limit prior to HERA was $250,000 (see fig. 12). In this scenario, the maximum claim amount would be the same as the loan limit because the maximum claim amount is defined as the lesser of the loan limit or the home value. However, if the loan limit for the same borrower is increased to the HERA-authorized level of $417,000, the maximum claim amount is the same as the home value ($400,000). As figure 12 shows, when a borrower’s maximum claim amount is capped by the loan limit, the maximum claim amount can be substantially lower than the value of the home. All other things being equal, the potential for losses is low in this scenario because the projected loan balance is likely to remain less than the projected home value after the lender assigns the loan to HUD. In contrast, when the maximum claim amount is capped by the home’s value, the difference between the projected loan balance and the projected home value is smaller. The potential for losses is higher with such a loan because the projected loan balance is more likely to exceed the projected home value. As also shown in figure 12, when this effect is combined with declining home prices, the potential for losses increases. Studies by HUD and others have noted that HECM loans for which the home value exceeds the maximum claim amount have a positive impact on the program’s financial performance but also have noted the potential negative impact of raising the loan limit. When the HECM program started in 1990, HUD developed a statistical model to estimate borrower payments and insurance risk. HUD’s technical explanation of the model acknowledges that future expected losses are smaller for HECMs with a maximum claim amount capped by the loan limit than for HECMs with a maximum claim amount equal to the home value. Similarly, actuarial reviews of the HECM program—conducted in 1995, 2000, and 2003—concluded that the negative net liability of the HECM program resulted from loans on homes valued at more than the HECM loan limit cross-subsidizing loans on homes valued at less than the limit. The 2003 actuarial review also examined how the financial condition of the HECM program would have been affected had a higher, national loan limit been in place when existing HECMs were originated. The analysis found that the higher loan limit would have moved the expected net liability of the HECM program from -$54.0 million to -$11.4 million, reducing the program’s expected net gain. This finding is consistent with a Congressional Budget Office (CBO) analysis of a 2007 legislative proposal to increase the HECM loan limit to $417,000 nationwide. CBO concluded that the increase would move HUD’s credit subsidy rate for the 2008 cohort of loans from -1.9 percent to -1.35 percent, reducing the estimated savings. The percentage of HECMs with maximum claim amounts capped by the loan limit, which has ranged from 24 percent to 47 percent since the inception of the program, has declined in recent years, dropping from 42 percent in fiscal year 2006 to 25 percent in fiscal year 2008 (see fig. 13). Furthermore, HUD data show that this proportion dropped to 18 percent for the first 4 months of fiscal year 2009, likely due in part to the higher loan limit.
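The capping rule just described is easy to state in code. The short sketch below (ours, not HUD’s model) reproduces the figure 12 scenario for the 65-year-old borrower with a $400,000 home:

    # Maximum claim amount: the lesser of the home value and the loan limit.
    def maximum_claim_amount(home_value, loan_limit):
        return min(home_value, loan_limit)

    home_value = 400_000
    for limit in (250_000, 417_000):  # pre-HERA local limit vs. HERA limit
        mca = maximum_claim_amount(home_value, limit)
        cushion = home_value - mca    # home equity above the claim amount
        print(limit, mca, cushion)
    # 250000 250000 150000 -> capped by the limit: large equity cushion
    # 417000 400000 0      -> capped by the home value: no cushion, so the
    #                         loan balance is more likely to exceed the
    #                         home value by the end of the loan

The larger the cushion, the less likely an insurance loss, which is why a falling share of limit-capped loans raises HUD’s risk.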
HUD officials acknowledged that a reduction in the proportion of loans with maximum claim amounts capped by the loan limit could have a negative effect on the program’s financial performance. However, they also indicated that their conservative approach to estimating program costs mitigates the associated risks. We provided a draft of this report to HUD for its review and comment. In comments provided to us in an e-mail, HUD concurred with our report and provided a technical comment, which we incorporated into the report. We are sending copies of this report to interested congressional parties, the Secretary of the Department of Housing and Urban Development, and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. GAO contact information and staff acknowledgments are listed in appendix IV. Our objectives were to examine (1) how the Housing and Economic Recovery Act of 2008 (HERA) changes to the Home Equity Conversion Mortgage (HECM) program and other factors have affected HECM lenders’ planned participation in the reverse mortgage market, (2) the extent to which HERA’s changes to HECM origination fees and loan limits will affect costs to borrowers and the loan amounts available to them, and (3) the Department of Housing and Urban Development’s (HUD) actions to evaluate the financial performance of the HECM program, including the potential impact of loan limit and house price changes. To address these objectives, we reviewed laws, regulations, and guidance relevant to the HECM program, including provisions in HERA, the American Recovery and Reinvestment Act of 2009 (ARRA), and HUD handbooks and mortgagee letters. We also spoke with agency, industry, and nonprofit officials, including those at HUD, Ginnie Mae, Fannie Mae, the National Reverse Mortgage Lenders Association (NRMLA), the Mortgage Bankers Association (MBA), and AARP. To determine how HERA’s provisions have affected lenders’ planned participation in the reverse mortgage market, we spoke with industry and nonprofit officials—including those at Ginnie Mae, Fannie Mae, AARP, NRMLA, and MBA—to understand how recent legislative and economic changes were affecting the industry. To more specifically identify the influence of legislative and economic factors on HECM lenders, we conducted a Web-based survey of a random probability sample of the 2,779 lenders that originated HECMs on a retail basis in fiscal year 2008. We used HUD records of HECM-certified lenders making at least one such loan in fiscal year 2008, and supplemented HUD’s loan company officer contact information with names and e-mail addresses of officers at those lenders in our sample who also had memberships in NRMLA. For the remaining sampled lenders for which we lacked contact information, we made telephone calls to identify the most appropriate recipient for our survey invitation. We drew a stratified sample, allocating our selections across three groups defined by the number of HECMs made in fiscal year 2008 and sampling from the groups with larger lenders at a higher rate than from the groups with smaller lenders (see table 2). We sampled all 51 members of the stratum with the largest lenders (300 or more loans).
We sampled so few lenders (30) from the stratum with the smallest lenders (1 to 9 loans), and received so few usable responses (8), that we considered this a nongeneralizable sample and excluded it from our quantitative analysis. In addition, lenders in the smallest-lender stratum account for less than 5 percent of all loans and thus would not influence overall estimates very much. Responses from the smallest-lender stratum were used only as case study examples in our analysis. To help develop our questionnaire, we consulted with an expert at NRMLA. We pretested our draft questionnaire with officials at three HECM lenders in our population and made revisions to it before finalization. Legal and survey research specialists in GAO also reviewed the questionnaire. Before the survey, in early March 2009, NRMLA sent letters to those lenders in our sample who were also members of that organization, endorsing our survey and encouraging response. In March 2009, we sent e-mails with links to our Web questionnaire and unique login information to each member of our sample with a valid e-mail address. For sampled companies for which we were unable to obtain working e-mail addresses, we mailed paper versions of the questionnaires. Nonresponding lenders were sent additional e-mails or copies of questionnaires from March through May. We also made telephone calls in April to nonrespondents encouraging them to respond. Our survey closed in early May 2009. We received a total of 180 usable responses, for an overall response rate of 57 percent. The “weighted” response rate for the survey, which takes into account the relative numbers of lenders in the population that sampled lenders in each of our three size strata had to represent, was 53 percent. The most common reason for ineligibility among our sample firms was closure, merger, or other discontinuation of business in the reverse mortgage industry. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. Unless otherwise noted, our estimates have margins of error of plus or minus 10 percentage points or less at the 95 percent confidence level. In addition to sampling error, the practical difficulties of conducting any survey may introduce other errors:

1. Nonresponse—bias from failing to get reports from lenders whose answers would have differed significantly from those who did participate.
2. Coverage—failure to include all eligible HECM lenders in the list from which we sampled, or inclusion of ineligible firms.
3. Measurement—errors in response.
4. Data processing—errors in entering or processing the data.

We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such errors. For example, our pretesting and expert reviews of the questionnaire resulted in question changes that reduced the possibility of measurement error, and all data processing and analysis programming was verified by independent analysts.
In addition, we followed up on some unlikely answers by recontacting sampled lenders or conducting follow-up research on them to edit erroneous answers and declare some firms ineligible for our survey, thereby reducing measurement and coverage error. To assess the risk of nonresponse bias, we compared the response rates of lenders across categories of two characteristics that might be related to our key variables—the effect of HERA changes and other factors on the likelihood of continued HECM lending in the future. The two characteristics known for both respondents and nonrespondents were the number of years the lender had been offering HECMs and the state in which the lender’s home office is located, from which we could develop a measure of the size of loan activity in each state by summing the number of loans made by lenders whose home offices were in a given state. We found no statistically significant association between these two characteristics and the likelihood of response. Although this does not eliminate the possibility of nonresponse bias, we found no evidence of bias based on our analysis of these available data. To determine the effect of the HERA provisions on HECM borrowers, we examined changes in the up-front mortgage insurance premium, origination fee, and loan funds available to borrowers. The up-front mortgage insurance premium is 2 percent of the maximum claim amount. HERA did not change this rate, but because of HERA’s change to the HECM loan limit, some borrowers may be eligible for larger loans and therefore have higher maximum claim amounts. Since the premium is calculated based on the maximum claim amount, these borrowers will pay a higher up-front mortgage insurance premium than they would have prior to HERA. Before HERA, the origination fee was calculated as 2 percent of the maximum claim amount, with a minimum fee of $2,000. HERA changed the calculation of the origination fee to 2 percent of the first $200,000 of the maximum claim amount plus 1 percent of the maximum claim amount over $200,000, with a maximum fee of $6,000. In implementing HERA, HUD also increased the minimum origination fee by $500, to $2,500. We used two different approaches to assess the impact of the HERA changes. First, we performed a mathematical analysis showing the difference between the up-front costs before and after HERA. Specifically, we derived equations for calculating pre-HERA and post-HERA up-front costs for borrowers with maximum claim amounts in different ranges ($0 to $100,000; $100,000 to $125,000; $125,000 to $200,000; $200,000 to $400,000; and $400,000 to $625,500). For each range, we subtracted the pre-HERA equation from the post-HERA equation to derive an equation for calculating the change in up-front costs due to the HERA provisions. We then used these equations to calculate the potential change in up-front costs in dollar terms. We did this analysis separately for cases in which the maximum claim amount would increase under HERA and cases in which the maximum claim amount would remain the same. Appendix III shows the details of this analysis. Second, we applied the HERA changes to HUD loan-level data for HECMs that borrowers obtained in calendar year 2007. We compared the results to the actual up-front costs and loan funds available for these borrowers. To perform this analysis, we obtained data from HUD’s Single-family Data Warehouse.
We assessed the reliability of these data by (1) reviewing existing information about the data and the system that produced them, (2) interviewing HUD officials knowledgeable about the data, and (3) performing electronic testing of required data elements. We determined that the data we used were sufficiently reliable for the purposes of this report. As shown in table 3, the universe of 2007 HECMs used in our analysis included 101,480 loans. We applied the $417,000 national loan limit and HERA’s changes to the origination fee calculation to the 2007 HECMs. For each borrower, we calculated the new maximum claim amount, origination fee, up-front mortgage insurance premium, and loan funds available under the HERA rules and compared our results to the actual 2007 values. We summarized our results by calculating the average changes in these amounts. To illustrate the potential effect of modest margin rate increases stemming from HERA’s change to the origination fee calculation, we applied a 0.25 percentage point increase to the margin rate for the 2007 HECMs adjusted to reflect the HERA provisions. We determined the resulting changes in the loan funds available to borrowers using HUD’s table of principal limit factors. To provide perspective on the HERA-related margin rate changes, we compared margin rates from a 3-month period 1 year prior to the implementation of HERA (November 2007 through January 2008) to margin rates from the 3-month period after the implementation of HERA (November 2008 through January 2009). To examine HUD’s actions to evaluate the financial performance of the HECM program, we reviewed HUD’s budget estimates for the HECM program for fiscal years 2005 through 2010. We also compiled and analyzed financial performance information about the HECM program, including the liability for loan guarantees (LLG) and credit subsidy estimates. For example, we examined the Federal Housing Administration’s (FHA) Annual Management Reports (2005, 2006, 2007, and 2008), which include FHA’s annual financial statements; HUD Office of the Inspector General (OIG) audits of FHA’s financial statements (2005, 2006, 2007, and 2008); actuarial reviews of the HECM program (1995, 2000, and 2003); and Congressional Budget Office cost estimates relevant to the HECM program. We also reviewed other analyses HUD has conducted of program costs, such as the sensitivity of estimated cash flows to alternative economic assumptions. We interviewed FHA officials about their budget estimates and program analyses. Additionally, we reviewed information about HUD’s HECM cash flow model, including a technical explanation of the model published in 1990 and recent changes to the model. We also reviewed historical house price appreciation rates from the Federal Housing Finance Agency and projected house price appreciation rates from IHS Global Insight. To examine the percentage of HECMs with maximum claim amounts capped by the loan limit, we analyzed loan-level data on HECMs from HUD’s Single-family Data Warehouse. As noted earlier, we determined that the data we used were sufficiently reliable for this analysis. In addition, we reviewed federal agency standards for managing credit programs, such as those contained in the Federal Credit Reform Act (FCRA), related Office of Management and Budget requirements and instructions, and Federal Accounting Standards Advisory Board guidance. Finally, we interviewed HUD OIG officials, industry participants, and mortgage market analysts.
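A simplified version of the margin-rate exercise described above is sketched below. The structure follows the methodology (the expected rate equals the 10-year Treasury rate plus the lender margin, and a principal limit factor converts the maximum claim amount into funds available), but the factor values are hypothetical placeholders, not HUD’s published principal limit factors:

    # Simplified margin-rate illustration (ours). HUD's actual table of
    # principal limit factors is keyed to borrower age and expected rate;
    # the two factor values below are invented for illustration only.
    HYPOTHETICAL_PLF = {5.50: 0.640, 5.75: 0.622}  # expected rate -> factor

    def funds_available(max_claim, treasury_10yr, margin):
        # Expected rate: 10-year Treasury rate plus the fixed lender margin.
        expected_rate = round(treasury_10yr + margin, 2)
        return max_claim * HYPOTHETICAL_PLF[expected_rate]

    mca = 250_000
    base = funds_available(mca, 4.00, 1.50)    # margin of 1.50 percent
    bumped = funds_available(mca, 4.00, 1.75)  # margin up 0.25 points
    print((bumped - base) / base)              # about -0.03, i.e., roughly
                                               # the 3 percent decrease in
                                               # funds noted in this report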
We conducted this performance audit from September 2008 through July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The American Recovery and Reinvestment Act (ARRA) raised the national loan limit for Home Equity Conversion Mortgages (HECM) to $625,500 through December 31, 2009. In our survey of HECM lenders, we asked lenders about the influence the increased loan limit would have on their likelihood of offering HECMs and non-HECM reverse mortgages (see fig. 14). Additionally, we asked to what extent they expected consumer demand for HECMs to increase as a result of the ARRA loan limit increase (see fig. 15). See figures 14 and 15 for survey questions and estimates based on our survey results. Home Equity Conversion Mortgage (HECM) borrowers may experience changes in up-front costs due to the Housing and Economic Recovery Act of 2008’s (HERA) change to the calculation of the origination fee, the loan limit, or both. Generally, borrowers with house values greater than the prior HECM loan limit will be able to borrow more under HERA’s higher loan limit, while borrowers with a wide range of house values may be affected by the changes in origination fees. There are two up-front costs. The first—the up-front mortgage insurance premium—is 2 percent of the maximum claim amount. The second—the origination fee—was calculated before HERA as 2 percent of the maximum claim amount, with a minimum fee of $2,000. HERA changed the calculation of the origination fee to 2 percent of the first $200,000 of the maximum claim amount plus 1 percent of the maximum claim amount over $200,000, with a maximum fee of $6,000. In implementing HERA, HUD also increased the minimum origination fee by $500, to $2,500. To determine how borrowers would be affected by these changes, we developed mathematical equations for calculating the up-front costs under both the HERA and pre-HERA rules. We subtracted the equation for the pre-HERA rules from the equation for the HERA rules to derive an equation for the change in up-front costs resulting from HERA. A positive value indicates that a borrower would pay more under HERA, and a negative value indicates that a borrower would pay less. Figures 16 and 17 illustrate how these changes affect different categories of borrowers. Figure 16 shows the results for borrowers who have home values lower than the previous loan limit. For these borrowers, the maximum claim amount is not affected by HERA’s change in the loan limit; therefore, changes in up-front costs derive only from changes in the origination fee. Figure 17 shows the results of the calculation for borrowers who were affected by HERA’s increase in the loan limit. These borrowers would pay up-front mortgage insurance premiums and origination fees based on a higher maximum claim amount. However, depending on the maximum claim amount, the origination fee may have decreased rather than increased. The net change in up-front costs for this group therefore cannot be determined without knowing the old and new maximum claim amounts.
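To supplement the equations described in this appendix, the following sketch (ours, not GAO’s actual analysis code) implements the pre- and post-HERA up-front cost rules stated above and applies them to a borrower like those in table 1. The $300,000 home value and $290,000 prior local limit are illustrative, drawn from the examples in the body of this report:

    # Up-front cost rules as stated in this appendix (our implementation).
    def origination_fee_pre_hera(mca):
        # Pre-HERA: 2 percent of the maximum claim amount, $2,000 minimum.
        return max(0.02 * mca, 2_000)

    def origination_fee_post_hera(mca):
        # HERA: 2 percent of the first $200,000 plus 1 percent of the
        # remainder, with a $2,500 minimum (set by HUD) and a $6,000 cap.
        fee = 0.02 * min(mca, 200_000) + 0.01 * max(mca - 200_000, 0)
        return min(max(fee, 2_500), 6_000)

    def up_front_costs(mca, fee_rule):
        # Origination fee plus the 2 percent up-front insurance premium.
        return fee_rule(mca) + 0.02 * mca

    home_value = 300_000
    pre = up_front_costs(min(home_value, 290_000), origination_fee_pre_hera)
    post = up_front_costs(min(home_value, 417_000), origination_fee_post_hera)
    print(pre, post, post - pre)  # 11600.0 11000.0 -600.0

For this borrower, the higher premium on a larger maximum claim amount is more than offset by the lower origination fee, so total up-front costs fall by $600; with a different prior limit, the same arithmetic can produce an increase, which is why the net change cannot be determined without both maximum claim amounts.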
In addition to the individual named above, Steve Westley, Assistant Director; Anne Akin, Kathleen Boggs, Joanna Chan, Rudy Chatlos, Karen Jarzynka, John McGrail, Marc Molino, Mark Ramage, Carl Ramirez, Barbara Roesmann, and Jennifer Schwartz made key contributions to this report.
Reverse mortgages--a type of loan against home equity available to seniors--are growing in popularity. A large majority of reverse mortgages are insured by the Department of Housing and Urban Development (HUD) under its Home Equity Conversion Mortgage (HECM) program. The Housing and Economic Recovery Act of 2008 (HERA) made several modifications to the HECM program, including changes in how origination fees are calculated and an increase in the loan limit. The Act directed GAO to examine (1) how these changes have affected lenders' plans to offer reverse mortgages, (2) how the changes will affect borrowers, and (3) actions HUD has taken to evaluate the financial performance of the HECM program. To address these objectives, GAO surveyed a representative sample of HECM lenders, analyzed loan-level HECM data, and reviewed HUD estimates and analyses of HECM program costs. On the basis of a survey of HECM lenders, GAO estimates that, taken together, HERA's changes to the HECM loan limit and origination fee calculation have had a positive to neutral influence on most lenders' plans to offer HECMs. Other factors, such as economic and secondary market conditions, have had a mixed influence. Although economic conditions have had a positive influence on about half of lenders' plans to offer HECMs, secondary market conditions have negatively influenced about one-third of lenders. GAO also estimates that the HERA changes have had little to no influence on most lenders' plans to offer non-HECM reverse mortgages. HERA's provisions will affect borrowers in varying ways, depending on home value and other factors. The changes to HECM origination fees and loan limits are likely to change the up-front costs and the loan funds available for most new borrowers. GAO's analysis of data on HECM borrowers from 2007 shows that if the HERA changes had been in place at the time, most would have paid less or the same amount in up-front costs, and most would have had more or the same amount of loan funds available. For example, about 46 percent of borrowers would have seen a decrease in up-front costs and an increase in available loan funds. However, 17 percent of borrowers would have seen an increase in up-front costs and a decrease in available loan funds. HUD has enhanced its analysis of HECM program costs, but less favorable house price trends and loan limit increases have increased HUD's risk of losses. HUD has updated its cash flow model for the program and plans to conduct annual actuarial reviews. Although the program historically has not required a subsidy, HUD has estimated that HECMs made in 2010 will require a subsidy of $798 million, largely due to more pessimistic assumptions about long-run home prices. In addition, the higher loan limit enacted by HERA may increase the potential for losses. To calculate the amount of funds available to a borrower, lenders start with a limiting factor: the home value or, if the home value is greater than the HECM loan limit, the loan limit. For loans that are limited by the home value, the loan amount and the home value are closer together at the point of origination, which makes it more likely that the loan balance could exceed the home value at the end of the loan. In contrast, for loans that are limited by the HECM loan limit, there is initially a greater difference between the home value and the loan amount, making it less likely that the loan balance will exceed the home value at the end of the loan.
The increase in the HECM loan limit may increase HUD's risk of losses by reducing the proportion of loans that are limited by the HECM loan limit.
DOD defines a UAS as a system whose components include the necessary equipment, networks, and personnel to control an unmanned aircraft—that is, an aircraft that does not carry a human operator and is capable of flight under remote control or autonomous programming. Battlefield commanders have experienced a high level of mission success in ongoing operations with capabilities provided by UAS. Beyond a traditional intelligence, surveillance, and reconnaissance role, UAS have been outfitted with missiles to strike targets, with equipment to designate targets for manned aircraft by laser, and with sensors to locate the positions of improvised explosive devices and fleeing insurgents, among other tasks. DOD has acquired UAS through formal acquisition programs, and in certain cases, the military services have purchased common UAS components. For example, the Army and the Marine Corps are purchasing the Shadow UAS, and the Air Force and the Navy are acquiring a similar unmanned aircraft for the Global Hawk and the Broad Area Maritime Surveillance UAS programs. DOD has also fielded other UAS in order to meet urgent warfighter requests and for technology demonstrations. In 2008, U.S. Joint Forces Command’s Joint UAS Center of Excellence established a system to categorize UAS in groups that are based on attributes of vehicle airspeed, weight, and operating altitude. For example, group 1 UAS weigh 20 pounds or less, whereas group 5 UAS weigh more than 1,320 pounds. Table 1 provides the military services’ inventories of groups 3, 4, and 5 unmanned aircraft as of October 2009. Several major systems—including the Air Force Predator, Reaper, and Global Hawk; the Army and Marine Corps Shadow; and the Army Extended Range Multi-Purpose (ERMP) UAS—have been deployed and used successfully in combat. Because of the resulting demand for these assets, several of the military services’ UAS programs have experienced significant growth. For example, DOD’s fiscal year 2010 budget request sought funds to continue to increase the Air Force’s Predator and Reaper UAS programs to 50 combat air patrols by fiscal year 2011—an increase of nearly 300 percent since fiscal year 2007. DOD’s fiscal year 2007 through fiscal year 2010 budget requests for all of DOD’s UAS programs reflect an increase in the amount of funding requested by DOD for UAS investments to support warfighting needs, as shown in table 2. Beyond development and acquisition costs, DOD’s UAS programs have additional funding requirements, such as the costs to operate and sustain the weapon system, to provide personnel, and to construct facilities and other infrastructure. DOD guidance encourages acquisition personnel to consider factors including personnel, facilities, supporting infrastructure, and policy costs when fielding new capabilities. However, both DOD’s prior work and ours have found that decision makers have had limited visibility over total weapon system costs because estimates have not reflected a full accounting of life cycle costs. In a November 2009 report, for example, DOD concluded that its acquisition processes pay too little attention to weapon system support costs, even though the department spends more than $132 billion each year to sustain its weapon systems. The report also concluded that the lack of adequate visibility of operating and support costs has been a long-standing barrier to effectively assessing, managing, and validating the benefits or shortcomings of support strategies.
In our prior work, we have found that DOD often makes inaccurate funding commitments to weapon system programs based on unrealistic cost estimates. The foundation of an accurate funding commitment should be a realistic cost estimate that allows decision makers to compare the relative value of one program to another and to make adjustments accordingly. We reported that DOD’s unrealistic cost estimates were largely the result of a lack of knowledge, failure to adequately account for risk and uncertainty, and overly optimistic assumptions about the time and resources needed to develop weapon systems. By repeatedly relying on unrealistically low cost estimates, DOD has initiated more weapon system programs than its budget can afford. We have also conducted an extensive body of work on DOD’s efforts to ensure the availability of defense critical infrastructure, which includes space, intelligence, and global communications assets, reporting on DOD’s progress in addressing the evolving management framework for the Defense Critical Infrastructure Program, coordination among program stakeholders, implementation of key program elements, the availability of public works infrastructure, and reliability issues in DOD’s lists of critical assets, among other issues. For example, we reported in 2008 on the challenges that the Air Force faced in addressing continuity of operations and physical security at Creech Air Force Base, a location where nearly half of the Air Force’s UAS operations were being performed at the time. While many of DOD’s UAS operations currently take place outside of the United States, primarily in Iraq and Afghanistan, the military services require access to the national airspace system for UAS training, among other purposes, as well as personnel and equipment to support training exercises. However, DOD has experienced several challenges in gaining access to the national airspace system, and operational commitments have limited the availability of UAS personnel and equipment to support training. Because DOD’s UAS do not meet several federally mandated requirements for routine access to the national airspace system, most types of UAS may not perform routine flight activities, such as taking off and landing, outside DOD-managed airspace. For example, UAS do not have personnel or a suitable alternative technology on board the aircraft to detect, sense, and avoid collisions with other aircraft. The Federal Aviation Administration approves, on a case-by-case basis, applications from DOD (and other government agencies) for authority to operate UAS in the national airspace system outside of the airspace restricted for DOD’s use. To provide military personnel with information on UAS, DOD components, which include the military services and other defense organizations, have produced several publications, including joint and service doctrinal publications that describe processes to plan for and integrate UAS into combat operations. In addition, DOD components have produced concepts of operations for UAS, as well as multiservice and platform-specific tactics, techniques, and procedures manuals. These publications are intended to provide planners at operational and tactical levels of command, such as joint task forces and divisions, with an understanding of the processes to incorporate UAS into their intelligence collection plans and into combat operations.
Tactical ground units requesting support from UAS, which can range from small special operations units to large infantry brigades engaged in ground combat operations, may use these documents to understand UAS capabilities and how best to incorporate them into preplanned and dynamic missions. UAS operators use these documents to establish best practices, standard operating procedures for integrating UAS into joint operations, and processes for interacting with other air and ground forces on the battlefield. Periodically, DOD components update these publications to include new knowledge on military practices and capabilities. Generally, these updates are accomplished through comprehensive service- or departmentwide reviews conducted by subject matter experts. DOD has policies that encourage its components to plan for factors, including personnel, facilities, and communications infrastructure, that are needed to support weapon system programs. Extensive planning for these factors provides decision makers with complete information on total program costs and assurance that weapon system programs can be fully supported in the long term. During our review, however, we identified areas where, despite the growth in UAS inventories, comprehensive plans for personnel, facilities, and some communications infrastructure have not been fully developed to support Air Force and Army UAS programs. DOD guidance recommends that acquisition personnel determine a weapon system program’s life cycle costs by planning for the manpower, facilities, and other supporting infrastructure needed to support the system, among other factors, and that they fully fund the program and the needed manpower in budget requests. Decision makers use this information to determine whether a new program is affordable and whether the program’s projected funding and manpower requirements are achievable. DOD components are expected to conduct continuing reviews of their strategies to sustain weapon system programs, to identify deficiencies in these strategies, and to make necessary adjustments in order to meet performance requirements. In addition, the Office of Management and Budget’s Capital Programming Guide indicates that part of conducting cost analyses for capital assets, such as weapon systems, is refining cost estimates as programs mature and as requirements change, and incorporating risk analyses into these estimates. We have reported that accurate cost estimates are necessary for government acquisition programs for many reasons, for example, to evaluate resource requirements, to support decisions about funding one program over another, and to develop annual budget requests. Moreover, having a realistic estimate of projected costs makes for effective resource allocation, and it increases the probability of a program’s success. The Air Force and the Army train personnel to perform functions for UAS operations, such as operating the aircraft and performing maintenance. Because of the rapid growth of UAS programs, the number of personnel required to perform these functions has substantially increased, and the services have taken steps to train additional personnel. However, in service-level UAS vision statements, the Air Force and the Army have identified limitations in their approaches for providing personnel for UAS operations; they have not yet fully developed strategies that specify the actions and resources required to supply the personnel needed to meet current and projected future UAS force levels.
The Air Force and the Army train personnel to perform functions for UAS operations, such as operating the aircraft and performing maintenance. Because of the rapid growth of UAS programs, the number of personnel required to perform these functions has increased substantially, and the services have taken steps to train additional personnel. In service-level UAS vision statements, however, the Air Force and the Army have identified limitations in their approaches to providing personnel for UAS operations, and they have not yet fully developed strategies that specify the actions and resources required to supply the personnel needed to meet current and projected UAS force levels. The Air Force, for example, has identified limitations in the approaches it has used to supply pilots to support the expanded Predator and Reaper UAS programs. Since the beginning of these programs, the Air Force has temporarily reassigned experienced pilots to operate UAS, and more recently, it began assigning pilots to operate UAS immediately after they completed undergraduate pilot training. Air Force officials stated that the latter initiative is intended to provide an additional 100 pilots per year on a temporary basis to support the expanding UAS programs. While the Air Force has relied on these approaches to meet the near-term increase in demand for UAS pilots, officials told us that it would be difficult to continue these practices in the long term without affecting the readiness of other Air Force weapon systems, since the pilots who are performing UAS operations on temporary assignments are also needed to operate manned aircraft and perform other duties. In an attempt to develop a long-term, sustainable career path for UAS pilots, the Air Force implemented a new initiative in 2009 to test the feasibility of establishing a unique training pipeline for UAS pilots. Students selected for this pipeline are chosen from the broader Air Force officer corps and are not graduates of pilot training. At the time of our work, the Air Force was analyzing the operational effectiveness of the personnel who graduated from the initial class of the test training pipeline to determine whether this approach could meet the long-term needs of the Air Force. In addition, officials told us that the Air Force would ultimately need to make some changes to this pipeline to capture lessons learned from the initial training classes and to help ensure that graduates were effectively fulfilling UAS mission requirements. For example, officials stated that the initial graduates of the training pipeline had not yet been provided with training on how to take off and land the Predator and that these functions were being performed by more experienced pilots. However, the Air Force had neither fully determined the total training these personnel would require to effectively operate the Predator and Reaper aircraft during UAS missions nor the costs that would be incurred to provide that training. Officials estimated that it would take at least 6 months after the second class of personnel graduated from the training pipeline to assess their effectiveness during combat missions and to determine what, if any, additional training these personnel require. Further, the Air Force has not finalized an approach to supply the personnel needed to perform maintenance functions on the growing UAS inventories and to meet servicewide goals to replace contractor maintenance positions with funded military ones. Currently, the Air Force relies on contractors to perform a considerable portion of UAS maintenance because it does not have military personnel trained and available to perform this function. For example, contractors perform approximately 75 percent of organization-level maintenance requirements for the Air Combat Command's Predator and Reaper UAS. According to the Air Force's UAS Flight Plan, replacing contractor maintenance personnel with military personnel would enable the Air Force to develop a robust training pipeline and to build a sustainable career field for UAS maintenance, while potentially reducing maintenance costs.
According to officials with whom we spoke, the Air Force's goal is to establish a training pipeline for military maintenance personnel by fiscal year 2012. However, the Air Force has not developed a servicewide plan that identifies the number of personnel to be trained, the specific training required, and the resources necessary to establish a dedicated UAS maintenance training pipeline. Officials estimated that it could take until fiscal year 2011 to determine these requirements and to test the feasibility of a new training pipeline. Our review also found that the Army's personnel authorizations are insufficient to fully support UAS operations. For example, according to officials, the Army has determined on at least three separate occasions since 2006 that Shadow UAS platoons did not have adequate personnel to support the near-term and projected pace of operations. Officials we spoke with from seven Army Shadow platoons in the United States and in Iraq told us that approved personnel levels for these platoons did not provide an adequate number of vehicle operators and maintenance soldiers to support continuous UAS operations. Army officials told us that currently approved personnel levels for the Shadow platoons were based on planning factors that assumed the Shadow would operate 12 hours per day, with the ability to extend operations to up to 16 hours for a limited period of time. However, personnel with these platoons told us that UAS in Iraq routinely operated 24 hours per day for extended periods of time. Army officials also told us that organizations such as combat brigades and divisions require additional personnel to provide UAS expertise to assist commanders in optimizing the integration of UAS into operations and safely employing these assets. Despite the shortfalls experienced during ongoing operations, the Army has yet to formally increase personnel authorizations to support UAS operations or to approve a servicewide plan to provide additional personnel. Officials told us that, on the basis of these and other operational experiences, the Army was developing initiatives to provide additional personnel to Army organizations to address personnel shortfalls and had included these initiatives in an October 2009 UAS vision statement developed by the Army's UAS Center of Excellence. These initiatives include increasing authorized personnel levels for vehicle operators and maintenance soldiers in Shadow UAS platoons as well as initiatives to assign UAS warrant officers and Shadow vehicle operators to brigade and division staffs. According to the Army's UAS vision statement, the initiatives to increase UAS personnel to meet current and projected requirements will be completed by 2014. However, at the time of our work, the Army had not developed a detailed action plan that identified the number of additional personnel that would support UAS operations and the steps it planned to take to synchronize the funding and manpower necessary to provide these personnel, such as reallocating existing manpower positions within combat brigades to increase the size of Shadow platoons.
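The gap between the 12-hour planning factor and round-the-clock operations can be made concrete with simple crew arithmetic. The sketch below is illustrative only; the shift length, console crew size, and weekly duty-hour cap are hypothetical assumptions, not Army manning standards.

    import math

    def operators_required(coverage_hours_per_day, shift_hours=8,
                           crew_per_shift=2, max_duty_hours_per_week=60):
        """Estimate the operators needed to sustain a daily coverage level.

        All default parameters are hypothetical assumptions.
        """
        # Positions that must be filled across each day's shifts.
        seats = math.ceil(coverage_hours_per_day / shift_hours) * crew_per_shift
        # People needed so that no one exceeds the weekly duty-hour cap.
        weekly_hours = coverage_hours_per_day * crew_per_shift * 7
        return max(seats, math.ceil(weekly_hours / max_duty_hours_per_week))

    print("Operators for 12 h/day:", operators_required(12))  # planning factor
    print("Operators for 24 h/day:", operators_required(24))  # observed tempo

Under these assumptions, sustaining 24-hour coverage takes half again as many operators as the 12-hour planning factor provides, before accounting for leave, illness, or maintenance crews, which is consistent with the shortfalls the platoons described.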
Although DOD has requested some funding in recent budget requests and expects to request additional funds in future years, the Air Force and the Army have not fully determined the specific number and type of facilities needed to support UAS training and operations. For example:

- The Air Force has neither determined the total number of facilities required to support its rapidly expanding Predator and Reaper programs nor finalized the criteria it will use to renovate existing facilities, because decisions regarding the size of UAS squadrons and the locations where these squadrons will be based have not been finalized. In some cases, the Air Force has constructed new facilities to support UAS operations. In other cases, the Air Force determined that it did not need to construct new facilities and is instead renovating existing facilities at UAS operating locations, such as maintenance hangars and buildings to use as unit operations facilities. However, until the Air Force determines where it plans to locate all of its new UAS units and finalizes the criteria that will guide the construction or renovation of facilities, it will be unable to develop realistic estimates of total UAS facility costs and long-term plans for their construction.

- The Army has begun to field the ERMP UAS and has determined that the Army installations where the system will be stationed require facilities uniquely configured to support training and operations. These facilities include a runway, a maintenance hangar, and a unit operations facility. However, the Army has not fully determined where it will base each of these systems, and it has not completed assessments at each location to evaluate existing facilities that could potentially meet ERMP requirements and to determine the number of new facilities the Army needs to construct. The lack of detailed facility planning has affected the Army's fielding schedule for the ERMP. Army officials told us that the fielding plan for this system has been adjusted to give priority to locations that do not require significant construction. According to Army officials, the Army had initially developed its ERMP fielding plan to synchronize with the estimated deployment dates of units supporting ongoing contingency operations.

- The Army has not definitively determined, for the Shadow UAS, the type and number of facilities needed to support training and aircraft storage. In 2008, the Army established a policy that directed its ground units to store Shadow aircraft in facilities with other ground unit tactical equipment and not in facilities uniquely configured for these aircraft. Ground units typically store equipment in facilities, such as motor pools, that are not always near training ranges. Previously, the Army had allowed some units to construct unique facilities for the Shadow near installation ranges to facilitate training. Army officials told us that storing equipment within the motor pool constrains training when ranges are not nearby. In these situations, units are required to transport the Shadow and its associated equipment from the motor pool to the training range, assemble and disassemble the aircraft, and transport the equipment back to the motor pool. Officials we spoke with at one Shadow platoon estimated that these steps required more than 3 hours to complete, thereby limiting the amount of flight training that could be performed in one day. This practice may also lead to more rapid degradation of aircraft components. Officials told us that the frequent assembling and disassembling of aircraft increases the wear and tear on components, which could increase maintenance costs.
While the Army maintains a process for installations to request a waiver from the policy that would allow for the construction of unique aircraft facilities, officials told us that the Army is reevaluating whether the Shadow requires unique facilities. Any decision to change the policy on Shadow facilities would ultimately increase total program costs. Because systematic analyses of facility needs for UAS programs have not been conducted, the total costs to provide facilities for Air Force and Army UAS programs are uncertain and have not been fully accounted for in the program cost estimates that decision makers use to evaluate the affordability of these programs. Further, although costs for facilities were not included in these estimates, our analysis of DOD's budget requests for fiscal year 2007 through fiscal year 2010 found that the Air Force and the Army have sought more than $300 million to construct facilities for UAS. Moreover, as these services finalize assessments of the number and type of facilities required for UAS operations and field additional systems, they will likely request additional funds for facilities. For example, Army officials told us that cost estimates for ERMP facilities would be unavailable until all of the ongoing requirements assessments were complete; however, our analysis of the Army's facility plans for the ERMP indicates that the Army could request more than $600 million to construct facilities for this program alone. In general, the military services operate UAS using two different operational concepts. Army and Marine Corps units primarily conduct UAS operations through a line-of-sight operational concept. In this concept, as depicted in figure 1, UAS are launched, operated, and landed near the ground units they support and are controlled by a ground station that is also located nearby. UAS can also transmit video and data to ground units or other aircraft within line of sight to support a range of missions, such as reconnaissance, surveillance, and target acquisition. Some level of risk is introduced in a line-of-sight operational concept if the command and control links to the aircraft are not secure. Air Force and Navy units use this line-of-sight concept but also use a beyond-the-line-of-sight operational concept that increases the risk of a disruption in operations. In the beyond-the-line-of-sight concept, the operation of the UAS relies on additional equipment and networks, some of which are located outside of the country where the UAS operations occur. According to Air Force officials, the use of a beyond-the-line-of-sight concept permits the service to conduct UAS operations with limited numbers of personnel and equipment deployed within an operational theater. As in the line-of-sight concept, the UAS are launched and landed by deployed ground control stations; during missions, however, the UAS are controlled by a pilot and sensor operator at a fixed ground control station at a remote site. A satellite relay site relays the signals between the UAS and the ground control station at the remote site (see fig. 2). The Air Force currently employs this operational concept for Predator, Reaper, and Global Hawk UAS missions that support contingency operations in Iraq and Afghanistan. For these missions, a ground control station located within the United States takes control of the aircraft.
A satellite relay site at a fixed location outside of the continental United States relays signals between the ground control station and the UAS so that they can communicate. A disruption at the satellite relay site, caused, for example, by a natural or man-made disaster, could affect the number of UAS that can be operated under this concept. DOD assesses risks and vulnerabilities to its critical assets and installations using the Defense Critical Infrastructure Program and other mission assurance programs and efforts, including those related to force protection, antiterrorism, continuity of operations, and installation preparedness. For example, Air Force doctrine dated June 2007 calls for the establishment of backup or redundant command and control systems for high-value systems so that operations can continue in the event of failure of or damage to the primary system. This doctrine further states that planning for redundant command and control systems should be formalized and exercised before military operations begin. However, the Air Force has not established an alternate, redundant satellite relay site with the capacity to control all UAS missions that are supporting ongoing combat operations. Because of the satellite relay's critical importance to ongoing contingency operations, the Air Force is taking steps to establish a redundant satellite relay site to support UAS missions in the event of disruptions at the current location. For example, officials told us that the Air Force is acquiring new communications equipment with increased capacity for the current site, which will allow equipment currently in use to become available for other locations. In addition, the Air Force is seeking funds to conduct surveys to identify potential locations for a redundant satellite relay site. However, officials stated that these efforts are not scheduled to be completed until fiscal year 2012, at the earliest. Air Force officials also told us that they would have options to pursue in the event of a near-term disruption at the satellite relay site, such as relocating assets from other Air Force operations. At the time of our work, however, the Air Force had not conducted a detailed analysis of these options to determine the extent to which they would provide for the continuity of UAS operations, nor had it established a specific milestone for formalizing a plan that could be implemented quickly in the event of a disruption.
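The operational significance of a single fixed relay site can be illustrated with a minimal serial-availability model of the beyond-the-line-of-sight control chain. The node names and availability figures below are hypothetical assumptions for illustration, not Air Force data.

    # Minimal availability model of the beyond-the-line-of-sight control chain:
    # ground control station -> satellite relay site -> satellite link -> UAS.
    # All availability figures are hypothetical assumptions.
    CHAIN = {
        "ground_control_station": 0.999,
        "satellite_relay_site":   0.98,   # single fixed site, no backup
        "satellite_link":         0.995,
    }

    def chain_availability(nodes, redundant=()):
        """Probability that every element of the serial chain is up.

        Nodes named in `redundant` are modeled as having an identical,
        independent backup: availability becomes 1 - (1 - a)**2.
        """
        total = 1.0
        for name, a in nodes.items():
            if name in redundant:
                a = 1 - (1 - a) ** 2
            total *= a
        return total

    base = chain_availability(CHAIN)
    backed_up = chain_availability(CHAIN, redundant={"satellite_relay_site"})
    print(f"Single relay site:    {base:.4f}")
    print(f"Redundant relay site: {backed_up:.4f}")

Even with generous assumptions, the least-available single point in a serial chain dominates the probability of disruption, which is the intuition behind the doctrine's call for redundant command and control systems.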
Several factors have contributed to a lag in Air Force and Army planning for the personnel, facilities, and some communications infrastructure that are integral to the operation of UAS. For example, although DOD's primary requirements definition process, termed the Joint Capabilities Integration and Development System, encourages acquisition personnel to develop cost estimates for new weapon system programs, including consideration of various support factors, the Air Force's current UAS programs were, for the most part, initially developed and fielded as technology demonstrations. According to the Air Force, these programs have been subsequently approved within the Joint Capabilities Integration and Development System, but comprehensive life cycle plans that fully account for the personnel, facilities, and communications infrastructure needed to effectively manage the systems have not yet been completed. Further, to meet near-term warfighter demands for these capabilities, several UAS programs have been expanded beyond planned force structure levels and, in some cases, have been fielded more rapidly than originally planned. Given these near-term changes in program requirements, the Air Force and the Army have taken measures to support UAS inventories, for example, in the case of the Air Force Predator and the Army Shadow programs. However, these measures have been taken without the benefit of rigorous planning for the specific numbers and types of personnel, facilities, and communications infrastructure needed to support these programs in the long term. Finally, while DOD components are expected to identify deficiencies in their strategies to support weapon system programs and to make necessary adjustments as requirements change, the Air Force and the Army have not completed the analyses or developed plans to account for new personnel and facility requirements, and the Air Force has not developed a plan to ensure the communications infrastructure needed to support its UAS programs. In the absence of detailed action plans that fully account for these factors, include milestones for tracking progress, and synchronize funding and personnel, DOD cannot have reasonable assurance that the services' approaches will fully support current and projected increases in UAS inventories. In addition, the lack of comprehensive plans limits decision makers' visibility into the total resources required to support UAS inventories and their ability to make informed choices about funding one program over another. Our prior work shows that in order to improve the management of federal activities, it is important that agencies develop comprehensive strategies to address challenges that threaten their ability to meet long-term goals. We identified several initiatives that DOD has commenced to address UAS training challenges, but DOD lacks a results-oriented strategy to ensure that compatible goals and outcomes are achieved among these initiatives. Many of DOD's UAS operations take place outside of U.S. airspace, but DOD requires access to the national airspace system for training, for operations such as homeland defense, and for the transit of unmanned aircraft to overseas deployment locations; these requirements have created airspace access challenges. For example, according to Army officials, a single Shadow UAS platoon requires more than 3,000 flight hours per year to fully train all of its aircraft operators. Because UAS do not meet various federally mandated requirements and therefore do not have routine access to the national airspace system, personnel must train in DOD-managed airspace and on training ranges located near their home stations. Competing for this finite airspace are other units located at home stations that also require access to DOD-managed airspace for their operations, such as manned aircraft training. This competition, among other factors, has affected the amount of training UAS personnel can conduct and their ability to prepare for deployments. Army officials with four of the seven Shadow platoons we met with told us that they were unable to fully train the number of personnel needed to perform continuous combat missions before the platoons deployed for overseas operations. As a result, UAS personnel had to complete additional training tasks upon arrival in Iraq and Afghanistan.
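The scale of the home-station training demand is straightforward to approximate. The sketch below combines the reported figure of 3,000 flight hours per Shadow platoon per year with hypothetical assumptions about the capacity of a shared, DOD-managed range complex; the platoon counts and capacity figures are illustrative, not data from any installation.

    # Reported by Army officials: hours needed to fully train one Shadow
    # platoon's aircraft operators each year.
    HOURS_PER_PLATOON_PER_YEAR = 3_000

    # Hypothetical installation: one shared range complex usable 16 hours a
    # day, 300 days a year, with manned aviation and other users competing
    # for the same airspace.
    USABLE_RANGE_HOURS = 16 * 300

    for platoons in (2, 4, 6):
        demand = platoons * HOURS_PER_PLATOON_PER_YEAR
        share = demand / USABLE_RANGE_HOURS
        print(f"{platoons} platoons: {demand:,} hours demanded, "
              f"{share:.0%} of {USABLE_RANGE_HOURS:,} available range hours")

Under these assumptions, even two platoons would demand more flight hours than a single shared range complex can supply, before any manned aircraft training is scheduled, which is consistent with the competition the platoons described.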
Plans to further increase UAS inventories at selected military installations will likely further increase the demand for airspace. For example, the Army plans to increase the number of Shadow UAS from about 70 systems fielded at the time of our review to a goal of more than 100 systems by fiscal year 2015. According to current plans, all active and reserve component combat brigades, Army Special Forces units, fires brigades, and battlefield surveillance brigades will be provided with Shadow systems. In some cases, relocations of UAS to different installations have increased UAS inventories at the new installations. For example, in 2009, the Army moved the 4th Infantry Division and two combat brigades from Fort Hood, Texas, to Fort Carson, Colorado, adding two Shadow systems at Fort Carson. Army officials acknowledged that increases in UAS inventories will further complicate the competition for limited quantities of DOD-managed airspace. As more advanced UAS are fielded in greater numbers, the military services will require increased access to the national airspace system. For example, the Army has fielded the ERMP UAS to its training battalion at Fort Huachuca, Arizona, and plans to provide one system, comprising 12 aircraft, to each of its active component combat aviation brigades. Because these aircraft are designed to operate at higher altitudes and possess capabilities beyond those of the Shadow UAS, officials told us that personnel responsible for operating the ERMP will require access to airspace that is not currently available to them for training. Similarly, the Air Force requires expanded access to the national airspace system to train pilots who operate its UAS and to move aircraft, such as the Global Hawk, from bases in the United States to operational theaters around the world. Because UAS do not possess the "sense and avoid" technology mandated by federal requirements for safe and efficient operations, the military services must, in many cases, provide an air- or ground-based observer to monitor the aircraft during its flight in the national airspace system. According to DOD and military service officials, this restriction negates many of the most effective advantages of UAS, such as aircraft endurance, and creates an impractical requirement given the numbers of aircraft and personnel needed to monitor the unmanned aircraft during training. Moreover, the practice may be an unsustainable solution for meeting the demands of the military services' growing inventories of UAS. DOD estimated in a December 2008 report that, based on planned UAS inventories in fiscal year 2013, the services will require more than 1 million flight hours to train UAS personnel within the United States.
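What an onboard "sense and avoid" capability must approximate is, at its core, a continuous closest-point-of-approach test against surrounding traffic. The two-dimensional sketch below illustrates the geometry with hypothetical positions, velocities, and an assumed alerting threshold; operational systems must additionally handle sensor uncertainty, three dimensions, maneuvering traffic, and right-of-way rules.

    import math

    def closest_point_of_approach(p_own, v_own, p_tfc, v_tfc):
        """Time to, and separation at, the closest point of approach (2-D).

        Positions in meters, velocities in meters per second; assumes both
        aircraft hold course and speed (a deliberate simplification).
        """
        rx, ry = p_tfc[0] - p_own[0], p_tfc[1] - p_own[1]  # relative position
        vx, vy = v_tfc[0] - v_own[0], v_tfc[1] - v_own[1]  # relative velocity
        v2 = vx * vx + vy * vy
        t = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
        d = math.hypot(rx + vx * t, ry + vy * t)
        return t, d

    # Hypothetical encounter: UAS eastbound at 60 m/s; traffic 5 km away,
    # northbound at 50 m/s, on a converging course.
    t, d = closest_point_of_approach((0, 0), (60, 0), (4000, -3000), (0, 50))
    ALERT_THRESHOLD_M = 1500  # assumed threshold, not a regulatory value
    status = "avoid" if d < ALERT_THRESHOLD_M else "monitor"
    print(f"Closest approach in {t:.0f} s at {d:.0f} m: {status}")

A ground-based observer performs this judgment by eye; an acceptable onboard replacement must perform it reliably against all surrounding traffic, which is why certifying such technology for routine national airspace access has proved difficult.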
In recent years, DOD has taken several actions to integrate UAS into the national airspace system. For example, in November 2004, DOD issued an airspace integration plan for unmanned aviation. The plan established timelines and program milestones toward the goal that DOD's UAS would have safe, routine use of the national airspace system by 2010 while maintaining a level of safety equivalent to that of an aircraft with a pilot on board. In 2007, DOD convened a UAS Task Force, with the participation of the Federal Aviation Administration and the Department of Homeland Security, to find solutions to overcome the restrictions that limit the integration of UAS into the national airspace system, among other tasks. According to an official with the task force, DOD is revising the airspace integration plan by October 2010 to include near-, mid-, and long-term actions that DOD can take in concert with other federal agencies to improve the integration of UAS into the national airspace system. In our prior work, however, we reported that although some progress has been made in providing increased access to the national airspace system for small UAS, routine access for all types of UAS may not occur for a decade or more. The Congress has also raised questions about the progress made by DOD and other federal agencies in developing an approach to enable greater access to the national airspace system for the department's UAS. In the National Defense Authorization Act for Fiscal Year 2010, the Congress directed DOD and the Department of Transportation to jointly develop a plan to provide the military services' UAS with expanded national airspace system access. The plan, which is due in April 2010, is to include recommendations concerning policies for the use of the national airspace system and operating procedures that should be implemented by both DOD and the Department of Transportation to accommodate UAS assigned to any state or territory of the United States. Army ground combat units and Air Force UAS units primarily train together at the Army's large training centers and not at home stations. In the United States, the Army has two large training centers: the National Training Center at Fort Irwin, California, and the Joint Readiness Training Center at Fort Polk, Louisiana. Army ground combat units conduct 2-week mission rehearsal exercises at one of these training centers before deploying for ongoing operations. The Air Force, however, has UAS stationed in the United States only near the National Training Center, so Air Force UAS do not support Army training exercises at the Joint Readiness Training Center. At the National Training Center, several factors limit the time Air Force UAS are available to support ground unit training. First, considerable numbers of Air Force UAS personnel and equipment items are supporting overseas contingency operations and are therefore unavailable to participate in training exercises in a joint environment. Air Force officials with the 432nd Wing, the unit that operates the Air Force's Predator and Reaper UAS, told us that all of its unmanned aircraft are deployed to support overseas operations except for those supporting the initial training of UAS personnel or the testing of aircraft. These officials stated that even if additional aircraft were made available, the wing's personnel levels would be insufficient to support additional training events on top of projected operational commitments. Second, Army and Air Force officials told us that when Air Force UAS are at the training center, these aircraft are not always available to support ground unit training because a considerable portion of the UAS flight time is dedicated to accomplishing Air Force crewmember training tasks. Officials told us that the Army and the Air Force have reached an informal agreement to allot about half of the time that an Air Force UAS is flying at the training center to supporting Army ground unit training objectives and the other half to accomplishing Air Force training tasks.
Air Force officials pointed out that although they try to align their crewmember training syllabi with ground unit training objectives at the National Training Center, training new personnel to operate these aircraft is their priority. Third, UAS may not be available during certain hours to support ground unit training, which can occur on a 24-hour schedule. For example, Predator UAS from the California Air National Guard are available to support ground units only during daylight hours. To travel to the training center, these aircraft must pass through segments of national airspace that are not restricted for DOD's use and therefore must rely on a ground-based observer or on chase aircraft to follow them to and from the training center. Because of this reliance on ground or airborne observers, flights to and from the training center must be accomplished during daylight hours. As a result of the limited number of unmanned assets available to support ground unit training at the National Training Center and the Joint Readiness Training Center, Army ground units conducting training exercises have relied on manned aircraft to replicate the capabilities of the Air Force's Predator and Reaper UAS. Officials told us that the use of manned aircraft in this role permits ground units to practice the process of requesting and integrating the capabilities provided by Air Force UAS in joint operations. However, this practice is not optimal because the manned aircraft do not replicate all of the capabilities of the Predator and Reaper aircraft, such as their longer dwell times. At the time of our work, DOD was analyzing the use of manned aircraft for this purpose to assess whether additional UAS are needed to support joint training. Additionally, we found that even when UAS are available to support ground unit training, several factors affect the ability of ground combat units to maximize the use of these assets during training exercises. Officials we spoke with at the National Training Center pointed out that the effective integration of UAS in training exercises, like the integration of other types of joint air assets, depends on the priority that ground units place on developing training objectives that require the participation of joint air assets and on their ability to plan for the use of these assets in the exercise. An Army Forces Command official stated that Army combat brigades often focus their UAS training objectives during exercises on integrating their own Shadow UAS and do not emphasize planning for and employing Air Force UAS. This is consistent with challenges that DOD has found in the integration of other joint air assets with ground unit training at the Army's training centers. A 2009 U.S. Joint Forces Command study found that although the National Training Center provides well-designed training environments for integrating Air Force aviation assets in support of combat brigade training, a lack of adequate pre-exercise planning resulted in aircraft that were not fully integrated with ground combat units in training scenarios. The study recommended that, to improve the integration of joint air assets into ground training, ground units should conduct planning meetings with Air Force organizations early in the training process to identify mutually supporting training objectives and to synchronize air assets to achieve those objectives. DOD officials have indicated that UAS simulators can play an essential role in providing training opportunities for UAS personnel.
Specifically, simulators may allow personnel to repetitively practice tactics and procedures and to meet training proficiency requirements without the limitations of airspace constraints or range availability. UAS are particularly well suited for simulation training because UAS vehicle and sensor operators rely on video feeds to perform operations, and DOD and service officials indicated that current simulators have been used to complete initial training tasks for these personnel. However, DOD's current UAS simulators have limited capabilities to enhance training. For example, a recent study performed for DOD found critical deficiencies in each of the UAS training simulators evaluated. In particular, the study found that the military services lacked simulators capable of supporting training that is intended to build proficiency in the skills required of UAS vehicle and sensor operators and to prepare these personnel to conduct UAS combat missions. During our review, we also found several key deficiencies that limit the ability of Air Force and Army simulators to be used for training, including the inability of some simulators to replicate all UAS procedures and to enable the integration of UAS training with other types of aircraft. For example, Air Force officials told us that the Reaper simulator will initially be fielded without weapons-release capabilities, which would enable UAS personnel to replicate the procedures used to attack targets; this capability will not be available until fiscal year 2011. Similarly, the Army's Shadow Institutional Mission Simulator is not currently capable of replicating system upgrades that are being fielded directly to ongoing combat operations, such as a laser target designator and communications relay equipment. As a result, Shadow unit personnel expressed concern that they would be unable to train with these capabilities prior to their deployment. Air Force and Army simulators are also currently incapable of providing virtual, integrated training opportunities between manned and unmanned aircraft because of interoperability and information security concerns. For example, the Air Force's Predator and Reaper simulators are not interoperable with the Air Force's Distributed Mission Operations Network, which creates a virtual training network for Air Force aviation assets. Officials told us that the Predator and Reaper simulators do not meet Air Force information security requirements for the Distributed Mission Operations Network, which precludes these simulators from participating in virtual integrated training exercises. Similarly, the Army's Shadow Institutional Mission Simulator is not fully interoperable with the Army's manned aviation simulator, the Aviation Combined Arms Tactical Trainer, because of differences in the two simulators' software. According to Army officials, this lack of interoperability detracts from the training value that UAS personnel would receive by performing virtual integrated training with other types of Army aviation assets. Moreover, the Air Force and the Army have not fully developed comprehensive plans that address long-term UAS simulator requirements and associated funding needs. The Air Force, for example, has not finalized plans to address its UAS simulator goals. Some goals established within the Air Force's UAS Flight Plan, such as the development of high-fidelity simulators, are expected to be completed in fiscal year 2010.
However, we found that other goals are not linked with the Air Force's funding plans. For example, while officials recognize the training benefit of connecting the Predator and Reaper simulators to the Distributed Mission Operations Network, the Air Force has not identified funds within its future funding plans for this initiative. The Army has not fully defined the number and type of simulators that its active component forces require to meet the training needs of personnel who operate the Shadow and ERMP UAS, or the resources needed to acquire these systems. Army officials told us that steps to determine simulator needs are ongoing. Specifically, the Army has commissioned the Army Research Institute to complete a simulator requirements study by October 2010, and it has developed an initial UAS simulation strategy. In contrast, the Army National Guard has begun to acquire a simulator to train soldiers who operate the Guard's Shadow UAS, based on the results of a study it completed in 2007 to validate its simulator needs. DOD has identified several challenges that affect service and joint UAS training and has commenced several initiatives intended to address them, but DOD has not developed a comprehensive, results-oriented strategy to prioritize and synchronize these initiatives. A leading practice derived from principles established under the Government Performance and Results Act of 1993 is that, to improve the management of federal agencies, agencies should develop comprehensive strategies to address management challenges that threaten their ability to meet long-term goals. We have previously reported that these types of strategies should contain results-oriented goals, performance measures, and expectations, with clear linkages to organizational, unit, and individual performance goals to promote accountability, and should also be clearly linked to DOD's key resource decisions. To address UAS training challenges, DOD has launched a number of initiatives to identify requirements for UAS access to the national airspace system, to identify available training airspace at current and proposed UAS operating locations, to improve joint training opportunities for ground units and UAS personnel, and to recommend effective training methods and UAS simulator equipment. Table 3 provides a summary of selected DOD organizations and initiatives that are intended to address UAS training challenges. At the time of our review, these initiatives were at varying stages of implementation. For example, the Office of the Secretary of Defense's effort to identify UAS airspace and training range requirements was established in October 2008 by the Under Secretary of Defense for Personnel and Readiness. Officials told us that as of January 2010, the team had completed initial meetings and data collection with military service and combatant command officials. As a result of these initial steps, the team has identified specific actions that DOD should take to improve UAS training and airspace access, which include documenting UAS training requirements, establishing criteria for UAS basing decisions, and identifying supporting training infrastructure needs.
Further, in March 2009, the Joint UAS Center of Excellence initiated an effort to analyze UAS integration at predeployment training centers and, according to officials, has collected data on UAS training at the National Training Center at Fort Irwin, California, and the Marine Corps Air Ground Combat Center at Twentynine Palms, California. We have previously reported that the Office of the Secretary of Defense's UAS Task Force, established in October 2007, is addressing civil airspace integration planning and technology development, among other issues. Although many defense organizations are responsible for implementing initiatives to resolve UAS training challenges and to increase UAS access to the national airspace system, DOD has not developed a comprehensive plan to prioritize and synchronize these initiatives to ensure that compatible goals and outcomes are achieved, with milestones to track progress. Officials with the Office of the Secretary of Defense who are identifying the amount of DOD-managed airspace at planned UAS operating locations told us that one of their first efforts was to determine whether DOD had developed a comprehensive strategy for UAS training, but they found that no such strategy existed. These officials also stated that while they intended to complete efforts to improve UAS training and airspace access within 18 months, they had not established specific milestones to measure progress or identified the resources required to achieve this goal. Absent an integrated, results-oriented plan to address the challenges in a comprehensive manner, DOD will not have a sound basis for prioritizing available resources, and it cannot be assured that the initiatives it has under way will fully address the limitations in Air Force and Army training approaches. Battlefield commanders and units have increased their operational experience with UAS and have used these assets in innovative ways, underscoring the need for complete and updated UAS publications. We identified several factors that create challenges to incorporating new knowledge regarding UAS practices and capabilities into formal publications in a comprehensive and timely way. DOD components have produced several UAS publications, including service doctrine; multiservice and service-specific tactics, techniques, and procedures; and a joint concept of operations, which are intended to provide military personnel with information on the use of these systems, to address interoperability gaps, and to facilitate the coordination of joint military operations. These publications serve as the foundation for training programs and provide the fundamentals that assist military planners and operators in integrating military capabilities into joint operations. For UAS operations, key stakeholders include both manned and unmanned aircraft operators, military planners in joint operations, and ground units that request UAS assets. Because military personnel involved in joint operations may request or employ assets that belong to another service, they need comprehensive information on the capabilities and practices for all of DOD's UAS. However, many of DOD's existing UAS publications have been developed through service-specific processes and focus on a single service's practices and UAS, and they contain limited information on the capabilities that the other services' UAS could provide in joint operations.
This information would assist military personnel at the operational and tactical levels of command in planning for the optimal use of UAS in joint operations and in determining the best fit between available UAS capabilities and mission needs. Furthermore, military personnel who are responsible for the effective integration of UAS with other aviation assets in joint operations, such as air liaison officers and joint aircraft controllers, require knowledge beyond a single service's UAS assets and their tactics, techniques, and procedures. To effectively integrate UAS, these personnel require information that crosses service boundaries, including capabilities, employment considerations, and service employment procedures for all UAS that participate in joint operations. An internal DOD review of existing key UAS publications conducted in 2009 also found that most of these documents are technical operator manuals with limited guidance to assist military planners and ground units in the employment of UAS in joint operations. For example, the review suggests that military planners and personnel who request the use of UAS assets require additional guidance that links UAS performance capabilities to specific mission areas so that there is a clear understanding of which UAS offer the optimal desired effects. Additionally, these stakeholders require comprehensive information on UAS planning factors and the appropriate procedures for UAS operators to assist with mission planning. In addition, many key publications do not contain timely information. DOD officials told us that existing publications are due for revision given the rapidly expanding capabilities of UAS and the utilization of these assets in joint operations. As a result, information on UAS practices and capabilities described in these publications is no longer current. For example, DOD's multiservice tactics, techniques, and procedures manual for the tactical employment of UAS was last updated in August 2006. According to officials with whom we spoke, the document does not contain detailed information on UAS operations in new mission areas, such as communication relay, fires, convoy support, and irregular warfare. Although DOD components have established milestones for revising UAS publications, in some cases these efforts have not been successful. For example, the Air Force canceled conferences scheduled in prior fiscal years to revise the tactics, techniques, and procedures manuals for the Predator UAS because, according to officials, key personnel were supporting overseas operations and were therefore unavailable to participate in the process. As a result, these publications have not been formally updated since 2006, and Air Force officials acknowledged to us that these manuals do not reflect current tactics and techniques. While past attempts to revise these publications have been unsuccessful, the Air Force has scheduled another conference in 2010 to revise the Predator publications. Documenting timely information on the use of UAS in ongoing joint operations is important because commanders and units are increasing their operational experience with these new weapon systems. As a result, military personnel have often developed and used new approaches to employing UAS, which may differ from or build upon the approaches outlined in existing publications.
For example, according to officials, the use of UAS in ongoing operations has contributed to the development of new tactics for the employment of UAS in counterinsurgency operations, information that has not previously been included in DOD's publications. Officials told us that although publications have not been formally updated, some units, such as Air Force UAS squadrons, maintain draft publications that describe the current tactics, techniques, and procedures being used in ongoing operations. However, these officials acknowledged to us that while UAS unit personnel have access to these draft documents, other stakeholders, such as military planners and manned aircraft operators, do not have access to the new information they contain. In the absence of updated publications, DOD components have captured lessons learned and developed ad hoc reference materials that contain updated information on UAS capabilities for use in training exercises and during joint operations. For example, the military services and U.S. Joint Forces Command's Joint UAS Center of Excellence maintain Web sites that post lessons learned from recent UAS operations. In addition, warfighter unit personnel with whom we met provided us with several examples of reference materials that were produced to fill voids in published information on current UAS practices. Although this approach assists with documenting new knowledge during the time between publication updates, the use of lessons learned and reference materials as substitutes for timely publications can create challenges in the long term. Namely, these materials may not be widely distributed within DOD, and the quality of the information they contain has not been validated, since these materials have not been formally vetted through the normal publication development and review process. Several factors create challenges to incorporating new knowledge about UAS practices and capabilities into formal publications in a comprehensive and timely way. Because the military services have, in some cases, rapidly accelerated the deployment of UAS capabilities to support ongoing contingency operations, there has been a corresponding increase in new knowledge on the employment of UAS in joint operations. This creates a challenge for incorporating new knowledge and maintaining current information within UAS publications through the normal publication review process. Military service officials noted that the pace of ongoing operations has also limited the time that key UAS subject matter experts have available to revise publications. As one example, Air Force officials told us that the subject matter experts who are normally responsible for documenting new tactics, techniques, and procedures within the formal manuals for the service's Predator and Reaper UAS are the same personnel who operate these UAS in ongoing operations. Because of the rapid expansion of the number of Air Force UAS supporting operations, the Air Force has not had enough personnel with critical knowledge on the use of these assets available to participate in efforts to update its formal UAS publications. Officials told us that conferences scheduled in previous years to update the Predator UAS publications and to develop initial publications for the Reaper UAS were postponed because key personnel were supporting operations and were therefore unavailable to attend.
In 2008, the Air Force established a new squadron at the Air Force Weapons School to develop tactical experts for the service's UAS. According to officials, personnel within the squadron will play a key role in conferences scheduled in fiscal year 2010 that are intended to revise the tactics, techniques, and procedures manuals for both the Predator and Reaper UAS. We recognize that the pace of operations has strained the availability of key subject matter experts to document timely information in UAS publications, but the military services have not, in some cases, assigned personnel to positions that are responsible for UAS publication development. For example, in 2006, the Air Force established the 561st Joint Tactics Squadron at Nellis Air Force Base, comprising multiservice personnel, with the primary mission of providing timely development and updating of tactics, techniques, and procedures publications. However, the squadron did not have UAS subject matter experts on staff who would be responsible for finalizing UAS publications and documenting procedures for the integration of UAS into combat operations, such as in the areas of airspace management and fire support coordination. Squadron officials told us that as of August 2009, the Air Force had not filled its UAS expert positions because of personnel shortfalls throughout the UAS community, and the Army had not filled its positions despite agreements between Army and Air Force leadership to do so. According to officials, the lack of these experts also limits the squadron's ability to collect and validate emerging UAS tactics and to disseminate them to warfighters who are preparing to deploy for overseas contingency operations. Additionally, while a DOD directive makes the services responsible for participating with one another to develop publications for those UAS that are common among the services, they have not yet done so. To their credit, the Army and the Air Force completed a concept in June 2009 that presents a common vision for the services to provide theater-capable, multirole UAS to support a joint force commander across the entire spectrum of military operations. The Army and the Air Force view this concept as the first step to improving service-centric UAS procedures, and among other tasks, the services intend to update joint doctrine and tactics, techniques, and procedures for multirole UAS capabilities. However, we found that in several instances the military services worked independently to develop publications for common UAS and did not maximize opportunities to share knowledge and work collaboratively. The lack of collaboration during the development of publications can limit the sharing of lessons learned and best practices that have been established through the use of UAS in operations. For example:

- In 2009, the Air Force developed the first tactics, techniques, and procedures manual for the Global Hawk UAS but did not collaborate with the Navy in developing this publication, even though the Navy is using a similar unmanned aircraft for its Broad Area Maritime Surveillance program and has begun operating a version of this UAS to support ongoing operations.

- At the time of our work, the Marine Corps was finalizing its tactical manual for the Shadow UAS, which the service began to deploy in fiscal year 2008.
However, the Marine Corps had limited collaboration with the Army in developing this publication, despite the fact that Army ground units have been operating the Shadow UAS since 2002 and have considerable operational experience employing it.

- We were told that the Air Force did not plan to invite the Army to participate in the process scheduled for 2010 to update the Predator UAS tactics manuals, even though in 2009 the Army began to deploy an initial version of the ERMP UAS, which is similar in design and performance to the Predator.

The lack of comprehensive and timely publications that are written for a range of stakeholders limits the quality of information that is available to serve as the foundation for effective joint training programs and to assist military planners and operators in integrating UAS on the battlefield. Warfighter demand for UAS has fueled a dramatic growth in DOD's programs, and the military services have had success providing assets to military forces supporting ongoing operations. However, the rapid fielding of new systems and the considerable expansion of existing Air Force and Army programs have posed challenges for military planners in fully accounting for UAS support elements, such as developing comprehensive plans that account for the personnel and facilities needed to operate and sustain UAS programs and that ensure the communications infrastructure necessary to control UAS operations. While the Air Force and the Army have implemented various actions to address UAS support elements, these actions in many cases have not been guided by a rigorous analysis of the requirements to support UAS programs in the long term or by the development of plans that identify milestones for completing actions and synchronize the resources needed for implementation. In the absence of plans that fully account for support elements and related costs, DOD cannot be reasonably assured that Air Force and Army approaches will provide the level of support necessary for current and projected increases in UAS inventories. Moreover, the lack of comprehensive plans limits the ability of decision makers to evaluate the total resources needed to support UAS programs and to make informed future investment decisions. Furthermore, the challenges regarding UAS training may be difficult to resolve unless DOD develops a comprehensive and integrated strategy to prioritize and synchronize the initiatives it has under way to address limitations in Air Force and Army training. Lastly, unless the services assign personnel and take steps to coordinate efforts to update and develop UAS publications, the information in UAS publications will not be comprehensive and will not include new knowledge on UAS practices and capabilities. This has the potential to limit the quality of information that is available to serve as the foundation for effective joint training programs and to assist military planners and operators in integrating UAS on the battlefield. We recommend that the Secretary of Defense take the following five actions:

- To ensure that UAS inventories are fully supported in the long term, we recommend that the Secretary of Defense direct the Secretary of the Air Force and the Secretary of the Army, in coordination with the Under Secretary of Defense for Acquisition, Technology and Logistics, to conduct comprehensive planning, as part of the decision-making process to field new systems or to further expand existing capabilities, that accounts for the factors necessary to operate and sustain these programs.
At a minimum, this planning should be based on a rigorous analysis of the personnel and facilities needed to operate and sustain UAS and should include the development of detailed action plans that identify milestones for tracking progress and synchronize funding and personnel.

- To ensure that the Air Force can address the near-term risk of disruption to the communications infrastructure network used to control UAS missions, we recommend that the Secretary of Defense direct the Secretary of the Air Force to establish a milestone for finalizing a near-term plan to provide for the continuity of UAS operations that can be rapidly implemented in the event of a disruption and that is based on a detailed analysis of available options.

- To ensure that DOD can comprehensively resolve challenges that affect the ability of the Air Force and the Army to train personnel for UAS operations, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the military services and other organizations as appropriate, to develop a results-oriented training strategy that provides detailed information on the steps that DOD will take to identify and address the effects of competition and airspace restrictions on UAS training, increase the opportunities that Army ground units and Air Force UAS personnel have to train together in a joint environment, maximize the use of available assets in training exercises, and upgrade UAS simulation capabilities to enhance training. At a minimum, the strategy should describe overarching goals, the priority of and interrelationships among initiatives, progress made to date, milestones for achieving goals, and the resources required to accomplish the strategy's goals.

- To help ensure that all stakeholders, including unmanned aircraft operators, military planners, and ground units, have comprehensive and timely information on UAS practices and capabilities, we recommend that the Secretary of Defense direct the Secretary of the Air Force and the Secretary of the Army to assign personnel to update key UAS publications.

- We also recommend that the Secretary of Defense direct the Secretary of the Air Force, the Secretary of the Army, and the Secretary of the Navy to take steps to coordinate the efforts to develop publications for those UAS where there is commonality among the services.

In written comments on a draft of this report, DOD concurred with four recommendations and partially concurred with one. DOD's comments are reprinted in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD concurred with our recommendation to direct the Secretary of the Air Force and the Secretary of the Army, in coordination with the Under Secretary of Defense for Acquisition, Technology and Logistics, to conduct comprehensive planning, as part of the decision-making process to field new systems or to further expand existing capabilities, that accounts for the factors necessary to operate and sustain these programs and that, at a minimum, is based on a rigorous analysis of the personnel and facilities needed to operate and sustain UAS and includes the development of detailed action plans that identify milestones for tracking progress and synchronize funding and personnel.
DOD stated that the department conducts ongoing analysis to determine personnel requirements, necessary capabilities for emerging and maturing missions, basing, and training requirements as part of the military services' processes for fielding new systems and expanding existing capabilities, and that this planning is based on internal studies as well as rigorous computer modeling, which provides detailed projections of personnel requirements based on anticipated growth and training capacity. DOD further stated that these plans take into account factors that are necessary to operate and sustain UAS, which are applied in order to synchronize funding and personnel. DOD also noted that some planning factors are variable over time and are regularly reassessed in order to validate plans or drive necessary changes. As discussed in the report, the Air Force and the Army are conducting analyses of factors, such as personnel and facilities, that are required to operate and sustain current and projected UAS force levels. However, although the services are requesting funds, they have not finalized ongoing analyses or fully developed plans that specify the actions and resources required to supply the personnel and facilities that are needed to support these inventories in the long term. Therefore, we reiterate our recommendation: as DOD makes decisions to further expand UAS inventories, it needs to ensure that the Air Force and the Army conduct extensive planning, including the necessary analyses of these factors, so that decision makers have complete information on total program costs and assurances that weapon system programs can be fully supported. DOD concurred with our recommendation to direct the Secretary of the Air Force to establish a milestone for finalizing a near-term plan to provide for the continuity of operations that can be rapidly implemented in the event of a disruption to the communications infrastructure network used to control UAS missions and that is based on a detailed analysis of available options. DOD stated that the Air Force is conducting a site selection process to identify a second satellite relay location and that, until the alternate site has been selected and funding secured, the Air Force has mitigated the risk of communications disruption with a plan for acquiring and positioning backup equipment for the existing satellite relay site. We state in the report that, at the time of our review, the Air Force had not conducted a detailed analysis of available options, such as repositioning backup equipment, to determine the extent to which they would provide for the continuity of UAS operations, and that it had not established a specific milestone for formalizing a plan that could be implemented quickly in the event of a disruption. We are encouraged by DOD's statement that the Air Force has since developed a continuity plan. Although we did not have the opportunity to review the plan's contents, we would expect it to be based on a detailed analysis of the equipment that is required to provide a redundant communications capability at the existing satellite relay site and to include specific milestones for acquiring and positioning new equipment in the near term.
DOD concurred with our recommendation to direct the Under Secretary of Defense for Personnel and Readiness, in coordination with the military services and other organizations as appropriate, to develop a results-oriented training strategy that provides detailed information on the steps that DOD will take to identify and address the effects of competition and airspace restrictions on UAS training; increase the opportunities that Army ground units and Air Force UAS personnel have to train together in a joint environment; maximize the use of available assets in training exercises; and upgrade UAS simulation capabilities to enhance training. This strategy should, at a minimum, describe overarching goals, the priority and interrelationships among initiatives, progress made to date, milestones for achieving goals, and the resources required to accomplish the strategy's goals. DOD stated that the office of the Under Secretary of Defense for Personnel and Readiness has work under way to address this recommendation and that organizations, including the offices of the Under Secretary of Defense for Personnel and Readiness and the Under Secretary of Defense for Acquisition, Technology and Logistics, the Joint UAS Center of Excellence, and the military services, are participating on a team to facilitate the identification of UAS training requirements and the development of a concept of operations for UAS training. DOD further stated that upon completion of the concept, the department will develop and implement a mission readiness road map and investment strategy. DOD partially concurred with our recommendation to direct the Secretary of the Air Force and the Secretary of the Army to assign personnel to update key UAS publications. DOD stated that military personnel are updating regulations that govern training, certification, and operational guidance for UAS personnel. DOD also stated that the military services are active participants in the process for updating key joint guidance, such as joint publications and other tactics documents, and that the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics is initiating development of the third edition of the Unmanned Systems Roadmap and the Joint UAS Center of Excellence is writing the third version of the Joint Concept of Operations for Unmanned Aircraft Systems. DOD further stated that guidance on UAS tactics, techniques, and procedures should be incorporated into joint functional guidance rather than into updates of documents that are dedicated only to UAS tactics, techniques, and procedures. We state in our report that DOD components, such as the military services and other defense organizations, have produced several publications, including joint and service doctrinal publications, that describe processes to plan for and integrate UAS into combat operations. We also state in the report that DOD components have produced UAS-specific publications, such as multiservice and platform-specific tactics, techniques, and procedures manuals. However, we identified many cases in which DOD's UAS publications did not incorporate updated information needed by military personnel to understand current practices and capabilities, and we found that the military services have not, in some instances, assigned personnel to positions that are responsible for UAS publication development.
This has the potential to limit the quality of information that is available to serve as the foundation for effective joint training programs and to assist military planners and operators in integrating UAS on the battlefield. Therefore, we continue to believe that our recommendation has merit. DOD concurred with our recommendation to direct the Secretary of the Air Force, the Secretary of the Army, and the Secretary of the Navy to take steps to coordinate the efforts to develop publications for those UAS where there is commonality among the services. DOD stated that coordination to develop publications where commonality exists between UAS is occurring. For example, DOD stated that the Army and Air Force Theater-Capable Unmanned Aircraft Enabling Concept was approved in February 2009. According to DOD, this document outlines how the two services will increase the interoperability of similar systems, and as a result, planning is under way to identify key publications and incorporate joint concepts. As we note in our report, to the services' credit, the Air Force and Army enabling concept can serve to improve service-centric UAS procedures. However, we found that in other instances the military services did not maximize opportunities to share knowledge and work collaboratively in the development of UAS publications where there is commonality among the services, which can limit the sharing of lessons learned and best practices that have been established through the use of UAS in operations. Therefore, we reiterate the need for the military services to coordinate the efforts to develop publications for those UAS where there is commonality among the services. We are sending copies of this report to the Secretary of Defense, the Secretary of the Air Force, the Secretary of the Army, the Secretary of the Navy, and the Commandant of the Marine Corps. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or pickups@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To address our objectives, we met with officials from the Office of the Secretary of Defense; the Joint Staff; several unified combatant commands; the Multi-National Force-Iraq; and the Departments of the Air Force, the Army, and the Navy who represent headquarters organizations and tactical units. To determine the extent to which plans were in place to account for the personnel, facilities, and communications infrastructure to support Air Force and Army unmanned aircraft systems (UAS) inventories, we focused primarily on Air Force and Army UAS programs that support ongoing operations. Excluded from this review were programs for small unmanned aircraft. While the military services have acquired more than 6,200 of these aircraft, small UAS generally do not have substantial support requirements.
We examined the military services' UAS program and funding plans, Department of Defense (DOD) policies governing the requirements definition and acquisition processes, and data generated by the Joint Capabilities Integration and Development System—the department's principal process for identifying, assessing, and prioritizing joint military capabilities—and the process used by acquisition personnel to document a weapon system's life cycle costs (including support costs) to determine whether the associated program is affordable. We analyzed UAS funding requests included in the President's budget requests for fiscal years 2006 through 2010. We compiled data from the Departments of the Air Force, the Army, and the Navy and the DOD-wide procurement; research, development, test and evaluation; military construction; and operation and maintenance budget justification books. We reviewed documents that detail UAS operational concepts, and we interviewed officials with the Office of the Secretary of Defense and the military services to determine whether UAS plans account for the services' personnel, facilities, and communications infrastructure needs for these concepts and to determine any actions taken to update UAS plans to more accurately reflect the costs of further expanding UAS programs. We considered all of the information collected on these planning efforts in light of knowledge gained by the services from operational experiences with the use of UAS in ongoing contingency operations. In examining UAS planning documents, we consulted the Office of Management and Budget's Capital Programming Guide and our Cost Estimating and Assessment Guide for instruction on developing cost estimates and plans to manage capital investments. In determining the extent to which DOD addressed challenges that affect the ability of the Air Force and the Army to train personnel for UAS operations, we visited select military installations and the Army's National Training Center at Fort Irwin, California, and spoke with knowledgeable DOD and military service officials to determine the specific challenges that the Air Force and the Army faced when training service personnel to perform UAS missions in joint operations. Specifically, we spoke with Air Force and Army personnel in UAS units in the United States and in Iraq to determine the training, both live-fly and simulator-based, that they were able to perform prior to operating UAS in joint operations. We discussed the challenges, if any, that prevented them from performing required training tasks. In identifying Air Force and Army unit personnel to speak with, we selected a nonprobability sample of units that were preparing to deploy for contingency operations or had redeployed from these operations from May 2009 through September 2009. We examined documents and spoke with DOD and military service officials to identify initiatives that have begun to address UAS training challenges. We assessed DOD's efforts to overcome these challenges in light of leading practices derived from principles established under the Government Performance and Results Act of 1993, which are intended to assist federal agencies in addressing management challenges that threaten their ability to meet long-term goals, and key elements of an overarching organizational framework, such as developing results-oriented strategies, as described in our prior work.
To determine the extent to which DOD updated its existing publications that articulate doctrine and tactics, techniques, and procedures to reflect the knowledge gained from using UAS in ongoing operations, we examined joint, multiservice, and service-specific UAS doctrine; tactics, techniques, and procedures; and concept of operations publications. We interviewed DOD and military service officials to determine which organizational entities require information on UAS capabilities and practices. We then analyzed the publications to determine the level of information provided to the various organizations and personnel that are responsible for planning for and employing UAS in joint operations. Finally, we interviewed DOD and military service officials about the processes used to develop and update publications; any challenges that affect their ability to update key publications; and how new knowledge regarding UAS operations, such as lessons learned and best practices, is captured. We analyzed these processes to determine the level of coordination among the military services in developing UAS publications and the frequency with which documents have been revised. We conducted this performance audit from October 2008 through March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Patricia Lentini, Assistant Director; Meghan Cameron; Mae Jones; Susan Langley; Ashley Lipton; Greg Marchand; Brian Mateja; Jason Pogacnik; Mike Shaughnessy; and Matthew Ullengren made significant contributions to this report.
The Department of Defense (DOD) requested about $6.1 billion in fiscal year 2010 for new unmanned aircraft systems (UAS) and for expanded capabilities in existing ones. To support ongoing operations, the Air Force and Army have acquired a greater number of larger systems. GAO was asked to determine the extent to which (1) plans were in place to account for the personnel, facilities, and communications infrastructure needed to support Air Force and Army UAS inventories; (2) DOD addressed challenges that affect the ability of the Air Force and the Army to train personnel for UAS operations; and (3) DOD updated its publications that articulate doctrine and tactics, techniques, and procedures to reflect the knowledge gained from using UAS in ongoing operations. Focusing on UAS programs supporting ongoing operations, GAO reviewed the services' program and funding plans in light of DOD's requirements definition and acquisition policy; interviewed UAS personnel in the United States and in Iraq about training experiences; and reviewed joint, multiservice, and service-specific publications. DOD continues to increase UAS inventories, but in some cases, the Air Force and the Army lack robust plans that account for the personnel, facilities, and some of the communications infrastructure needed to support them. Regarding personnel, the Air Force and the Army have identified limitations in their approaches to provide personnel to meet current and projected UAS force levels, but they have not yet fully developed plans to supply needed personnel. Further, although DOD has recently requested funding and plans to request additional funds, the Air Force and the Army have not completed analyses to specify the number and type of facilities needed to support UAS training and operations. Having identified a vulnerability in the communications infrastructure network used to control UAS missions, the Air Force is taking steps to mitigate the risk posed by a natural or man-made disruption to the network but has not formalized a near-term plan to provide for the continuity of UAS operations in the event of a disruption. While DOD guidance encourages planning for the factors needed to operate and sustain a weapon system program in the long term, several circumstances have contributed to a lag in planning efforts, such as the rapid fielding of new systems and the expansion of existing ones. In the absence of comprehensive planning, DOD does not have reasonable assurance that Air Force and Army approaches will support current and projected UAS inventories. The lack of comprehensive plans also limits the ability of decision makers to make informed funding choices. DOD has not developed a results-oriented strategy to resolve challenges that affect the ability of the Air Force and the Army to train personnel for UAS operations. GAO found that the limited amount of DOD-managed airspace adversely affected the amount of training that personnel conducted to prepare for deployments. As UAS are fielded in greater numbers, DOD will require access to more airspace for training; for example, DOD estimated that, based on planned UAS inventories in fiscal year 2013, the military services will require more than 1 million flight hours to train UAS personnel within the United States. Further, Air Force UAS personnel and Army ground units have limited opportunities to train together in a joint environment, and they have not maximized the use of available assets during training. Current UAS simulators also have limited capabilities to enhance training.
DOD has commenced initiatives to address training challenges, but it has not developed a results-oriented strategy to prioritize and synchronize these efforts. Absent a strategy, DOD will not have a sound basis for prioritizing resources, and it cannot be assured that the initiatives will address limitations in Air Force and Army training approaches. In many cases, DOD's UAS publications articulating doctrine and tactics, techniques, and procedures did not include updated information needed by manned and unmanned aircraft operators, military planners, and ground units to understand current practices and capabilities. Such information can serve as the foundation for effective joint training programs and can assist military personnel in integrating UAS on the battlefield.
Last June, I highlighted the importance of the FBI's success in transforming itself, noting several basic aspects of a successful transformation as well as the need for broader government transformation. Today, the importance of the FBI's transformation has not diminished. The FBI continues to stand at the forefront of our domestic intelligence efforts to defend the public from the threat of terrorism, while still maintaining responsibility for investigations of other threats to our public safety, such as those from drugs, violent crime, public corruption, and crimes against children. As I pointed out last June, any changes at the FBI must be part of, and consistent with, broader governmentwide transformation efforts that are taking place, especially those resulting from the establishment of the Department of Homeland Security and in connection with the intelligence community. To effectively meet the challenges of the post-September 11, 2001, environment, the FBI needs to consider employing key practices that have consistently been found at the center of successful transformation efforts. These key practices are to ensure that top leadership drives the transformation; establish a coherent mission and integrated strategic goals; focus on a key set of principles and priorities; set implementation goals and a timeline; dedicate an implementation team to manage the process; use a performance management system to define responsibility; establish a communication strategy; involve employees; and build a world-class organization that continually seeks to implement best practices. Strategic human capital management is the centerpiece of any change management initiative, including any agency transformation effort. Thus far, we are encouraged by the progress that the FBI has made in some areas in the year since the announcement of phase II of its reorganization. Specifically, the commitment of Director Mueller and senior-level leadership to the FBI's reorganization; the FBI's communication of priorities; and the FBI's efforts to realign its activities, processes, and resources warrant recognition. However, a comprehensive transformation plan with key milestones and assessment points to guide its overall transformation effort is still needed. In addition, as I testified last June, the FBI can and should reinforce its transformation efforts through its performance management system by aligning unit, team, and individual employee performance expectations with planned agency goals and objectives. High-performing organizations create a clear linkage—a "line of sight"—between individual performance and organizational success and thus transform their cultures to be more results-oriented, customer-focused, and collaborative in nature. This alignment will help FBI employees see the connection between their daily activities and the Bureau's success. There is already some indication that FBI agents see how their work relates to agency priorities: 85 percent of the special agents and 31 of the 34 analysts who completed our questionnaire in the 14 FBI field offices we visited generally or strongly agreed that their daily activities have been consistent with the FBI's top priorities. Coupled with this alignment is the need for a performance management system that makes meaningful distinctions in performance. The FBI currently uses a pass/fail system to rate its employees' performance.
This type of system does not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. As a result, the FBI needs to review and revise its performance management system to be in line with revisions to its strategic plan, including desired outcomes, core values, critical individual competencies, and agency transformational objectives. Although a strategic plan is vital to an organization's transformation effort, the FBI has not completed the update to its strategic plan. At the same time, it has made some progress in its strategic planning efforts. Specifically, the FBI's Office of Strategic Planning has developed a framework for a revised strategic plan. The operational divisions have made some progress in completing their program plans, which, together with the FBI's top 10 priorities, are the Bureau's building blocks for completing a strategic plan. Furthermore, information about the progress of the strategic planning process seems to have been disseminated. Both field office managers and field staff we spoke with generally reported being afforded the opportunity to provide input. Director Mueller, through leadership and management conferences, electronic communications, visits to field offices, messages on the FBI's intranet, public statements, and press releases, has communicated the FBI's top priorities. Additionally, the FBI, through a strategic planning reengineering project, is developing a revised strategic management process to better align the planning and budget processes with strategic priorities in the future. The Office of Strategic Planning has developed a framework for the revised strategic plan, and the operational divisions were provided guidance to develop their program plans. According to the FBI, the Counterintelligence and Cyber program plans have been completed, presented to FBI executive management, and approved. The Office of Strategic Planning is in the process of incorporating them into the revised strategic plan. As of June 2003, the Counterterrorism and Criminal Investigative Divisions' program plans were in the final stages of development. FBI officials indicated that the implementation of two staff reprogrammings and delays in the appropriation of the FBI's fiscal year 2003 budget, as well as the war in Iraq, delayed the completion of these program plans. FBI officials estimate that a new strategic plan should be completed by the start of fiscal year 2004. It is unclear, however, whether the FBI will achieve this milestone, and because the plan has not been completed, we cannot comment on the quality of its contents. As noted earlier, employee involvement in strategic planning, and in transformation generally, is a key practice of a successful agency as it transforms. FBI executive management seems to have recognized this. Based on our discussions with program officials in FBI headquarters and visits to FBI field offices, field management in the 14 field offices we visited reported that they had been afforded opportunities to provide input into the FBI's strategic planning process. In addition, 68 percent of the special agents and 24 of the 34 analysts who completed our questionnaire reported that they had been afforded the opportunity to provide input to FBI management regarding FBI strategies, goals, and priorities by, among other things, participating in focus groups or meetings and assisting in the development of the field offices' annual reports.
FBI managers in the field offices we visited and 87 percent of the special agents and 31 of the 34 analysts who completed our questionnaire indicated that FBI management had kept them informed of the FBI's progress in revising its strategic plan to reflect changed priorities. FBI management also seems to have been effective in communicating the agency's top three priorities (i.e., counterterrorism, counterintelligence, and cyber crime investigations) to the staff. In addition to the awareness of management staff in FBI headquarters and field offices, nearly all of the special agents and all of the analysts who answered our questionnaire indicated that FBI executive management (i.e., Director Mueller and Deputy Director Gebhardt) had communicated the FBI's priorities to their field offices. Management and most of the agents we interviewed in the field were aware of the FBI's top three priorities. Further, over 90 percent of special agents and 28 of the 34 analysts who completed our questionnaire generally or strongly agreed that their field office had made progress in realigning its goals to be consistent with the FBI's transformation efforts and new priorities. Completion of a revised strategic plan is essential to guide decision making in the FBI's transformation. The Director has set the priorities and they have been communicated; however, it is vital that the FBI place a priority on the completion of a new and formal strategic plan, as it is a key first step in transformation. In my statement last June, I highlighted the importance of the development of a strategic human capital plan to the FBI's transformation efforts. A strategic human capital plan should flow from the strategic plan and guide an agency to align its workforce needs, goals, and objectives with its mission-critical functions. Human capital planning should include both integrating human capital approaches into the development of organizational plans and aligning human capital programs with program goals. The FBI has not completed a strategic human capital plan, but it has taken some steps to address short-term human capital needs related to implementing its changed priorities and has made progress, through a variety of initiatives, in beginning to link human capital needs with the FBI's strategic needs. The FBI should continue to build a long-term strategic human capital approach, including maximizing the use of human capital flexibilities, to identify future critical needs and to attract, retain, and develop individuals with these skills. The FBI has taken actions to address human capital concerns related to implementing its changed priorities. These include (1) initiating several reengineering projects on human capital issues, such as succession planning, enhancing the FBI's communication strategy, and streamlining its hiring process; (2) initiating the staffing of the Office of Intelligence, a key component of building the FBI's intelligence mission; (3) realigning agents and support staff to counterterrorism, counterintelligence, and cyber crime investigations to address priority areas; and (4) implementing plans to enhance recruitment and hiring for critical skill needs and to train staff shifted to priority areas to address the change in the FBI's priorities. This statement further addresses the FBI's progress in realigning staff resources to priority areas and its efforts to enhance recruitment, hiring, and training of personnel in the sections that follow.
Additional efforts under way within the FBI to address future human capital needs include, among others: Administrative Services Division actions to recruit personnel with critical skills, as identified by the Counterterrorism, Counterintelligence, and Cyber Divisions, to support their priority missions; steps to identify key staff competencies and establish comprehensive career programs for all occupational groups in the FBI, along with plans to link these competencies to training and developmental needs; the creation, in support of the FBI's intelligence mission, of two new intelligence analyst positions, the reclassification of a third position, and plans to establish career paths for these positions; and reengineering of the Training Division's mission and operations to meet the present and future training needs of the FBI workforce. In building a long-term approach, the FBI may want to focus on identified aspects of successful human capital management systems, such as utilizing existing human capital flexibilities. While the FBI has made use of several human capital flexibilities (including work-life programs, such as alternative work schedules and transit subsidies; monetary recruitment and retention incentives, such as recruitment bonuses and retention allowances; and incentive awards for notable job performance and contributions, such as cash and time-off awards), it needs to fully maximize the use of available human capital flexibilities in recruiting agents with critical skills, intelligence analysts, and other critically needed staff. The use of such flexibilities should be based on a data-driven assessment of the FBI's specific needs and capabilities. Such an analysis should be outlined in the FBI's strategic human capital plan. After fully maximizing the use of its recruiting flexibilities, if they prove to be inadequate in helping the FBI meet its recruiting and retention goals, the FBI may then want to seek additional legislative authority. Finally, as the FBI has yet to hire a Human Capital Officer to oversee these efforts, it is critical that this individual have the appropriate expertise in strategic human capital management, as well as the necessary resources to continue to develop and implement long-term strategic human capital initiatives; options may include enhancing existing planning resources or contracting out these functions. A key element of the FBI's reorganization and successful transformation is the realignment of resources to better ensure focus on the highest priorities. Since September 11, 2001, the FBI has permanently realigned some of its field agent workforce from criminal investigative programs to work on counterterrorism, counterintelligence, and cyber programs. Additionally, over three-fourths of the new special agent positions in the FBI's fiscal year 2004 budget request are for the priority areas. However, despite these efforts, the FBI continues to face major challenges in critical staffing areas. Some of the more noteworthy challenges include (1) a continuing need to utilize special agent and staff resources from other criminal investigative programs to address counterterrorism workload, (2) a lack of adequate analytical and technical assistance, and (3) a lack of adequate administrative and clerical support personnel. As figure 1 shows, about 26 percent of the FBI's field agent positions were allocated to counterterrorism, counterintelligence, and cyber crime programs prior to the FBI's change in priorities.
Since that time, as a result of the staff reprogramming efforts and funding for additional special agent positions received through various appropriations, the FBI staffing levels allocated to the counterterrorism, counterintelligence, and cyber program areas have increased to about 36 percent. The FBI's staff reprogramming plans, carried out over the last 12 months, have permanently shifted 674 field agent positions (about 7.5 percent of the 8,881 field agent positions existing before the change to new priorities) from the drug, white-collar, and violent crime program areas to counterterrorism and counterintelligence. In addition, the FBI established the Cyber program, which consolidated existing cyber resources. Despite the reprogramming of agent positions in fiscal year 2002 to counterterrorism and the additional agent positions received through various supplemental appropriations since September 11, 2001, agents from other program areas have also been continuously redirected to work temporarily on counterterrorism. This demonstrates a commitment on the part of the FBI to staff this priority area. The FBI has certain managerial flexibilities to temporarily redirect staff resources to address pressing needs and threats. As figure 2 shows, the average number of field agent workyears charged to investigating counterterrorism-related matters has continually outpaced the number of agent positions allocated to field offices for counterterrorism since September 11, 2001. The FBI's current policy is that no counterterrorism leads will go unaddressed, resulting in a need for these shifts in resources. This policy results in a substantial commitment of resources that may have to be reassessed in the future. As the FBI gains more experience and continues assessing risk in a post-September 11, 2001, environment, it will gain more expertise in deciding which matters warrant investigation and the investment of staff resources. To better manage the investment of its staff resources in the future, the FBI should systematically analyze the nature of leads and the output of its efforts. This will enable the FBI to better determine how to invest staff resources based on value, risk, and overall resource considerations in the future. Use of field agent staff resources for three of the four other programs we included in our review (i.e., drug enforcement, violent crime, and white-collar crime) was below allocated staffing levels. Appendix I provides comparative analyses of field agent positions allocated to field offices for these other criminal programs and the average number of field agent workyears charged to investigating these matters. Last year, we testified that neither the FBI nor we were in a position to determine the right amount of staff resources needed to address the priority areas. Since that time, the FBI has completed a counterterrorism threat assessment and has had some experience in staffing priority work in a post-September 11, 2001, environment. This, along with an analysis of the nature of leads and the output from them, may put the Bureau in a better position to assess the actual levels of need in the counterterrorism, counterintelligence, and cyber programs. The level of effort in counterterrorism is further reflected in the number of counterterrorism matters that have been opened following September 11, 2001. As figure 3 shows, the number of newly opened counterterrorism matters has increased substantially.
Previous internal and external studies of the FBI and our recent visits to 14 FBI field offices have identified a lack of adequate support personnel. Among the critical support personnel needs identified were intelligence analysts, foreign language specialists, computer engineering and technical specialists, and administrative and clerical support. Based on information obtained during our site visits to FBI field offices and discussions with officials in FBI headquarters, there continue to be challenges associated with meeting resource needs in these areas. During our site visits, both management officials and field agents indicated that inadequate numbers of intelligence analysts and foreign language specialists resulted in delays to investigative work. Specifically, 70 percent of the agents and 29 of the 34 analysts who completed our questionnaire responded that the staffing level of intelligence analysts was less than adequate given their office's current workload and priorities. As a result, many agents said they spend time performing their own intelligence analysis work. FBI officials also expressed a need for more foreign language specialists, largely due to an increase in translation needs, for instance, translating documents and electronic surveillance recordings. Fifty-four percent of the agents and 17 of the 32 analysts who completed our questionnaire indicated that the staffing level of foreign language specialists was less than adequate given their office's current workload and priorities. Also, agents expressed a need for additional computer and technical specialists. Fifty-three percent of the agents and 21 of the 34 analysts who completed our questionnaire indicated that the staffing level of computer and technical support was less than adequate given their office's current workload and priorities. Agents reported that they sometimes have to wait several days to get computer hardware support when needed. Additionally, managers and agents in the field offices said that their field office lacked adequate access to staff who could assist in the search and seizure of computer evidence as well as provide forensic examination of computers. Lastly, FBI management and special agents with whom we met indicated that the staffing level of administrative and clerical support personnel was inadequate and that this adversely affected the efficiency of their investigative activities. Over 60 percent of the agents and 18 of the 34 analysts who completed our questionnaire indicated that the level was less than adequate given their office's current workload and priorities. According to FBI field office officials, it was not uncommon for management, agents, and analysts to take on many administrative support functions, such as answering telephones and entering data, in addition to their other responsibilities. Last year at this time, the FBI announced that, in keeping with its new priorities, it would move 400 field agent positions from its drug program to counterterrorism. Indeed, the FBI has transferred even more agent positions than it originally announced and has augmented those agents with the short-term assignment of additional field agents from drug and other law enforcement areas to work on counterterrorism. As would be expected, the number of newly opened drug cases has fallen in relation to the decline in the number of field agent positions allocated to drug enforcement.
Additionally, according to the FBI and DOJ's recent domestic drug enforcement strategy, the FBI's, as well as DEA's, drug enforcement efforts will primarily focus on targeting the most significant high-level drug trafficking organizations, leaving some lower-level drug enforcement activities (e.g., street sweeps) to state and local entities. It is unclear to what extent state and local law enforcement agencies can sustain or enhance their drug enforcement efforts, given that they have also added homeland security responsibilities and face their own fiscal challenges. Since September 11, 2001, about 40 percent of the positions allocated to FBI field offices' drug program have been reallocated to the counterterrorism and counterintelligence priority areas. As figure 4 shows, just prior to September 11, 2001, about two-thirds (or 890) of the 1,378 special agent positions allocated to FBI field offices for drug program matters were direct-funded. The remaining one-third (or 488) of the special agent positions was funded by the Organized Crime and Drug Enforcement Task Force (OCDETF) program. As of the second quarter of fiscal year 2003, the number of direct-funded positions allocated to FBI field offices for the drug program had decreased over 60 percent, going from 890 to 335. OCDETF-funded agent positions, which have remained constant, now account for about 60 percent of the FBI field offices' drug program staff resources. Consistent with Director Mueller's commitment, the FBI has not reduced the number of agents in the OCDETF program. While this reduction represents a substantial decline in the number of field agent positions allocated to drug work, the reduction in drug enforcement workyears was actually larger than these figures reflect. Specifically, as needs arose for additional agents to work counterterrorism leads, field agents assigned to drug program squads were temporarily reassigned to the priority work. As figure 5 shows, at the extreme, during the first quarter of fiscal year 2002 (just after the events of September 11, 2001), while 1,378 special agent positions were allocated to drug work, only about half of these staff resources worked in the drug program area. During fiscal year 2003, the allocated number of drug agent positions and the average number of field agent workyears charged to drug matters began to converge to the new targeted levels. The reduction in drug enforcement resources has reduced the number of drug squads in FBI field offices, according to FBI officials. The number of FBI agents supporting High-Intensity Drug Trafficking Area (HIDTA) program initiatives has also been reduced, according to FBI officials. The significant reduction in agent strength in the drug enforcement area may be an important factor in the smaller number of drug matters opened in the first two quarters of fiscal year 2003. As figure 6 shows, the number of newly opened drug matters went from 1,825 in fiscal year 2000 to 944 in fiscal year 2002 and to 310 in the first half of fiscal year 2003, indicating a rate for the entire year that may be well below that of previous years. We want to make clear that we are in no way intending to fault the FBI for the reassignment of agents from drug enforcement to higher-priority areas. Indeed, these moves are directly in line with its priorities and in keeping with the paramount need to prevent terrorism.
The DEA, the lead federal drug enforcement agency, has taken a slightly larger role in domestic drug enforcement by increasing its participation in interagency drug enforcement activities. For example, in fiscal year 2002, DEA began shifting 34 agent positions from headquarters and various field divisions to support the southwest border—a region that has experienced a significant reduction in FBI special agent positions. During the same period, DEA also increased its authorized staffing level for HIDTA programs by 13 special agent positions. For fiscal year 2003, DEA received a budget enhancement that will fund an additional 216 special agent positions to, among other things, strengthen its financial investigations and increase its participation in OCDETF. For fiscal year 2004, DEA has requested an enhancement to fund 233 additional agent positions, plus the reassignment of 293 special agent positions from its Mobile Enforcement Team (MET) and Regional Enforcement Team (RET) programs to investigate priority drug trafficking organizations. Overall, in terms of combined DEA and FBI drug agent positions, DEA enhancements (received and planned) will fill some, but not all, of the drug program personnel gap left by the reassignment of FBI drug program agents to higher-priority work. According to the April 2003 Department of Justice Domestic Drug Enforcement Strategy, DOJ's drug enforcement effort, consistent with the OCDETF initiative, will center on investigations of the most significant international, national, regional, and local drug trafficking organizations. Specifically, it focuses drug enforcement efforts on disrupting or dismantling priority targets on its Consolidated Priority Organization Target list. The proposed movement of resources out of DEA's MET and RET programs is consistent with this new strategy. In July 2001, we issued a report concerning the management of the MET program. At that time, we reported that, according to DEA, the MET program was needed because (1) state and local police agencies did not have sufficient resources to effectively enforce the drug laws and (2) local law enforcement personnel were known to local drug users and sellers, making undercover drug buys and penetration of local distribution rings difficult and dangerous. DEA reported about 16,000 arrests as a result of MET deployments from the program's inception in fiscal year 1995 through the third quarter of fiscal year 2003. DEA also noted that about a quarter of its MET investigations involved either drug traffickers operating on a broader scale than the local jurisdiction of the deployment or international traffickers. The overall reduction in combined FBI and DEA staffing of drug enforcement positions and the change in strategy remove some drug enforcement assistance from local jurisdictions at a time when many, if not most, state and local budgets are under intense pressure. While this may in fact be the best use of scarce resources, drug crime data of many kinds should be monitored closely to assess the impact of these changes and ensure that we are using our resources to the best advantage. The FBI has made some progress in developing and implementing its recruitment strategies and in its efforts to hire special agents and support staff with critical skills. While fiscal year 2002 special agent hiring goals were met in terms of numbers, the FBI fell short of the desired critical skills mix. For support staff, hiring for that year was far lower than targeted.
For fiscal year 2003, as of May, the outlook is better for both special agents and some support staff skill areas. For special agents, only in the foreign language skills area has hiring lagged behind the pace needed to meet the goal. Support staff hiring seems on track to meet many, but not all, of the critical skill targets. As previously noted, in order to recruit staff to align with its needs and priorities after September 11, 2001, the FBI developed a National Special Agent Recruitment Plan for fiscal years 2002 and 2003. This plan established recruitment and hiring goals, identified critical skills the FBI is targeting, and established a timeline for achieving these goals. To implement its recruitment plan, in January 2002, the FBI began a hiring initiative aimed at recruiting applicants with skills and backgrounds identified as critical for new special agents. This includes a focus on skills in computer science, specific foreign languages, physical sciences, and engineering, as well as experience in counterterrorism and counterintelligence. The FBI has set specific numerical targets for these skills to try to ensure that new agents as a group would be hired with the targeted mix of skills. To enhance the special agent applicant pool in certain critical skill areas, for example, the FBI established a Computer Science/Information Technology Special Entry Program. The FBI was successful in meeting its overall hiring goals for special agents during fiscal year 2002. During that year, the FBI hired 923 agents of the 927 planned. The FBI, however, was less successful in hiring special agents who, as a group, possessed the mix of critical skills specified under the fiscal year 2002 hiring initiative. The timing of this hiring process may have been a factor in not achieving the targeted skill mix during this year. The FBI announced its critical skill goals approximately 4 months after September 11, 2001, and at the end of a 2-year hiring freeze. In order to hire special agents quickly in the months following September 11, 2001, the FBI had to rely on its existing applicant pool, which largely consisted of applicants with skills in accounting, law, and law enforcement. The available applicant pool also included applicants with foreign language skills, but not necessarily in the newly targeted languages. During the first 8 months of fiscal year 2003, the FBI hired about 80 percent (or 550) of the special agents it needs to meet its hiring goal of 663 agents. In all of its identified critical skill areas except foreign languages, the FBI is on track to reach its stated hiring goals, and in some areas it has exceeded its goals. Appendix II contains additional information concerning the FBI's fiscal year 2002 and 2003 hiring. It is important to note that the FBI hiring process for special agents has been shortened considerably. While still lengthy, it is down to a minimum of about 8 months from application submission to final processing, from 13 months several years ago. Appendix III includes a graphic presentation of the steps in the hiring process and the time associated with each step. Once new agents are hired, they are sent to 17 weeks of new agent training at the FBI Academy in Quantico, Virginia, followed by a 2-year probationary period during which special agents receive developmental supervision and on-the-job training. We note this to make the point that it will take time to build up agent strength within the Bureau.
About 60 percent of the FBI's workforce consists of support staff, including analysts (e.g., intelligence and financial), scientists, technical specialists, administrative support personnel, laborers, and other nonagent personnel. In fiscal year 2002, the FBI did not meet its overall goal for hiring support staff, filling only 643 (44 percent) of 1,465 positions. The initial goal for hiring support staff in fiscal year 2003 was set at about 2,000. However, the goal has been revised downward during the year to reflect attrition rates that were lower than anticipated, somewhat smaller enhancements for support staff than were anticipated, and a reevaluation of the FBI's overall budget situation. The capacity of the FBI to process new support staff applications was approximately 1,500 applications per year, according to FBI officials. The current target for support staff hiring is set at 1,023. As of May 2003, the FBI had hired 565 support staff, about 55 percent of the goal, as compared to 80 percent of its special agent goal. The FBI does not set hiring goals for all types of support staff but only for those that are deemed critical. Table 1 shows fiscal year 2003 hiring goals for selected support staff positions. As the table shows, the FBI is doing well in hiring for some critical areas but is lagging in others. Consistent with Director Mueller's plans to enhance its intelligence program, the FBI has, as noted earlier, redefined and revised intelligence-related analyst positions and has made some progress in hiring intelligence analysts. In fiscal year 2002, the FBI did not specify hiring goals in the intelligence area; however, in fiscal year 2003, the FBI identified intelligence analysts as a priority hiring category. As of May 2003, the FBI had hired 115 new analysts in the intelligence area—including intelligence analysts, intelligence operations specialists, and intelligence research specialists. On the basis of its revised fiscal year 2003 target—to hire 126 analysts in this area—the FBI is well on the way to reaching this goal. While still short of meeting its foreign language critical skill targets, the FBI has been able to bolster its foreign language capacity by increasing the number of contract linguists and language specialists. Before September 11, 2001, there were 405 contract linguists and 379 language specialists; as of May 2003, there were 712 contract linguists and 421 language specialists. In the priority languages identified to support the FBI's new priorities, 195 contract linguists and 44 language specialists were hired between October 2002 and March 2003. Our field visits identified two other areas in which agents and managers indicated that there were support staff challenges: information technology and administrative support. For fiscal year 2003, the FBI plans to hire 44 information technology staff and 211 administrative staff. As of May 2003, the FBI had hired 45 information technology and 94 administrative personnel—exceeding its goal for information technology and reaching about 45 percent of its goal for administrative personnel. In addition to hiring new employees with critical skills, the FBI's reorganization plans called for revisions to the FBI's training program. Over the past 12 months, the FBI has improved its ability to train its workforce and to address priority areas.
Encouraging steps taken by the FBI include (1) efforts to provide revised training to new agents and to agents assigned to work in priority areas; (2) progress in establishing the College of Analytical Studies to train analysts; and (3) plans to reengineer its overall training program to better meet the long-term training needs of the Bureau's workforce. In January 2003, in an effort to focus on the delivery of training to agents and analysts reassigned to work in the priority areas, the FBI canceled most of its training for on-board staff that was not focused on counterterrorism, counterintelligence, and cyber crime investigations. This allowed the FBI to shift resources to develop training for new agents and those agents who were moved to work on counterterrorism, counterintelligence, and cyber matters. For example, the FBI Training Division revised existing new agent coursework to focus on the priority areas and developed new courses for agents who were assigned to counterterrorism and counterintelligence. Agents assigned to the newly established Cyber Division are required to complete basic coursework on cyber crime investigations and are encouraged to complete a core curriculum consisting of eight classes, including technical coursework as well as cyber investigative techniques. As of April 2003, all new agents are to receive revised training in the priority areas. In addition, as of May 2003, 545 of the agents assigned to work on counterterrorism and counterintelligence investigations had received revised training in these areas. Those agents who have been designated by the Counterterrorism and Counterintelligence Divisions as needing revised training will have completed the required training by the end of calendar year 2003, according to FBI officials. We did not evaluate the curriculum of the revised training courses. Appendix IV provides additional details about the FBI's allocation of the $10 million provided in the House Conference Report accompanying the fiscal year 2003 budget and about revisions to the FBI's training in priority areas. To further enhance analysts' skills and abilities, the FBI created the College of Analytical Studies at its Quantico training facility in October 2001. The College of Analytical Studies provides training to new and in-service analysts in tools and techniques for both strategic and technical analysis. Completion of basic analytical coursework is required of new analysts, while advanced analytical coursework is offered to experienced analysts. The College of Analytical Studies trained 193 analysts in fiscal year 2002 and is scheduled to train an additional 1,032 analysts in fiscal year 2003. Additionally, the FBI is continuing to identify and schedule additional analysts from the priority areas who should receive analytical training, according to FBI officials. As with the revised agent training, we did not evaluate the content of the curriculum offered by the College of Analytical Studies. FBI officials told us that after each training course students are asked to provide feedback, which may be used to revise coursework. We did not evaluate this feedback. Additionally, the FBI's Office of Intelligence has been tasked to develop all policies, including education requirements, with regard to analysts working in the intelligence area. The Office of Intelligence intends to work with the College of Analytical Studies to ensure that appropriate analytical training has been provided, according to FBI officials.
The FBI is also pursuing accreditation for its College of Analytical Studies. In addition, the FBI continues to work with other federal agencies to improve its analytical capabilities. For example, the FBI is currently working with the Joint Military Intelligence College to allow a select number of FBI personnel with intelligence backgrounds to earn a Master of Science in Strategic Intelligence. FBI officials anticipate that the program will begin accepting applications from interested FBI personnel by the end of fiscal year 2003, for consideration by FBI executives and final acceptance by the Joint Military Intelligence College for classes in fiscal year 2004. To better address the longer-term training needs of its entire workforce, the FBI is implementing a plan to restructure its training programs. In March 2003, Director Mueller approved a series of proposals contained in a reengineering project addressing FBI training activities, which included a goal of establishing an Office of Training and Development. This office, among other duties, would assess the career-long training needs of all employees, standardize training, and centralize the tracking of staff progress through the curriculum. The training reengineering plan calls for the Assistant Director of Training to function as the chief learning officer and to oversee both the Office of Training and Development and the FBI Academy. The FBI Academy will continue its primary mission of training new agents, as well as operating the College of Analytical Studies. While the FBI, in announcing its training reengineering plan, acknowledged the long-term benefits of enhanced training as an investment in human capital, it is too soon to tell how effective the plan will be in improving performance. Moreover, as the agency's overall human capital plan develops, training plans will need to be revised and enhanced. Appendix IV also provides additional details on the FBI's training reengineering plan. The revised Attorney General's Guidelines on General Crimes, Racketeering Enterprise and Terrorism Enterprise Investigations (the "Guidelines") are intended to provide the FBI greater investigative flexibility to enhance its ability to detect and prevent terrorist acts and other federal crimes. As traditional investigative constraints are eased, however, appropriate internal controls are needed to prevent investigative abuses and ensure the protection of civil liberties. The Guidelines themselves contain internal controls regarding specific investigative procedures and prohibited activities, and the FBI and DOJ have other internal control mechanisms in place to help ensure that agents do not go beyond their stated authorities. Although private sector groups we interviewed have expressed concern regarding issuance of the new Guidelines, neither we nor they have identified any reported allegations or investigations of abuses under the new Guidelines authorities. It should be noted that federal officials, including the FBI, have also received additional investigative authorities from laws such as the USA PATRIOT Act, and that FBI activities are also governed by various other Attorney General guidelines. Our review focused on certain provisions of the Attorney General's Guidelines on General Crimes, Racketeering Enterprise and Terrorism Enterprise Investigations.
Among other things, the revised Guidelines permit FBI agents to be more proactive by allowing certain investigative activities—such as visiting public places and events or conducting online searches—to be conducted outside the context of an investigation. We did not focus on internal controls associated with other statutes and guidelines relevant to FBI investigations. For example, we did not focus on the type of alleged abuses reported by DOJ's Office of the Inspector General (OIG) in June 2003 concerning the detention of 762 aliens who had been held in connection with FBI terrorism investigations. Appendix V provides a brief overview of a few selected statutes and guidelines relevant to FBI investigations that were not a part of our analysis. Following the September 11, 2001, terrorist attacks on the United States, the Attorney General ordered a review of all investigative procedures related to national security and criminal matters in an effort to eliminate unnecessary investigative constraints and help prevent terrorism. As a result, in May 2002, the Attorney General issued a revised set of FBI domestic investigative guidelines—The Attorney General's Guidelines on General Crimes, Racketeering Enterprise and Terrorism Enterprise Investigations—intended to provide consistent policy direction so that FBI investigations are confined to matters of legitimate law enforcement interest and protect individual rights, while also providing new investigative flexibility. The Guidelines also delegate the authority to initiate and approve certain types of investigations from FBI headquarters to FBI field offices. Appendix VI presents more details on selected key changes in the Guidelines. As we pointed out a year ago, the FBI should have appropriate internal controls in place to ensure that the new authorities permitted under the revised Guidelines are carried out in a manner that protects individual civil liberties. Internal controls serve as the first line of defense in preventing and detecting errors, and they provide an organization's management with reasonable assurance of compliance with applicable laws and regulations. Thus, internal controls are a key component for ensuring that these new authorities are implemented in a manner that protects civil liberties. Under federal internal control standards, a variety of internal control mechanisms—including training, supervision, and monitoring—may be used by agencies to ensure compliance with applicable laws and regulations. The Guidelines themselves are an internal control, establishing standards and requirements governing the FBI's investigative authority. In addition, the FBI has the following internal controls in place to help ensure compliance with the Guidelines and prevent agents from going beyond the authorities granted in the Guidelines: (1) policies and procedures, which communicate to agents in detail the levels of authority and permissible activities; (2) training, which addresses civil liberties issues so that agents understand the limitations of their authority; and (3) supervision, which monitors agents' use of the new authorities.
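The Guidelines' activity-level controls, discussed in more detail below, typically specify an approving official, an initial authorization period before reapproval is required, and required notifications. A minimal sketch of how such a control record might be represented; the field names and example values are our illustration, not an FBI system:

```python
# Hypothetical sketch of the control parameters the Guidelines attach to an
# investigative activity (approving official, authorization period, required
# notifications). All names and values here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ActivityControl:
    activity: str                 # e.g., "preliminary inquiry"
    approver: str                 # who may authorize the activity
    initial_period_days: int      # days until reapproval is required
    notifications: list = field(default_factory=list)

    def reapproval_due(self, authorized_on: date, today: date) -> bool:
        """True when the initial authorization period has elapsed."""
        return today > authorized_on + timedelta(days=self.initial_period_days)

# Example values are assumptions for illustration only.
pi = ActivityControl(
    activity="preliminary inquiry",
    approver="field office supervisor",
    initial_period_days=180,
    notifications=["notify FBI headquarters"],
)
print(pi.reapproval_due(date(2002, 6, 1), date(2003, 1, 1)))  # True
```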
Finally, the FBI and DOJ have other internal control mechanisms in place to monitor FBI programs and personnel, as well as to identify and address alleged incidents of agent misconduct or abuse of civil liberties—specifically the FBI's internal inspection process and the investigation of allegations of abuse by the FBI's Office of Professional Responsibility (OPR) and DOJ's OIG. All of these mechanisms, of course, predate the revised Guidelines. To protect against civil liberties abuses in relation to the new investigative authorities, these controls must incorporate the Guidelines revisions into their implementation. In reviewing the key changes in the revised Guidelines, we looked for evidence of internal controls in the document itself to help ensure compliance and protect against potential civil liberties abuses. In some cases, the Guidelines revisions include very specific internal controls intended to ensure compliance. For example, the changes relating to the process for conducting preliminary inquiries and terrorism investigations specify criteria for authorizing the activity, who is authorized to approve the activity, how long the activity may remain initially authorized until reapproval is required, and what notifications of the activity are required within and outside the FBI. On the other hand, changes related to the new investigative authorities are not as specific in terms of controls to ensure compliance. For example:

The FBI is now authorized to operate and participate in counterterrorism information systems (such as the Foreign Terrorist Tracking Task Force), and a periodic compliance review is required on any systems operated by the FBI. However, there is no indication of when such reviews should be conducted, what the reviews should entail (e.g., issues relating to access, use, or retention of data), and whether any reviews are required if the systems are not operated by the FBI.

The FBI is now authorized to visit public places or events, but retention of information from these visits is prohibited unless it relates to potential criminal or terrorist activity. However, there is no indication of whether or how agents are to document the activity, how supervisors are to ensure that the purpose of the activity is detecting or preventing terrorism, and how compliance with the prohibition on maintaining information is to be verified.

Beyond the Guidelines themselves, the FBI and DOJ have other internal control mechanisms in place to help ensure FBI compliance with the Guidelines and help protect against potential abuses of individual civil liberties. Specifically:

Policies and procedures – The FBI's policies and procedures manuals provide agents with additional guidance on conducting investigations. About 75 percent of the field agents who completed our questionnaire considered themselves to be at least somewhat familiar with the Guidelines. These agents indicated their familiarity came from a variety of sources, including a hard copy version of the Guidelines, the FBI's intranet Web site, electronic communications and briefings from FBI management, FBI program division or field office training, and supervisory on-the-job training. Additionally, the FBI is in the process of updating its Manual of Investigative Operations and Guidelines (MIOG) policies and procedures manuals to provide agents with additional guidance on implementation of the Guidelines.

Training – Training on the Guidelines is included in all new agent training provided at the FBI Academy.
Additional training and guidance, coordinated through the FBI's Office of General Counsel and field office legal coordinators, were made available to on-board agents after the Guidelines were issued. As of April 2003, just over one-half (about 55 percent) of the field agents who completed our questionnaire indicated they had received either formal or informal training on the Guidelines.

Supervision – Supervisory agents are to perform periodic case file reviews on all cases being worked by their agents to, among other things, monitor the progress of cases and verify compliance with applicable policies and procedures, such as the Guidelines. As of April 2003, nearly all the field agents who completed our questionnaire indicated that their supervisors performed case file reviews at least every 90 days—more often in some cases.

Inspections – FBI inspectors are to verify agents' compliance with the Guidelines and other applicable policies and procedures by reviewing case files and supervisory case file reviews. In reviewing selected inspection reports completed since October 1999, we found evidence that such reviews were being performed. At the same time, we identified no findings in the inspection reports of noncompliance with or misuse of the new investigative authorities granted under the Guidelines.

Allegations of abuse – Both the FBI's OPR and DOJ's OIG have the authority to investigate allegations of FBI misconduct; the OIG also reviews all incoming FBI allegations to ensure the appropriate investigative response. Between October 2000 and March 2003, OPR investigated 1,579 cases of alleged FBI misconduct. The OIG investigated another 85 cases of alleged misconduct and 35 cases of alleged civil rights abuses between July 2001 and February 2003. However, based on the descriptions of the alleged offenses, we found no allegations or investigations that appeared to involve noncompliance with or abuse of the new investigative authorities granted under the Guidelines. In June 2003, the OIG reported on allegations of mistreatment and abuse of aliens detained on immigration charges in the aftermath of the September 11, 2001, terrorist attacks. These allegations did not relate to the FBI's use of investigative authorities under the revised Guidelines; in fact, the vast majority of these aliens were detained before the Guidelines were issued.

When the revised Guidelines were issued, private sector groups raised concerns about what they saw as a relaxing of investigative controls over the FBI, which represented a potential threat to individual civil liberties. In particular, they noted that the revised Guidelines allowed the FBI to use its new investigative authorities even in the absence of any prior indication of criminal activity. However, the private sector officials we met with could not provide any specific examples of the FBI abusing the new authorities granted under the Guidelines. Rather, their concerns largely stemmed from the belief that granting the FBI broader investigative authorities ignores the lessons of past abuses and is unlikely to result in tangible gains for law enforcement. Officials from the FBI's OPR and DOJ's OIG told us they do not separately track allegations of noncompliance with the Guidelines; nor could they identify any specific cases that involved noncompliance with or abuse of the new investigative authorities granted under the Guidelines.
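As noted above, the supervisory case file review control reduces to a simple recency rule: a review of each open case at least every 90 days. Its mechanics can be sketched as follows; the function and data are hypothetical illustrations, not an FBI tool:

```python
# Hypothetical sketch of the 90-day supervisory review rule described above:
# flag cases whose last supervisory case file review is more than 90 days old.
from datetime import date, timedelta

def overdue_reviews(cases, today, max_age=timedelta(days=90)):
    """cases: list of (case_id, last_review_date); returns overdue case ids."""
    return [cid for cid, last in cases if today - last > max_age]

# Made-up case identifiers and review dates for illustration.
cases = [("C-101", date(2003, 1, 5)), ("C-102", date(2003, 3, 20))]
print(overdue_reviews(cases, today=date(2003, 4, 30)))  # ['C-101']
```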
FBI headquarters officials indicated that the supervisory case file review process is the primary vehicle for ensuring that agents comply with applicable policies and procedures—such as the Guidelines—and do not go beyond their stated authorities. Regarding the new authorities, FBI field office managers told us that the number of leads requiring follow-up, plus the number of ongoing preliminary inquiries and investigations related to counterterrorism, keeps field agents fully engaged. This, according to FBI field office managers, does not afford agents time to visit public places and events or search the Internet absent a legitimate lead. A recent informal FBI survey of 45 of its field offices found that fewer than 10 offices had conducted investigative activities at mosques since September 11, 2001. All but one of these visits was conducted pursuant to, or was related to, open preliminary inquiries or full investigations. Nevertheless, FBI headquarters officials are currently considering whether to require mandatory supervisory approval before an agent may enter a public place or attend a public meeting. Given the sensitivity of these issues and the FBI's history of investigative abuses, the FBI has been reaching out to communities to assure them that, despite the emphasis on counterterrorism, investigating civil rights abuses remains a high priority of the FBI. For example, FBI field offices have been contacting Muslim leaders for the purpose of establishing a dialogue and discussing procedures for alerting the FBI to civil rights abuses. In one field office we visited, discussions had recently been held with the Muslim community and its leaders covering topics related to homeland security, FBI employment, and community outreach. Throughout the FBI, over 500 such meetings occurred in the first 5 months after September 11, 2001. More recently, in February 2003, the FBI Director met with key leaders of national Arab-American, Muslim, and Sikh organizations to discuss the FBI's response to hate crimes and other civil rights issues. Implementation of the revised Guidelines is still in its infancy. While it is a good sign that we have not identified any reported allegations, investigations, or indications of abuse of the new investigative authorities, this is not a situation that should result in reduced vigilance on the part of DOJ or the Congress. Appendix VII presents more details about the internal controls discussed above. We continue to be ready to assist this and other congressional committees in any oversight of the FBI's implementation of its transformation efforts. Based on our work, there are specific areas related to the transformation of the FBI that seem to warrant continued monitoring. These areas include (1) the FBI's completion and implementation of a revised strategic plan; (2) the FBI's progress in integrating a human capital approach consistent with its mission and goals; (3) the long-term impact on state and local law enforcement agencies, and the public, of the FBI's shift of staff resources away from drug enforcement and other criminal programs; and (4) FBI agents' compliance with the new investigative authorities granted under the revised Attorney General's Guidelines. In closing, I would like to thank the FBI Director, the DEA Administrator, and their staffs for their cooperation in providing documentation and scheduling the meetings needed to conduct our work.
In particular, I would like to note the cooperation and candor of FBI officials—managers, agents, and analysts—during our site visits to 14 field office locations. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee members may have. For further information about this statement, please contact Laurie E. Ekstrand, Director, Homeland Security and Justice Issues, on (202) 512-8777 or at ekstrandl@gao.gov or Charles Michael Johnson, Assistant Director, Homeland Security and Justice, on (202) 512-7331 or at johnsoncm@gao.gov. For further information on governmentwide human capital or transformation issues, please contact J. Christopher Mihm, Director, Strategic Issues, on (202) 512-6806 or at mihmj@gao.gov. Major contributors to this testimony included David Alexander, Tida E. Barakat, Karen Burke, Chan My J. Battcher, Gary A. Bianchi, Nancy Briggs, Philip D. Caramia, Sue Conlon, Seth Dykes, Geoffrey Hamilton, Mary Catherine Hult, Lori Kmetz, E. Anne Laffoon, Ronald La Due Lake, Julio Luna, Jan Montgomery, Kay Muse, Andrew O'Connell, and Sarah E. Veale. As shown in figures 7 through 9, field agent workyears expended in the cyber crime, violent crime, and white-collar crime program areas were at or below allocated staffing levels. As shown in table 2, the FBI did not fully achieve its goal for the mix of critical skills for fiscal year 2002. As shown in table 3, in fiscal year 2003 the FBI has already achieved over half of its stated goals in all of its identified critical skill areas except agents with foreign language skills. As shown in figure 10, the FBI reduced the minimum time it takes to hire a special agent from 379 days to 236 days. The conference report for the Department of Justice Appropriation Act, 2003 (P.L. 108-7, 117 Stat. 49 (2003)) indicates that the conferees provided $10 million above the FBI's budget request for training needs. Table 4 shows how the FBI plans to allocate these funds by program. The FBI has taken steps to provide revised training to FBI personnel assigned to the priority areas. Table 5 summarizes specific revisions to the training programs offered to new agents in the priority areas, agents assigned to priority areas, other agents involved in counterterrorism work, and analysts. The FBI's training programs in the priority areas, as of June 1, 2003, are summarized in table 6. The FBI has begun to implement a plan to restructure its training program. As reflected in figure 11, the plan established several units to set curriculum, develop courses and tools, and deliver training for all FBI personnel, both special agents and support staff. To provide the intelligence community and law enforcement with additional means to fight terrorism and prevent future terrorist attacks, Congress enacted a wide range of investigative enhancements in the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act. Among other things, the USA PATRIOT Act provides federal officials with enhanced surveillance authorities to intercept wire, oral, and electronic communications relating to terrorism. The act also provides the authority to seize voice-mail messages pursuant to warrants.
The act further contains a number of provisions authorizing information sharing between intelligence and law enforcement agencies—such as the sharing of foreign intelligence information obtained as part of a criminal investigation with any federal law enforcement, intelligence, protective, immigration, national defense, or national security official in order to assist the official in the performance of his or her official duties. The USA PATRIOT Act also seeks to enhance federal law enforcement agencies' abilities to, for example, investigate and combat financial-related crimes by adding new money laundering and counterfeiting crimes and by increasing related criminal penalties. The USA PATRIOT Act further seeks to strengthen federal criminal laws against terrorism by, for example, making it a crime to engage in terrorist attacks or other acts of violence against mass transportation systems. The act also made it a crime to harbor or conceal terrorists where a person knows, or has reasonable grounds to believe, that the person harbored or concealed has committed or is about to commit a specified terrorism-related offense. The Foreign Intelligence Surveillance Act of 1978 (FISA), as amended, established legal standards and a process that federal officials, including the FBI, must use to obtain authorization for electronic surveillance and physical searches when seeking foreign intelligence and counterintelligence information within the United States. FISA also created a special court—the Foreign Intelligence Surveillance Court—with jurisdiction to hear applications for and grant orders approving FISA surveillance and searches. FISA orders may be issued, in general, upon a FISA Court finding of probable cause to believe that the target is a foreign power or an agent of a foreign power and that the places at which the surveillance is directed are being used, or are about to be used, by such targets. The USA PATRIOT Act amended various FISA provisions to authorize, for example, roving surveillance under FISA to, in essence, follow a person who uses multiple communication devices or locations, where the FISA Court finds that the actions of the target may have the effect of thwarting the identification of a specified person. Another amendment allows senior-level FBI personnel, in certain circumstances involving international terrorism or clandestine intelligence, to apply to the FISA Court for an order for the production of tangible items—such as books, records, papers, or documents. When conducting investigations, the FBI is subject to various sets of guidelines established by the Attorney General. The Attorney General's Guidelines on General Crimes, Racketeering Enterprise and Terrorism Enterprise Investigations provide general standards and procedures for the FBI's conduct of criminal investigations. They govern the circumstances under which such investigations may be initiated, as well as their permissible scope, duration, subject matters, and objectives. Under these guidelines, for example, the FBI may conduct investigations when the facts and circumstances reasonably indicate that a federal crime has been, is being, or will be committed. Preliminary inquiries may be performed when there is not yet a reasonable indication of criminal activity but where information requires further scrutiny beyond a prompt and limited checking of initial leads.
The Attorney General has also issued a separate set of guidelines prescribing the FBI's investigative authority related to international terrorism—that is, terrorist activities occurring totally outside the United States or transcending national boundaries. The Attorney General Guidelines for FBI Foreign Intelligence Collection and Foreign Counterintelligence Investigations (significant portions of which are classified) govern all foreign intelligence, foreign counterintelligence, foreign intelligence support activities, and intelligence investigations of international terrorism. These guidelines also apply to FBI investigations under the espionage statutes and to investigations on behalf of, or in cooperation with, foreign governments. Table 7 presents a side-by-side comparison of the key changes in the 2002 Guidelines, as compared with the most recent previous version of the Guidelines, which was issued in 1989 (and amended slightly in 1994). The following sections present more detail about (1) the extent to which internal controls have been incorporated into the Attorney General's Guidelines on General Crimes, Racketeering Enterprise and Terrorism Enterprise Investigations, (2) other internal control mechanisms that are in place to ensure FBI compliance with the Guidelines, and (3) concerns about how the Guidelines may adversely affect the protection of civil liberties. The Guidelines themselves are an internal control—establishing the Attorney General's parameters for the FBI's investigative authority. For example, the internal controls described in table 8 are designed to ensure that only valid, authorized transactions and events—in this case, investigative activities such as preliminary inquiries and terrorism enterprise investigations—are initiated or entered into by the FBI. These controls specify who is authorized to approve the activity, how long the activity may remain authorized until reapproval is required, and what notifications of the activity are required within and outside the FBI, thereby facilitating the verification of compliance. Similarly, the controls described in table 9 are also designed to ensure that only valid, authorized transactions and events are initiated or entered into by the FBI—in this case, investigative techniques, including the new counterterrorism authorities granted under the revised Guidelines. Regarding the counterterrorism activities and other authorizations identified in table 9, the associated controls are less specific than those associated with the initiation and renewal of preliminary inquiries and terrorism enterprise investigations, as described in table 8. For example: Regarding the FBI's authorization to operate and participate in counterterrorism information systems, there is no indication of how agents are to document this activity, nor how supervisors are to ensure that the purpose of the activity is detecting or preventing terrorism. Further, there is no indication of when such systems should be reviewed, what these reviews should entail (e.g., verifying compliance with access, use, or data retention requirements), and whether any such reviews are required if the systems accessed are not operated by the FBI.
Regarding the FBI's authorization to visit public places or events, there is no indication of how agents are to document the activity, how supervisors are to ensure that the purpose of the activity is detecting or preventing terrorism, and how compliance with the prohibition on maintaining information is to be verified. FBI headquarters officials said that agents are not required to obtain supervisory approval before accessing terrorism information systems, but they are encouraged to seek legal guidance to ensure they comply with applicable guidelines. Also, the process of creating such systems involves reviews for compliance with the Privacy Act and other applicable regulations, and any data that are collected, used, or disseminated are subject to Privacy Act restrictions. Regarding visiting public places and events, agents should obtain prior supervisory approval, if time permits, and the date, time, and place of the visit should always be noted in the case file. For either of these new authorities, the FBI's supervisory case file review process is the primary vehicle for ensuring that agents comply with the Guidelines and do not go beyond their stated authorities. Regarding policies and procedures, FBI headquarters officials told us that guidance such as that contained in the Guidelines is to be incorporated into the FBI's investigative and administrative manuals on a regular basis. Consistent with this practice, the FBI is in the process of completing revisions to its Manual of Investigative Operations and Guidelines (MIOG) policies and procedures manuals to incorporate guidance on the implementation of the Guidelines. Training on the Guidelines is included in all new agent training provided at the FBI Academy. In addition, on-board agents received training on the Guidelines through the FBI's Office of General Counsel, in the form of direct guidance provided to each field office, various in-service training presentations, and basic training provided to agents being transferred to counterterrorism from other program areas. The field office Chief Division Counsels also received Guidelines training, and they told us this training was subsequently provided to agents in their field offices during periodic legal updates. We found that about 55 percent of the field agents who completed our questionnaire in April 2003 indicated that they had received training relating to the Guidelines—but the majority of that training was on-the-job training. The FBI's training program was recently reengineered to, among other things, update the new agent and in-service training curriculum to better address the FBI's shift in resources from criminal programs to priority areas, such as counterterrorism. Training on the Guidelines continues and is included in the new curriculum framework for both new and in-service agents. With respect to supervision, supervisory agents are responsible for monitoring agents' work and, more formally, they are to perform periodic case file reviews at least every 90 days on all cases being worked by their agents. During these case file reviews, supervisors are to monitor the progress of cases by reviewing investigative work completed, verifying compliance with any applicable policies and procedures (including the Guidelines), and assessing the validity of continuing with the case.
They also review investigative work planned for the next period—including, for example, any significant data collection that will be employed—and discuss any issues associated with, or approvals needed to carry out, the investigative strategy. Nearly all the field agents who completed our questionnaire indicated that their supervisors performed case file reviews every 90 days—more often in some cases. As an additional oversight measure, FBI officials told us that field office Assistant Special Agents-in-Charge periodically check supervisory case file reviews to ensure the adequacy of the case file review process. No specific changes to the FBI's supervisory case file review process were made in response to the issuance of the revised Guidelines. The FBI's Inspection Division is responsible for reviewing FBI program divisions and field offices to ensure compliance with applicable laws and regulations and the efficient and economical management of resources. The Inspection Division attempts to inspect all FBI units at least once every 3 years. Among other things, inspectors review field office case files to (1) assess the adequacy of supervisors' case file reviews and (2) ensure that investigative work complies with administrative and investigative policies and procedures. According to FBI headquarters inspection officials, it is in the context of reviewing case files that inspectors determine compliance with the procedures and other guidance contained in the Guidelines. We reviewed selected FBI inspection reports completed since October 1999—including the most recent inspections for the 14 field offices we visited and 4 other field office inspections completed after the Guidelines were issued. Our review confirmed that inspectors were reviewing compliance with the Guidelines and the adequacy of supervisory case file reviews during their inspections. We noted the following inspection findings:

In four inspections, a preliminary inquiry was not converted to a full investigation after expiration of the initial authorization period.

In seven inspections, some case file reviews were not performed in a timely manner.

In one inspection, an investigation was opened without approval by the field office Special Agent-in-Charge or notification to FBI headquarters.

With respect to the new investigative authorities granted under the revised Guidelines, our review of the four inspection reports completed after the Guidelines were issued identified no findings of FBI noncompliance with these new authorities. The FBI's inspections process was reengineered in late 2002, resulting in revisions to the various inspection audit guides and checklists that inspectors use to gather advance data about program operations and investigative activities and to plan their work. In reviewing these audit guides, we found two program review guides that included a reference to the Guidelines—that is, that inspectors should "verify compliance with Attorney General Guidelines relating to the initiation, renewal, or continuance of investigations or investigative techniques." According to the FBI's Chief Inspector, it is not necessary to incorporate specific references to the revised Guidelines into the inspection audit guides, since inspectors already verify compliance with all Attorney General guidelines (and other policies and procedures) by reviewing case files and supervisory case file reviews.
Within the FBI, the Office of Professional Responsibility (OPR) is generally responsible for investigating and adjudicating allegations of misconduct by FBI employees. OPR's investigative case activity is shown in table 10. OPR does not currently capture statistics on the total number of allegations received or the number of allegations that are closed without inquiry. However, OPR officials told us they were not aware of any cases involving violations of the authorities in the revised Guidelines related to terrorism investigations. Based on their standardized offense codes and the time period identified above, they identified a number of closed cases involving violations of Attorney General guidelines, violations of individual civil rights, and violations of investigative policies and procedures. However, they told us that the only way to verify whether any of these cases specifically involved some aspect of the revised Guidelines would be to review each of the individual investigative case files. An OPR official told us that a redesign of OPR's computer system is in progress and that additional information on allegations received and investigations opened will be captured when the redesign is complete. However, no changes are planned to allow the tracking of misconduct cases specifically related to the revised Guidelines. Within the Department of Justice, the Office of Inspector General (OIG) also has responsibility for ensuring that allegations of FBI misconduct are appropriately handled. Beginning in July 2001, all allegations against FBI employees were to be submitted initially to the OIG for review. The OIG then decides which complaints it will investigate and which it will refer back to OPR for investigation. As shown in table 11, most allegations of FBI misconduct are referred to OPR for investigation or other disposition. The OIG did not specifically track the number of allegations involving the Guidelines, but it did report that the most common complaints received were job performance failure, waste and misuse of government property, and other official misconduct. The OIG also has responsibility under the USA PATRIOT Act to receive and investigate all allegations of civil rights or civil liberties abuses raised against DOJ employees. Between October 2001 and February 2003, the OIG received 35 allegations involving FBI violations of individual civil liberties, 2 of which were reported to involve noncompliance with Attorney General guidelines. Upon further review, however, one involved an illegal search, one involved a coerced statement, and neither involved noncompliance with the new authorities granted under the Guidelines. As part of its mission to oversee DOJ programs and operations, the OIG currently plans to conduct an evaluation of the FBI's entire process of employee discipline. Furthermore, in April 2003, the OIG began a review of the FBI's implementation of all Attorney General guidelines that were revised in May 2002—including the domestic investigative guidelines. When the revised Guidelines were issued, private sector groups raised concerns about what they saw as a relaxing of investigative controls over the FBI, which represented a potential threat to individual civil liberties. For example, private sector officials said that the FBI is now allowed to gather information at any place or event that is open to the public—even in the absence of any indication of criminal activity.
In their view, this encourages a return to the days when the FBI sent agents into churches and other organizations during the civil rights movement in an attempt to block the movement and suppress antigovernment dissent. These officials also noted that the liberalization of the Guidelines, which allows the FBI to access and analyze data from commercial and private sector databases, will result in a return to the profiling of individuals and the building of intelligence dossiers. The inaccuracy or misuse of such data could lead to innocent persons being suspected of crimes. None of the private sector officials we met with could provide specific examples of the FBI abusing the new authorities granted under the Guidelines. Rather, their concerns stemmed from the notion that granting the FBI broader investigative authorities—which can be used even in the absence of any suspected criminal activity—not only ignores the lessons of past abuses but is unlikely to result in any tangible gains in law enforcement. FBI headquarters officials said that the supervisory case file review process is the primary vehicle for ensuring that agents comply with applicable policies and procedures—including the Guidelines. Regarding the authority to visit public places and events, FBI field office managers told us that, considering the number of legitimate leads coming in and the number of ongoing preliminary inquiries and investigations, agents are fully tasked supporting existing work and do not have the time or need to visit public places or surf the Internet to generate additional leads. Based on our field visits, however, we found that some agents are proactively using the new investigative authorities granted under the revised Guidelines. As shown in table 12, as of April 2003, 64 (about 36 percent) of the 176 agents who completed our questionnaire indicated they had accessed commercial information or databases, 53 (about 30 percent) had conducted online Internet searches or accessed online sites, and 31 (about 18 percent) had visited public places or events prior to opening a preliminary inquiry or investigation. In addition, most of the agents who completed the questionnaire indicated prior supervisory approval was not needed to perform these activities. To help assuage public concerns about civil liberties issues, the FBI has been reaching out to communities to assure them that, despite the emphasis on counterterrorism, investigating civil rights abuses remains a high priority of the FBI. FBI field offices have been tasked to contact Muslim leaders for the purpose of establishing a dialogue and discussing procedures for alerting the FBI to civil rights abuses. For example, in one field office we visited, five meetings were held during the first 4 months of 2003—including meetings with Muslim community leaders and a panel discussion to answer questions from the public—covering topics related to homeland security, FBI employment, and community outreach. Throughout the FBI, over 500 outreach meetings occurred during the first 5 months after September 11, 2001. In addition, some FBI field offices have provided sensitivity training to field agents on the Islamic religion and culture. Finally, regarding the new investigative authority to visit public places and events, FBI headquarters officials are currently considering whether to require mandatory supervisory approval before an agent may enter a public place or attend a public meeting.
Following the September 11, 2001, terrorist attacks, the FBI needed to refocus its efforts to investigate those attacks and to detect and prevent possible future attacks. To do this, the FBI changed its priorities and sought to transform itself to more effectively address potential terrorist threats. This testimony specifically addresses the FBI's (1) progress in updating its strategic plan; (2) development of a strategic human capital plan; (3) realignment of staff resources to priority areas; (4) reallocation of staff resources from its drug program; (5) efforts to recruit and hire new personnel to address critical staffing needs; (6) efforts to enhance its training program; and (7) implementation of new investigative authorities and internal controls to ensure compliance with the revised Attorney General's Guidelines on General Crimes, Racketeering Enterprise and Terrorism Enterprise Investigations and to help protect individual civil liberties. Last June, GAO highlighted the importance of the FBI's success in transforming itself, noting several basic aspects of a successful transformation. Thus far, GAO is encouraged by the progress that the FBI has made in some areas over the past year, but a number of major challenges remain. The commitment of Director Mueller and senior leadership to the FBI's reorganization and the FBI's communication of priorities warrant recognition. However, a comprehensive transformation plan with key milestones and assessment points to guide the overall effort is still needed. The FBI has also not completed updating its strategic plan and has not developed a strategic human capital plan, although it has made some progress in both areas. To better ensure focus on the highest priorities, several actions were taken over the last year, including permanently redirecting a portion of the field agent workforce from criminal investigative programs to counterterrorism and counterintelligence. However, the FBI continues to face challenges in critical staffing areas, including (1) its reliance on staff resources drawn from other criminal investigative programs to address counterterrorism and (2) a lack of adequate analytical, technical assistance, and administrative support personnel. The FBI's efforts to address critical skill needs and revise its training program are commendable. GAO also found internal controls in place to help ensure compliance with the revised Attorney General's Guidelines and protect individual civil liberties.
The Occupational Safety and Health Act of 1970 covers more than 100 million working men and women and about 6.5 million employers. Excluded from coverage are the self-employed; state and local government employees in some states; and some transportation workers, miners, and others covered by other federal laws. OSHA regulations require most employers covered by the act to keep records at each establishment, including a log and summary of occupational injuries and illnesses (OSHA form 200 or an equivalent form) and a supplementary record of occupational injuries and illnesses (OSHA form 101 or an equivalent form). On the log, employers must briefly describe all occupational injuries and illnesses that occur at the establishment and summarize that information yearly. Employers must make the log accessible to authorized federal and state officials and to employees upon request and must post an annual summary of occupational injuries and illnesses for the previous calendar year at each establishment. The supplementary record is to provide information about each injury and illness on the log, such as the affected employee's name and the circumstances of the injury or illness. Authorized government officials must also have access to these records. The records employers must keep provide useful information (1) for employers and employees, raising their awareness of injuries and illnesses and helping them in their efforts to address establishment hazards; (2) for OSHA staff carrying out enforcement and outreach programs; and (3) for statistical purposes, by measuring the magnitude of injury and illness problems nationwide. The information also helps OSHA develop safety and health standards and conduct research on the causes and prevention of such injuries and illnesses. In addition, the Bureau of Labor Statistics (BLS) collects injury and illness data from employers for its annual survey of occupational injuries and illnesses. OSHA's compliance officers review and collect data from the records during on-site inspections. In February 1996, as part of its initiative to enhance safety, reduce paperwork, and reinvent the agency, OSHA proposed comprehensively revising the current rule for record-keeping requirements. The overall rule addressed certifying the records' accuracy and completeness, requiring employers to provide increased access to the records, defining key terms, and updating records to reflect changes in previously recorded data. The proposed rule would also create a system for OSHA to collect injury and illness data at the establishment level. Under the proposed data collection system, OSHA would (1) identify industries that are among the most hazardous, based on their injury and illness rates as reported by the BLS annual survey of occupational injuries and illnesses, and (2) survey establishments in these industries to collect establishment-specific injury and illness data. Establishment-specific data would help identify individual establishments with high rates of occupational injuries and illnesses. OSHA said it would focus its enforcement and outreach efforts on establishments with the highest injury and illness rates. Its inspection priority system would remain unchanged—highest priority would still be given to unscheduled inspections—but its process for scheduling programmed inspections would be based on these establishment-specific data. In addition, OSHA said the data collection system would enhance the agency's ability to measure its performance in achieving established goals for reducing injuries, illnesses, and fatalities.
Although the proposed overall rule to revise the record-keeping and reporting requirements had not been completed, in February 1996 OSHA initiated its survey of employers to collect injury and illness data for calendar year 1995. In March 1996, the American Trucking Associations and others filed a lawsuit challenging OSHA's authority to compel employers to participate in this survey in the absence of a final rule. A federal district court ruled that OSHA did not have the authority to issue citations to employers who did not complete and return the survey. In February 1997, OSHA issued a final rule implementing its authority to survey employers and cite them for failing to respond. Also in February 1997, OSHA and the parties involved in the lawsuit agreed that OSHA would not use the survey data collected in 1996 for enforcement purposes but could use the data for other purposes. OSHA used a two-stage process for selecting industries and establishments to include in its data collection surveys. First, OSHA selected the industries for its surveys using mainly industrywide data on injuries and illnesses. Second, within the industries selected, OSHA chose individual establishments to survey on the basis of establishment size. OSHA's objective was to survey all establishments of a specific size in the industries with the highest injury and illness rates. OSHA said a major determinant of the number of establishments it could survey in a year, however, was the amount of funds available for conducting the survey. With about $2.6 million available annually to fund the surveys in 1996, 1997, and 1998, OSHA determined it could survey up to 80,000 establishments each year. To select the industries among the most hazardous, OSHA used data obtained from BLS' annual surveys together with other factors, such as work-related fatalities and the number of establishments in each industry most likely to be included in the survey. Because data and documentation supporting these decisions were not available, however, we could not assess the extent to which each factor contributed to OSHA's selecting—or excluding—an industry. According to OSHA officials, the agency decided to include in its first three surveys all manufacturing and nonmanufacturing industries considered most likely to be hazardous for which it has responsibility. OSHA officials said the agency included all manufacturing industries because (1) manufacturing industries are required to maintain OSHA injury and illness records, (2) OSHA compliance standards to a large extent focus on manufacturing industries, (3) some manufacturing industries have injury and illness rates that are among the highest of all Standard Industrial Classification (SIC) codes, and (4) the large number of manufacturing SIC codes (and the large number of establishments in each manufacturing group) made it impractical to separate high-hazard manufacturing industries from low-hazard manufacturing industries for the surveys. According to OSHA, the main factor it used to identify nonmanufacturing industries most likely to be hazardous was the industry's lost workday injury and illness (LWDII) rate at the three-digit SIC code level, as reported by BLS. OSHA also considered work-related fatalities when selecting industries to survey. Other factors OSHA considered in its selection process for those years included whether the injuries reported by establishments occurred at a fixed facility or at an off-site location and the number of establishments in each industry that would most likely be included in the surveys.
For the 1996 and 1997 surveys, OSHA chose all manufacturing industries, six industry groups (three-digit SIC codes), and eight specific industries (four-digit SIC codes) in nonmanufacturing SIC codes. The nonmanufacturing industries selected had high LWDII and injury and illness incidence rates at the three-digit SIC code level, according to calendar year 1993 BLS data. Four of the specific industries and one industry group that OSHA selected, according to the agency, also had high numbers of fatalities during the previous 10 years. (See fig. 1.) As shown in figure 1, all 14 nonmanufacturing industry groups and specific industries OSHA selected in 1996 and 1997 had LWDII rates and injury and illness incidence rates that exceeded the national averages for all industries. Furthermore, most industries selected were among those with the highest LWDII rates. Because data were not available, however, we could not assess the extent to which each factor contributed to OSHA’s selecting or excluding industries. OSHA expanded the 1998 survey to include additional industries in the survey database. According to OSHA, it included all manufacturing industries and the 14 nonmanufacturing industry groups or specific industries in the previous surveys again. It replaced four specific industries on the survey list with the three industry groups of which the specific industries are a part. OSHA also included three industry groups in 1998 that were not in the previous surveys. As a result, for 1998, OSHA selected a total of 16 nonmanufacturing industry groups or specific industries for the survey. According to OSHA, it based selections for the 1998 survey exclusively on calendar year 1995 injury and illness data obtained from BLS; fatalities and other safety and health factors were not considered. Each of the newly selected industry groups included in OSHA’s 1998 survey had LWDII and injury and illness incidence rates that exceeded the national averages for all industries. (See fig. 1.) OSHA selected establishments to survey for all 3 years from within each of the chosen industries on the basis of establishment size. OSHA officials said they did this to include in the surveys all establishments of certain sizes in each of the industries selected, rather than survey a sample of establishments in these industries. For the 1996 and 1997 surveys, OSHA mailed surveys to all establishments with 60 or more employees in manufacturing industries and in each of the 14 selected nonmanufacturing industries. For the 1998 survey, OSHA mailed survey forms in March 1998 to all establishments in the newly selected industries with 50 or more employees. It also mailed forms to some of the establishments included in the 1997 survey: (1) those that did not return their forms, (2) the largest establishments in each state, and (3) those that had reported an LWDII rate of 7.0 or higher. For its 1996, 1997, and 1998 surveys, OSHA asked employers to provide summary information on their employees’ injuries and illnesses during the previous calendar year. Because OSHA already requires the establishments to compile this information, employers are not required to develop new data sets. OSHA also asked employers to provide certain employment information for the establishments. 
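The two-stage selection described above reduces to a filter on industry LWDII rates followed by a size cutoff within the selected industries. A minimal sketch of that logic; the SIC codes, rates, establishment names, and thresholds below are made up for illustration and are not actual survey data:

```python
# Illustrative sketch of OSHA's two-stage selection: first pick industries
# whose LWDII rates meet a cutoff, then survey every establishment at or
# above a size threshold in those industries. Data here are hypothetical.

def select_establishments(industries, establishments, rate_cutoff, min_employees):
    """industries: three-digit SIC code -> LWDII rate.
    Returns establishments of at least min_employees in qualifying industries."""
    hazardous_sics = {sic for sic, lwdii in industries.items() if lwdii >= rate_cutoff}
    return [est for est in establishments
            if est["sic3"] in hazardous_sics and est["employees"] >= min_employees]

industries = {"421": 8.4, "805": 8.1, "581": 3.2}   # made-up rates
establishments = [
    {"name": "Acme Trucking", "sic3": "421", "employees": 75},
    {"name": "Corner Diner",  "sic3": "581", "employees": 120},
    {"name": "Oak Nursing",   "sic3": "805", "employees": 40},
]
print(select_establishments(industries, establishments,
                            rate_cutoff=7.0, min_employees=60))
# -> only Acme Trucking: hazardous industry and 60 or more employees
```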
Although OSHA and BLS collect the same injury and illness and employment data from establishments that participate in their respective programs, BLS collects the data from a small sample (less than 3 percent) of all private-sector industry establishments and uses the information to generate aggregate statistics on occupational injuries and illnesses at the state and national levels. Because BLS pledges confidentiality of the data to employers, it does not share these data with OSHA. OSHA, on the other hand, needs establishment-specific data to identify individual establishments' LWDII rates and injury and illness incidence rates to more effectively and efficiently carry out its regulatory and enforcement activities. Because it cannot obtain these data from BLS and they are otherwise unavailable, OSHA collects injury and illness data from all establishments of a certain size within selected industries. The OSHA data collection survey form is identical to a portion of the BLS annual survey of occupational injuries and illnesses form. The wording of the instructions, examples, and questions on the OSHA survey form is identical to that on the BLS survey form. In addition to the injury and illness and employment data, both data collection forms ask for the name, telephone number, date, and signature of the person to contact if any questions arise about the information provided. This contact information also allows OSHA and BLS to verify the data provided. BLS also collects information that OSHA does not on the demographics of injured and ill workers and the circumstances of the injuries and illnesses for a sample of cases that required recuperation away from work. To minimize employers' burden, OSHA and BLS instruct employers responding to their surveys to copy onto their survey forms the requested injury and illness data from the log and summary of occupational injuries and illnesses they are required to maintain. In addition, because some establishments from which OSHA collects data may be included in the BLS sample in a given year, OSHA has coordinated its data collection effort with BLS'. (BLS estimated that less than 10 percent of the establishments selected for the OSHA data collection effort would be included in the BLS sample in any year.) Establishments required to report to both OSHA and BLS may use a single form and send a copy to each agency. The data collection form includes a section in which the respondent can provide summary information specific to the selected establishment. The first part of the summary section requests the average annual number of employees and the total number of hours that employees worked during the previous calendar year. It also requests information on conditions during the year, such as a strike or a shutdown, that might have affected the number of employees or the hours they worked.
The second part of the form requests the following information from the totals line of the log and summary of occupational injuries and illnesses maintained by each establishment:

total injuries, including the number of deaths as a result of injury, injuries with days away from work or restricted workdays or both, total days away from work, total days of restricted work activity, and injuries without lost workdays;

total illnesses, including deaths as a result of illness, illnesses with days away from work or restricted workdays or both, total days away from work, total days of restricted work activity, and illnesses without lost workdays; and

types of illnesses experienced by the workers, including skin diseases or disorders, diseases of the lungs due to dust, respiratory conditions due to toxic agents, poisonings, disorders due to physical agents, disorders associated with repeated trauma, and other occupational illnesses.

The information collected enables OSHA to compute each establishment's LWDII rate and injury and illness incidence rate. See the appendix for a copy of the OSHA data collection form. OSHA, in announcing its plans to collect establishment-specific injury and illness data by mail, indicated that such information would be used in a variety of ways to help OSHA carry out its responsibilities more efficiently and effectively. The intended uses were (1) directing OSHA's program activities, including the scheduling of establishment inspections under its enforcement program and the targeting of mailings of safety and health information to employers under its nonenforcement programs; (2) monitoring and tracking injury and illness incidents; (3) developing information for promulgating, revising, and evaluating OSHA's safety and health standards; (4) evaluating the effectiveness of OSHA's enforcement, training, and voluntary programs; and (5) providing pertinent information to the public. In addition, OSHA stated that the establishment-specific data were necessary for it to meet the requirements of the Government Performance and Results Act (GPRA), which directs federal agencies to implement a program of strategic planning, develop systematic measures of performance to assess the impact of individual government programs, and produce annual performance reports. Although OSHA collected establishment-specific injury and illness data during 1996 and 1997, as of April 1998 it had made only limited use of the data. None of the intended purposes has been fully implemented, and the data have not been used for other purposes. About 70,000 establishments responded to both the 1996 and 1997 surveys—about 88 percent of the establishments surveyed. According to OSHA officials, firm plans for using the data involve enforcement activities and meeting the performance goals OSHA established under GPRA. The data will be used as part of Labor's performance measurement system to track the impact of OSHA's enforcement and compliance assistance interventions. For example, to measure the extent to which OSHA achieves its goal of reducing injuries and illnesses by 15 percent in high-hazard industries, such as food processing and logging, the agency will track survey data from employers in these industries. According to OSHA's directive (CPL 2-0.119), the agency also plans to use survey data to schedule enforcement activities for establishments with the highest LWDII rates. OSHA will use the data to identify the 500 establishments with the highest rates and schedule them for on-site inspections.
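The LWDII and incidence rates used in these targeting decisions follow the standard BLS convention of normalizing case counts to 200,000 employee-hours, the equivalent of 100 full-time workers at 40 hours per week for 50 weeks. A sketch of the arithmetic, with hypothetical inputs:

```python
# Standard BLS/OSHA incidence-rate arithmetic: case counts are normalized to
# 200,000 employee-hours (100 full-time workers x 40 hours/week x 50 weeks).
# The example inputs below are hypothetical, not actual survey data.
BASE_HOURS = 200_000

def incidence_rate(cases, hours_worked):
    """Cases per 100 full-time-equivalent workers."""
    return cases * BASE_HOURS / hours_worked

# An establishment reporting 14 lost workday cases (days away from work or
# restricted work activity) and 26 total recordable cases over 400,000 hours:
print(incidence_rate(14, 400_000))  # LWDII rate: 7.0 (at OSHA's 1997 cutoff)
print(incidence_rate(26, 400_000))  # total injury and illness rate: 13.0
```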
In addition, OSHA wants to use the establishment-specific injury and illness data to identify employers for participation in its new Cooperative Compliance Program (CCP). Under this program, OSHA would invite employers who report high LWDII rates on the survey to work cooperatively with OSHA to eliminate the hazardous working conditions. These employers would be put on a list of those most likely to be inspected; those that choose to participate, however, must establish an effective safety and health program. They must also agree to (1) find and remove hazards, (2) work toward reducing injuries and illnesses, (3) fully involve employees in their safety and health program, (4) share injury and illness data, and (5) provide OSHA with information from their annual injury and illness records. Under CCP, employers with 100 or fewer employees who choose to participate and agree to seek free assistance from their state OSHA consultation program to establish effective safety and health programs reduce their likelihood of being inspected by OSHA to 10 percent. CCP participants with more than 100 employees, and smaller employers not using consultation services, face a 30-percent chance of being inspected. If identified employers do not agree to participate in the program, they will remain on OSHA's list for on-site inspection. (These inspection likelihoods are sketched below.)

According to OSHA officials, inspections of CCP participants will most likely be shorter than regular inspections and result in lower penalties than normal because of these employers' commitment to finding and eliminating hazardous working conditions in their establishments. OSHA believes that employers who successfully fulfill the requirements of the program should reduce injuries, illnesses, and fatalities, leading to lower workers' compensation and insurance costs. In addition, workers whose employers join the program will be more involved in establishment safety and health issues and should experience fewer injuries and illnesses and have an improved quality of work life. OSHA will also benefit by extending its resources and expanding the base of employers with safety and health programs, which OSHA believes is a major difference between employers with low injury rates and those with high rates.

OSHA used the information it collected in 1997 to develop a list of about 12,500 establishments with the highest LWDII rates—that is, LWDII rates of 7.0 or higher. OSHA scheduled the 500 establishments with the highest LWDII rates for inspection and began these inspections in December 1997. In November 1997, OSHA invited about 12,000 of these establishments—less than 20 percent of those that responded to the survey—to participate in the CCP. According to OSHA officials, more than 89 percent of the employers invited by OSHA agreed to participate in the program.

In response to a lawsuit filed by the U.S. Chamber of Commerce and others claiming that OSHA had not followed proper procedures in implementing the CCP, however, a federal court of appeals ordered OSHA in February 1998 to halt the enforcement program that includes the CCP until the court decides whether the program is valid. According to OSHA officials, oral argument is scheduled for December 1998, and a decision is unlikely to be issued until some time in 1999. The order also required OSHA to stop conducting its inspections of the 500 establishments with the highest LWDII rates; 89 of these inspections had been completed when OSHA was told to stop.
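The CCP participation and inspection-likelihood rules lend themselves to a simple decision table. The sketch below is a hedged illustration of those rules as described above; the function name, parameters, and the None return for non-participants (to whom the report assigns no percentage) are our framing, not OSHA's.

```python
# Hedged sketch of the CCP inspection-likelihood rules described above.
# The 10- and 30-percent figures come from the report; everything else
# here (names, the None convention) is illustrative.

from typing import Optional

def ccp_inspection_chance(participates: bool, employees: int,
                          uses_state_consultation: bool) -> Optional[float]:
    """Chance of an OSHA inspection for an employer invited into the CCP.

    Non-participants get None: they simply remain on OSHA's list of
    employers most likely to be inspected, with no stated percentage.
    """
    if not participates:
        return None
    if employees <= 100 and uses_state_consultation:
        return 0.10  # small employer using free state consultation services
    return 0.30      # larger participants, or small employers not using them

print(ccp_inspection_chance(True, 80, True))    # 0.1
print(ccp_inspection_chance(True, 250, False))  # 0.3
print(ccp_inspection_chance(False, 80, True))   # None
```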
OSHA officials stated that this delay in implementing the CCP will adversely affect many of its enforcement and nonenforcement activities. In April 1998, OSHA began implementing an interim inspection scheduling plan. Under the plan, OSHA will schedule for inspection establishments in 99 industries with LWDII rates of 6.4 or higher, according to calendar year 1996 BLS data. Establishments in these industries for which OSHA collected data in 1997, and whose LWDII rates are at or above the national average for their industry, will be randomly selected for inspection. The interim inspection plan has no CCP component.

In our 1994 report, we noted that one problem with relying on employer-provided data is the risk that employers may underreport injuries and illnesses if they know OSHA is collecting data about their establishments that could be used to target them for on-site compliance inspections. To reduce the risk of employers underreporting injury and illness data, OSHA needs a successful combination of enforcement and education. Therefore, we recommended in that report that OSHA implement procedures for ensuring that employers accurately record occupational injuries and illnesses. Because of its concerns about the quality of the data provided by employers responding to its surveys, OSHA is conducting on-site audits of employers' injury and illness records to assess these records' accuracy. OSHA has completed all of the 250 records audits it had planned to conduct.

OSHA gave no assurances about privacy rights or confidentiality associated with the data collected from employers selected to participate in the data collection survey. Privacy rights of individual employees did not present a problem because the only information about workers that OSHA collected was summary information on injuries and illnesses, which does not identify individual workers. OSHA said it took steps to protect employers' privacy rights and to maintain confidentiality of the information. OSHA did not pledge to employers that the data it collected in its surveys would be kept confidential, however, because the data could be subject to disclosure under FOIA. According to OSHA, this information would be made available to the public only in response to specific FOIA requests. The injury and illness data OSHA collected, however, are the same data that employers are required to post in their establishments each year.

In contrast with OSHA, BLS pledges confidentiality to the full extent permitted by law to all participating establishments and informs the respondents that the data will be used for statistical purposes only. BLS tabulates and publishes data aggregated at the national and state levels by various characteristics, such as industry group, occupation, and age. BLS does not tabulate or publish injury and illness data on individual establishments. Over the years, BLS has received FOIA requests for the data, including requests for establishment-specific injury and illness data, but has refused to disclose the data, relying on the FOIA exemption for confidential commercial or financial information. A federal district court has upheld BLS' right to withhold from disclosure commercial or financial information that has been voluntarily provided to BLS under a pledge of confidentiality, in large part because disclosure would impair the government's ability to obtain the data in the future. Whether OSHA would have a valid basis to rely on the same exemption has not been determined.
Under FOIA, a federal department or agency is required to disclose information to anyone who requests it, unless the information is covered by one of the law's exemptions. Examples of such exemptions include trade secrets and individuals' medical files. The medical files exemption excludes from disclosure any data from establishments that identify individual employees' injuries and illnesses. Another exemption excludes information compiled for law enforcement purposes that would disclose techniques, procedures, or guidelines for law enforcement investigations. This exemption excludes from mandatory disclosure any data that might provide advance notice of an inspection. According to OSHA, it does not disclose collected establishment-specific data while such data are being used for scheduling inspections, because disclosure might reveal the scheduling criteria. After the inspection is completed, however, the exemption no longer applies and the data may be subject to disclosure, OSHA officials said.

From 1996 to 1998, OSHA received many FOIA requests about the data collection initiative; many of these requests specifically concerned the CCP. The only establishments asked to participate in the CCP were those that responded to the data collection initiative, but, as already noted, implementation of the CCP has been postponed because of a lawsuit. Labor agencies handle all FOIA requests on a case-by-case basis. Most of the requests OSHA has received and responded to about the data initiative asked for the names and addresses of establishments identified as having LWDII rates high enough to be invited to participate in the CCP. OSHA has provided these requesters with the names and addresses of the establishments only. OSHA has also provided its field offices with the names and addresses of establishments in their regions to enable staff there to respond to similar requests. OSHA received one FOIA request for injury and illness data collected in the 1996 survey. As of April 1998, however, the agency had not released the requested information.

OSHA does not know the number of FOIA requests received by its headquarters and field offices about the data collection initiative. Labor is not required to and does not collect data on the specific subjects of FOIA requests. In addition, although federal agencies and departments must annually report to the Department of Justice on the number and cost of FOIA requests and responses, detailed information on the subjects of FOIA requests is not required. Moreover, although Labor has a national FOIA coordinator, it does not centrally track all FOIA requests received. FOIA requests concerning establishments identified by the data collection initiative are decentralized: they may be responded to by OSHA headquarters, regional, or area office staff. OSHA headquarters officials told us that generally they neither oversee nor approve FOIA responses handled by OSHA regional and area staff; nor are they informed of all FOIA requests received in the field. According to OSHA officials, however, area, regional, and national staff responsible for FOIA activities may coordinate efforts when preparing FOIA responses.

We provided a draft of this report to the Department of Labor for its review and comment. Although Labor did not provide written comments on the draft report, officials from OSHA and other offices provided technical comments, which we have incorporated as appropriate.
As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days after its issue date. At that time, we will send copies of this report to the Secretary of Labor and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-7014 or Larry Horinko, Assistant Director, at (202) 512-7001. Other major contributors to this report are John T. Carney, Evaluator-in-Charge; Ronni Schwartz, Senior Evaluator; and Robert G. Crystal, Assistant General Counsel.
Pursuant to a congressional request, GAO reviewed the Occupational Safety and Health Administration's (OSHA) efforts to collect establishment-specific data on injuries and illnesses. GAO noted that: (1) with about $2.6 million available annually in 1996, 1997, and 1998 for its data collection surveys, OSHA determined it could survey about 80,000 establishments each year; (2) within that constraint, OSHA used mainly Bureau of Labor Statistics data to select industries with high rates of injuries and illnesses; (3) OSHA used size of establishment as a determining factor for the number of establishments to survey; (4) in addition, OSHA knew some of the industries had high numbers of work-related fatalities; (5) OSHA surveyed establishments in these industries with 60 or more employees in both years; (6) employers surveyed were not required to develop new sets of injury and illness data to respond to OSHA surveys; (7) instead, these employers were already required by OSHA to keep at their establishments records of specific information on work-related injuries and illnesses; (8) OSHA also required surveyed establishments to provide information on employees' total hours worked and on the average number of employees who worked during the year; (9) OSHA planned to use the data collected to better identify establishments with the highest injury and illness rates so that it could more accurately target on-site compliance inspections to establishments with safety and health problems; (10) in addition, OSHA planned to use the data to better target its technical assistance and consultation efforts and to measure its performance under the Government Performance and Results Act of 1993 in meeting its goals of reducing establishment injuries and illnesses; (11) as of April 1998, however, OSHA had made only limited use of the data collected in its 1996 and 1997 surveys mainly because of two lawsuits; (12) a federal court ordered OSHA to halt implementation of a new program that, using the 1997 survey data, identified specific establishments with the highest lost workday injury and illness rates; (13) employers who declined to participate in this new program would remain on OSHA's list of employers most likely to be inspected; (14) the program has been suspended until the court issues a decision; (15) as a part of its data collection effort, OSHA gave no assurances about privacy or confidentiality when it requested establishment information from employers; and (16) OSHA has received many Freedom of Information Act requests for the names and addresses of the 12,000 establishments with high injury and illness rates that it invited to participate in the new program.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) established four purposes for the TANF block grant:

1. provide assistance to needy families so that children may be cared for in their own homes or homes of relatives;
2. end dependence of needy parents on government benefits by promoting job preparation, work, and marriage;
3. prevent and reduce out-of-wedlock pregnancies; and
4. encourage two-parent families.

Within these goals, states have responsibility for designing, implementing, and administering their welfare programs to comply with federal guidelines, as defined by federal law and HHS, including imposing a 5-year lifetime limit on many families receiving cash assistance. In addition, the block grant includes a maintenance-of-effort (MOE) requirement, which requires states to maintain a significant portion of their historic financial commitment to welfare-related programs. In fiscal year 2013, states spent a total of $31.6 billion in federal TANF funds and state MOE funds, with federal funds accounting for 53 percent of the total. The federal law that established TANF also created a TANF Contingency Fund that states could access in times of economic distress. In creating the TANF block grant, Congress emphasized the importance of state flexibility and restricted HHS's regulatory authority over the states except to the extent expressly provided in the law.

In line with the second purpose, a key TANF goal is helping parents prepare for and find jobs. The primary means for measuring state efforts in this area has been TANF's work participation rate. Generally, states are held accountable for ensuring that at least 50 percent of all families receiving TANF cash assistance and considered work-eligible participate in one or more of the federally-defined allowable activities for the required number of hours each month. The law also contains a provision known as the caseload reduction credit, which allows states to reduce the work participation rate they are required to meet based on reductions in the size of their TANF caseload; the interaction of the statutory rate and the credit is sketched below. In addition, TANF regulations provide that states that spend more than their MOE requirements generally receive an increase in their caseload reduction credit.

Our work has shown that over the years, states have engaged about one-third of families receiving TANF cash assistance in federally-defined work activities nationwide while still meeting their required work participation rates. Generally, this is because many states have relied on the caseload reduction credit and other allowable options to meet the requirement rather than engaging more families in specified work activities to reach a 50 percent rate. This has raised concerns among some policymakers that states are meeting the letter but not the spirit of the law.

PRWORA established higher work participation rate requirements and eliminated many exemptions from these requirements for recipients compared to what was in place prior to TANF. This reflected research that found that mandatory work requirements could reduce welfare receipt and increase employment among single mothers and help address concerns about long-term welfare receipt. In our prior work, we concluded that states' use of the modifications currently allowed in federal law and regulations, as well as states' policy choices, have diminished the usefulness of the work participation rate as a national TANF performance measure.
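As a minimal sketch of the interaction just described: under a simplified reading, the caseload reduction credit is subtracted, in percentage points, from the 50 percent statutory target. The detailed credit calculation in the regulations is more involved, so the function below is illustrative only, and its names and example figures are ours.

```python
# Minimal sketch of how the caseload reduction credit can lower a state's
# effective work participation requirement. The 50 percent statutory target
# is from the law; modeling the credit as a simple percentage-point
# subtraction floored at zero is a simplification of the regulations.

def effective_participation_rate(credit_percentage_points: float,
                                 statutory_rate: float = 50.0) -> float:
    """Effective all-families work participation rate a state must meet."""
    return max(0.0, statutory_rate - credit_percentage_points)

# A state with a 35-point credit must engage 15 percent of work-eligible
# families in countable activities; a credit of 50 points or more drives
# the requirement to zero, which this report later notes applied to 23
# states' all-families rates in fiscal year 2011.
print(effective_participation_rate(35.0))  # 15.0
print(effective_participation_rate(52.0))  # 0.0
```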
The Claims Resolution Act of 2010 required states to report additional information on TANF recipients' engagement in activities. An HHS report to Congress, based on 2011 state data, found that 24 percent of work-eligible individuals met federal work participation rate standards, but more than 50 percent of work-eligible individuals had zero hours of participation (see fig. 1). While some of these individuals were exempted from participation or excluded for some other reason, such as caring for a child under age one, at least some of these individuals had not been engaged in work activities by the state or local TANF agency. HHS data show that federal and state spending on work-related activities, such as subsidized employment, has declined in the last 3 years for which data are available (fiscal years 2011-2013) and has been under 8 percent of total TANF spending in each of these years.

Almost half of individuals meeting the work participation rate standards did so through unsubsidized employment—working in a regular job while receiving cash assistance. This suggests that states' ability to meet the work participation rate relies heavily on the activities of individuals who are already able to work rather than providing special assistance to those who have the most difficulty finding and keeping jobs. In fiscal year 2011, unsubsidized employment was the most common work activity among individuals participating in at least one hour of a countable work activity per month. Job search and job readiness assistance, followed by vocational education training, were the next most common activities (see fig. 2).

Between 1987 and 1996, there was considerable experimentation with welfare-to-work programs under TANF's predecessor program, Aid to Families with Dependent Children (AFDC), as HHS granted waivers to states to test and evaluate new approaches. A body of HHS-sponsored research resulted, showing that welfare-to-work programs can increase employment and reduce welfare receipt. This research helped inform the creation of TANF. During this period, 46 states received waivers from existing AFDC requirements to test and evaluate new approaches to improve employment outcomes for people on welfare. These initiatives were referred to as section 1115 waivers because they were granted under section 1115 of the Social Security Act. According to HHS officials, HHS required states that applied for and received waivers from federal program requirements to have an independent organization evaluate their program changes, supported by HHS and state funding. These waiver initiatives included placing time limits on the receipt of benefits and strengthening work requirements, among others. According to HHS, many of the policies and concepts included in state waiver requests were later incorporated into TANF.

States that received section 1115 waivers under AFDC were allowed, under PRWORA, to continue to operate under these waivers until their expiration. However, the last of these waivers expired in 2007, and no provision in law allowed these AFDC waivers to be extended. Questions have been raised as to whether HHS continues to have the authority under section 1115 to waive certain TANF program rules; this report does not address this issue. Under law, HHS is required to conduct TANF and related research, and the agency supports a broad array of research and evaluations. See appendix III for a list of relevant recent and ongoing HHS-funded evaluations.
Since the 1996 welfare law replaced AFDC with the TANF block grant and gave states flexibility to set spending priorities, states have used a smaller share of the grant to provide cash assistance. Within the first 5 years of TANF, the number of families receiving cash assistance declined by over half, and states shifted their TANF priorities to other forms of aid, or non-cash services. These can include any other services meeting TANF purposes, such as job preparation activities, child care and transportation assistance for parents, out-of-wedlock pregnancy prevention activities, and child welfare services. In fiscal year 1997, nationwide, states spent about 23 percent of federal TANF and state MOE funds on non-cash services. In contrast, states spent more than 66 percent of federal TANF and state MOE funds for these purposes in fiscal year 2013.

While the TANF block grant still serves as the nation's major cash assistance program for low-income families with children, states have also increasingly used it as a flexible funding stream for supporting a broad range of allowable services. However, we previously reported that the accountability framework currently in place in federal law and regulations has not kept pace with this evolution. Our prior work found that TANF's accountability framework provides incomplete information on how states' non-cash services are contributing to TANF purposes. We noted that Congress may wish to consider ways to improve reporting and performance information so that it encompasses the full breadth of states' uses of TANF funds.

The composition of the overall TANF caseload has also changed, with the percentage of "child-only" cases increasing from about 23 percent in fiscal year 1998 to about 46 percent in fiscal year 2011. These cases consist of families receiving cash assistance on behalf of children only, in contrast to other cases in which adults in the families also receive benefits on their own behalf. According to HHS, the increase in the percentage of child-only cases was primarily due to a sharp decline in the number of TANF cases with an eligible adult, rather than an increase in the number of child-only cases.

Some TANF cash assistance recipients are work-ready, while others face challenges that make them harder to employ. According to the preamble to a TANF final rule from 1999, several provisions of the law, including time limits, higher participation rate requirements, and fewer individual exemptions from participation requirements, taken together, suggest that states should serve more clients beyond just those who are job-ready. However, some state TANF officials we interviewed for a 2012 report said the pressure to meet TANF work participation rate requirements causes them to focus on the "ready-to-work" cash assistance population, which can leave the "hard-to-employ" population without services. Health issues, disability, substance abuse, criminal records, domestic violence, limited education, and responsibilities for disabled children or parents can all constitute employment challenges for TANF recipients, who may need enhanced assistance to prepare for, find, and keep jobs. States may generally count a family's participation in job readiness assistance, which can include mental health and substance abuse treatment, toward the work participation rate for only up to 12 weeks in a year.
A body of rigorous research points to four approaches that can be used to increase the employment and earnings of TANF cash assistance recipients: subsidized employment, treatment and employment services, career pathways, and modified work-first. Some of this research is based on programs that operated, at least in part, during TANF's predecessor program, AFDC, and have since ended, and some of these older studies focused on programs that predominantly targeted women. Some prior studies have also focused on individuals with education levels that are higher, on average, than the TANF population.

Evaluation of Florida's Back to Work program
Florida's Back to Work program used American Recovery and Reinvestment Act of 2009 funding to create subsidized jobs. Administered by regional workforce boards, it operated in 2010 and placed about 5,600 individuals in subsidized jobs. For-profit, non-profit, and public sector employers received subsidies that covered 80 to 95 percent of program participants' wages for 6 months. For-profit organizations were asked to commit to hiring the participants after the subsidy ended, and non-profits were encouraged to do the same. Program participants experienced significantly greater increases in unsubsidized employment compared to individuals who were eligible but did not take part in subsidized employment. Participants experienced a nearly $4,000 average increase in earnings over their prior earnings, compared to about a $1,500 average increase for the comparison group.

Subsidized employment – This approach involves the use of public funds to create or support temporary work opportunities for people who might otherwise be unemployed. One model of subsidized employment—transitional jobs—focuses on the hard-to-employ by providing temporary, wage-paying jobs, support services, and job placement help to individuals who have difficulty getting and holding jobs in the regular labor market. Subsidized employment programs have been used for decades to provide income support for people who are unable to find jobs in the regular labor market and improve employability for people with limited work experience. These programs can vary, for example, in terms of the amount of wages that are subsidized; prospects for permanent, unsubsidized employment; the degree of worksite supervision provided; and whether the worker is employed directly or through an intermediary (see fig. 3). Research has shown that subsidized employment can increase employment and earnings, although some studies suggest that these gains may not last beyond the subsidized position. Nevertheless, these programs can provide individuals who are hard to employ with work experience and can have indirect effects, such as reduced recidivism among former prisoners. Subsidized employment programs in the 1970s and 1980s, during the time of TANF's predecessor program, yielded positive results on employment and earnings. One such demonstration project tested a model in which 10,000 welfare recipients in seven locations received 4 to 8 weeks of training and spent up to one year in subsidized positions as home health aides. Most of the subsidized employment programs that produced long-term employment gains targeted women; disadvantaged men experienced fewer positive results following the period of subsidized employment. Following welfare reform, transitional jobs gained traction as a strategy to increase the employability of TANF recipients. The theory behind this approach is that people best learn to work through working.
Program staff can observe and coach participants on skills such as developing good work habits and dealing with coworkers to improve their future success in an unsubsidized job. Participants may be able to subsequently acquire jobs they would not otherwise have access to. Recent studies of transitional jobs programs have not shown sustained increases in employment, although these programs had indirect effects of reducing recidivism and reliance on welfare receipt.

As part of the American Recovery and Reinvestment Act of 2009, Congress provided funding through the TANF Emergency Contingency Fund that in part helped states cover the costs of increased expenditures for subsidized employment programs. (The act created a $5 billion Emergency Contingency Fund for state TANF programs, available in fiscal years 2009 and 2010. Pub. L. No. 111-5, § 2101(a)(1), 123 Stat. 115, 446.) Thirty-nine states and the District of Columbia reportedly used $1.3 billion from the Emergency Contingency Fund to create 260,000 subsidized jobs, according to research. Although these efforts were not focused solely on TANF cash assistance recipients, experts we spoke with said the availability of additional funds renewed interest among states in subsidized employment. According to one study, these efforts involved the private sector in creating subsidized positions to a greater extent than in the past.

Evaluation of New York City's Personal Roads to Individual Development and Employment (PRIDE)
New York City's PRIDE was a large-scale welfare-to-work program for recipients with work-limiting medical or mental health conditions. This program operated from 1999 to 2004 and served more than 30,000 people. Staff performed in-depth assessments of participants' work and education histories and medical conditions, then assigned them to activities—including unpaid work experience, education, and job placement assistance—that took account of their medical conditions. An evaluation of 3,000 randomly-assigned participants found that PRIDE significantly reduced the amount of welfare that participants received, in part because it generated increases in employment. Thirty-four percent of participants worked in jobs covered by unemployment insurance within 2 years after entering the study, compared to 27 percent of the control group. However, about two-thirds of participants never worked during the 2-year study period.

The HHS-funded Subsidized and Transitional Employment Demonstration is evaluating seven subsidized employment programs that target current, former, or potential TANF recipients, low-income noncustodial parents, and others. The Department of Labor-funded Enhanced Transitional Jobs Demonstration is evaluating seven sites for which it has provided grants for transitional jobs programs that target either non-custodial parents or former prisoners. HHS and the Department of Labor entered into a memorandum of agreement to coordinate these studies through shared data collection instruments and evaluation sites and coordinated reporting efforts. Both projects incorporate lessons from earlier research on transitional jobs and test whether providing subsidies to private employers can be effective for less job-ready participants. Reports on the results of these studies are scheduled for 2015 through 2018.

Treatment and employment services – This approach provides treatment for mental health needs, substance abuse, or a physical disability as well as employment services in different combinations and sequences (see fig. 4).
The theory behind this approach is that treatment can help to stabilize a health condition to make steady work possible, while work experience can help participants learn how to manage problems that could otherwise prevent them from retaining a regular job. Rigorous research has shown some limited impacts for this approach. Evaluation of one program that offered a mix of unpaid work experience, educational activities, and job search assistance to TANF recipients with work-limiting health conditions and disabilities showed, to some extent, increased employment that lasted for two years. Another evaluation, of a program in which staff visited participants' homes to teach them life skills, showed substantial increases in employment and earnings for those considered very hard to employ, but the larger population of participants receiving services experienced only small increases in employment and earnings.

In terms of future research possibilities, HHS is considering whether emerging research from the fields of psychology and neuroscience on executive functioning—which is related to behaviors such as goal-setting, self-regulation, planning, and problem-solving—could help adults succeed in employment. If so, it may have applications for TANF.

Career pathways – This approach has been defined in different ways, but can involve providing contextual learning to prepare individuals of various skill levels to advance in a high-demand occupation or industry. Reading, writing, and math skills are taught using real-life materials and situations from the industry in which they will be used. Employers in the targeted sectors help to determine what skills are required for participants to become employed and advance their careers in growing industries. Workers of varying skill levels can use multiple entry or exit points to advance within a specific sector or occupational field and gain industry-recognized credentials through a clear sequence of education, training, or a combination of education and training. Participants are provided career counseling and other support services (see fig. 5).

This approach builds on results from a random-assignment evaluation of sector-based employment, indicating that programs offering sector-specific training can increase the employment and earnings of low-income individuals. The approach is also informed by lessons from prior research showing that adult basic education alone has not been successful in connecting low-skilled individuals to jobs. Research on education-oriented welfare-to-work programs that operated in the 1980s and 1990s showed that the most successful programs offered short-term assignments that did not allow participants to "languish" in activities without making progress. This is in contrast to remedial education and GED preparation programs that either had difficulty retaining participants or kept them for years without clear progress. These prior findings informed a call for training that promotes career advancement, integrates basic education and skills training, and engages local employers, while providing support services to TANF recipients to improve program retention.

Public/Private Ventures' Sectoral Employment Impact Study
Results from Public/Private Ventures' Sectoral Employment Impact Study, based on follow-up interviews 24 to 30 months after random assignment, suggest that sectoral programs can increase the employment and earnings of traditionally disadvantaged workers.
The study, launched in 2003, evaluated three sectoral programs that train workers for skilled positions in a range of industries, including healthcare, manufacturing, information technology, and construction. The study assessed impacts on employment, earnings, hourly wages, and access to work-related benefits. Programs included in the study offered training that was focused on a specific sector or sectors and that took no more than a year to complete. Over the follow-up period, treatment group participants were employed an average of 1.3 months more than individuals in the control group. Individuals enrolled in training also earned about $4,500 more than individuals in the control group over a 2-year period and were more likely to work in jobs with higher wages and benefits.

As confirmed by HHS officials, there are no completed rigorous experimental impact studies of a comprehensive career pathways program, but HHS is funding relevant ongoing evaluations as part of an inter-agency effort with the Departments of Education and Labor to promote the use of career pathways. One of the studies, known as the Innovative Strategies for Increasing Self-Sufficiency project, is a nine-site, random assignment evaluation of career pathway programs. Initiated in 2007, the study will last 10 years, and early results are expected in the next 2 to 3 years. HHS also has a research portfolio for evaluating health care-related education and training programs operated with Health Profession Opportunity Grants (HPOG). These grants are targeted to TANF recipients and other low-income individuals and aim to prepare them for occupations in the health care field that pay well and are expected to either experience labor shortages or be in high demand. Three of the HPOG grantees are to have site-specific impact evaluations as part of the Innovative Strategies for Increasing Self-Sufficiency project, with reports expected in 2016. An interim analysis of the impacts of HPOG is expected in June 2016. HHS expects that further follow-up analysis of both Innovative Strategies for Increasing Self-Sufficiency and HPOG impacts will be released in 2018.

Research Snapshot: Modified Work-First Approach
The National Evaluation of Welfare-to-Work Strategies (NEWWS)
NEWWS examined the long-term effects of 11 mandatory welfare-to-work programs, in seven sites, on welfare recipients and their children. These programs took different approaches to helping welfare recipients find jobs, advance in the labor market, and leave public assistance. During the 1990s, more than 40,000 single-parent families were tracked over 5-year follow-up periods. A Portland, Oregon, program outperformed the other programs that were examined. It offered education or training to some participants, depending on the caseworkers' assessment of the individuals' skills and needs, and encouraged all participants to hold out for jobs that paid better than the minimum wage and offered stable employment. In the program's fifth year, it produced a 6.4 percent increase in employment and a 14.6 percent increase in average earnings, compared to the control group.

HHS is also analyzing responses to a Request for Information on career pathways.
According to HHS officials, this information will yield insights on (1) the benefits of and challenges to aligning diverse funding streams, programs, and stakeholders around career pathways systems; and (2) the current and potential future use of career pathways systems to help at-risk populations, including TANF cash assistance recipients, gain skills and access the middle class.

Modified work-first – This approach entails a strong work focus and some upfront education or training, as needed (see fig. 6). Research on welfare-to-work programs during the 1990s sought to test the effectiveness of programs that emphasized rapid employment ("work first") through mandatory job search compared to those that focused first on mandatory education and training. One program combined aspects of both of these approaches: it included a strong employment focus, the use of both job search and short-term education or training, and an emphasis on holding out for a good job, as opposed to participants taking the first job they were offered. Research shows that programs using this approach increased employment and earnings and reduced welfare receipt. This approach appears more effective than programs that focus solely on education and training or solely on job search.

We examined 10 ongoing programs that all help TANF cash assistance recipients gain employment, using various elements of the promising approaches described above to meet a range of participant needs (see table 1). The populations served included those whose limited prior work experience prevented them from gaining employment on their own, those with one or more work-limiting characteristics such as a mental or physical disability or a substance abuse problem, and those in need of additional skills to enter a new field or to increase their employability. All of the selected programs serve TANF recipients: 5 of the 10 serve TANF recipients exclusively, while the other programs serve a broader population.

Some of the programs we examined offered subsidized employment to those whose limited prior work experience prevents them from gaining employment on their own. For example, San Francisco's Jobs Now! program uses a community jobs model similar to a transitional employment program to place participants in employment positions. The program includes 6 months of subsidized employment and is targeted to those with limited experience or exposure to work. Some participants are placed in public service training programs where they are employed with the local human services agency or other city departments, like the Office of Parks and Recreation. Other participants who are more job ready are placed in a wage-subsidy program with private businesses. Under this program tier, Jobs Now! subsidizes 100 percent of the wage for the first month, 75 percent for the second month, and $1,000 for the next 3 months. Employers also agree to make a good faith effort to retain the employee after the subsidy ends. Administrators reported that the retention rate with private employers—once the subsidy ends—is between 75 and 80 percent.

Similarly, Erie County, New York's PIVOT wage subsidy program requires a commitment from employers to hire participants after the wage subsidy ends. PIVOT is designed to serve those TANF recipients who are among the more work-ready, but who did not find jobs on their own during previous job search activities.
When participants are referred to the program through the local TANF agency, they receive a skills assessment and enter a job-readiness training program for 45 days while their eligibility is being determined. Participants are screened to match their skill levels with employer needs. PIVOT places participants in local employment—in industries such as banking, hospitality, legal services, manufacturing, health, and childcare—for 6 months during job training. Participants' days are typically split between a work experience program and other work activities, such as GED classes or barrier remediation (for example, English lessons for refugees with language barriers).

Los Angeles County's large subsidized employment program includes a component in which participants can gain work experience with for-profits, nonprofits, or government agencies, and an on-the-job training component that places participants with private sector employers. For the latter, a subsidy for the participants' wages is sent to a private job provider during the first two months of employment. After those 60 days, the wage is paid directly by the employer, who receives $350 for each part-time employee and $500 for each full-time employee.

Three of the 10 programs we examined used the treatment and employment services approach to serve participants with one or more physical or mental disabilities, substance abuse, or other work-limiting characteristics. Utah's Licensed Clinical Therapist program provides services for clients with substance abuse, domestic violence, and mental health problems. An administrator said participants often enter the program under emotional turmoil that—if untreated—would prevent them from gaining stable employment. To address this need, clients receive a clinical assessment to diagnose mental health problems, and they can participate in clinical therapy sessions offered by the program in combination with job search and resume-building activities.

New York City's WeCARE was designed to provide comprehensive services to hard-to-employ clients with mental or physical health needs or substance abuse problems. The program uses an in-house comprehensive assessment of physical, psychological, and social needs by a board-certified doctor and social worker to determine whether a client is fully employable, can participate in work-related activities with limitations, needs treatment to stabilize a condition before engaging in work-related activities, or could be eligible for federal disability benefits. One important program element is flexibility. For example, participants have a window of several days to make a scheduled appointment. A contractor told us that by offering flexibility, the program is better able to engage participants whom other programs might automatically sanction for non-participation.

Ramsey County, Minnesota's FAST program is a partnership between agencies that co-locate services like mental health treatment, vocational rehabilitation, community health care, and TANF employment supports. Administrators said that with the right supports and an individualized service plan, clients with mental or physical health or substance abuse problems—who are not eligible for federal disability benefits—can gain employment. FAST contracts services with Goodwill/Easter Seals, which provides day-to-day supervision of participants and facilitates the individual placement and supports created for them.
Home visits are used as needed to assess families' needs, and family supports, including employment retention services, can last up to 9 months or longer.

Three of the 10 programs we examined used multiple elements of the sector-based career pathways approach to increase the skills of participants looking to enter a new field or increase their employability. Washington State's I-BEST program was developed to help participants increase their literacy and basic skills while earning certificates or degrees in order to qualify for in-demand jobs. The program is offered in 34 community and technical colleges in the state, and its curriculum is based on regional job market demands and resources. A college must demonstrate local demand for a career program and show that those who complete the program will be able to exit it and receive an average wage of $13/hour or more. Program administrators said they believe their program is successful because of its team-taught, integrated service delivery model. Within I-BEST classrooms, two instructors work together to bolster basic skills while integrating career-related content into students' curriculum. Since the program's founding about 10 years ago, more than 200 career programs in fields such as health care, early childhood education, and advanced manufacturing have been approved by the Washington State Board for Community and Technical Colleges.

The Minnesota FastTRAC program aims to move low-wage, low-skilled adults into credit-bearing training. Participants typically enter the program after being referred by various sources, including local TANF and workforce programs, and are screened to determine their education level and needs. Similar to the I-BEST program, clients then go to classroom-based programs in which instruction in math and reading is provided within the context of a profession like healthcare. Clients may also participate in some job shadowing during this period.

Similarly, Kentucky officials perceived a need to increase educational attainment among the TANF population to improve their employability and launched the Ready to Work program about 16 years ago. The program is run statewide through the Kentucky Community and Technical College System. Unlike the I-BEST program, which focuses specifically on high-demand occupations, Ready to Work allows participants to select classes from the full catalogue of local courses. However, the program does offer counseling on job prospects in various career tracks. Participants also engage in on- or off-campus work-study positions with public and private employers that are subsidized using TANF funds and connected to each participant's program of study and career goal.

All of the programs we examined used some form of assessment to identify participants' service needs. Across the programs, factors assessed included skill levels, work history, interests, and physical and mental health. Additionally, 6 of the 10 programs structured services or program offerings in two or more tiers or employment tracks for the purpose of better targeting the needs of participants. For example, the D.C. TANF program, which uses elements of a modified work-first approach, was redesigned in 2008-2009 in response to complaints that under the program's "one size fits all" service delivery model, job-ready clients and those with significant barriers to employment were receiving the same services. Under the redesign, a continuum of services is available based on the client's needs at assessment.
Clients are categorized into a service delivery tier based on their education, experience, and skills, as well as barriers they may face (see fig. 7).

Although the programs we examined were all focused on work, they varied in the extent to which they were structured to help their states meet TANF's work participation rate requirements, according to state and program officials. Contributing to the work participation rate, such as by requiring participants to meet specific federally-defined hours and activities requirements, was a goal of 5 of the 10 programs, for at least some portion of the participants served. For example, a Kentucky Ready to Work program administrator said its work-study component is its biggest asset for meeting the work participation rate requirements. Students are placed in private-employment positions when possible, and when positions are not available, administrators said students can work on campus or in volunteer positions to meet their hours requirements. In the D.C. TANF program, participants have different requirements depending on their service delivery tier. For example, job-ready participants are expected to meet both hours and activity requirements that are consistent with the federal work participation rate standards. Those who are less job ready must meet the hour requirements but may participate in any mix of activities, even if they do not count toward the District's work participation rate. An administrator told us the D.C. TANF program was designed based on what they thought would work best to get participants employed and with the expectation that this would also result in meeting the work participation rate requirements.

The work participation rate requirement was given less emphasis by the other five programs we examined, according to state and program officials. These administrators said their programs did not emphasize the work participation rate because of a conscious choice to prioritize other program goals, or because their programs served too few work-eligible TANF recipients to affect the state's rate. Nearly all (9 of the 10 programs we examined) were components of larger TANF programs, meaning that states and localities may have a variety of other mechanisms for meeting the work participation rate requirements beyond these programs. Even without a strong emphasis on the work participation rate, these programs—like those that did emphasize the work participation rate—still maintained an employment focus, and some offered activities that at least partially counted toward the work participation rate. Administrators in Los Angeles County's subsidized employment program said they do not over-emphasize contributing toward the work participation rate requirements, but instead focus on placing participants in work. Similar to the D.C. TANF program—where addressing participants' barriers and progressively increasing their engagement over time is thought to contribute to the work participation rate and increase long-term stable employment—Los Angeles County officials said if participants can be placed in employment, meeting the requirement is typically not an issue. Additionally, administrators said that because California allows clients to participate in some activities for longer durations than can be counted toward the federal requirement, they focus on keeping people engaged in work-related activities under the state rules.

GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1999).
Additional clarification from HHS regarding how activities related to the career pathways approach may be allowable under TANF rules could address misperceptions and encourage more widespread use of this approach by state and local TANF agencies.

Nearly all of the 10 programs we examined that currently use elements of promising approaches drew on expertise beyond the state or local TANF agency (see table 2). Nine of these programs involved partnerships with organizations and agencies such as community college systems, workforce agencies, and nonprofit organizations or contracted vendors. These partnerships provided the selected programs with access to the experience and expertise of a range of partners. For example:

Officials with the Los Angeles County TANF agency noted that partnership with a local workforce agency is a key facilitating factor in its subsidized employment program. The workforce agency has acted as the employer of record for payroll and insurance purposes and had existing employer relationships, so the TANF agency did not need to develop these capacities.

While Kentucky's Ready to Work program is entirely funded by TANF, the program is administered by the state's community college system, and its case managers are located on campus and familiar with the college environment. According to program officials, clients generally pay their tuition through federal Pell Grants, so Ready to Work case managers help clients complete financial aid applications and ensure they remain eligible for the grants.

Officials with Ramsey County, Minnesota's FAST program, which seeks to engage individuals with disabilities in work, reported that the experience and credibility of the initiative's lead nonprofit partner was a key facilitating factor. They particularly valued the nonprofit's previous experience working with both rehabilitation and employment services, as opposed to other contractors that have only operated in the TANF employment context.

Six of the programs also received technical assistance from a federal agency or nonprofit, such as a policy research organization or foundation. For example, Minnesota officials cited participation in the federal Career Pathways Institute as an important step in the development of the state's career pathways program. Officials said participation in technical assistance activities connected them with many other programs that influenced elements of their work. Officials in Los Angeles and Ramsey County, Minnesota, said that taking part in an evaluation conducted by a research organization helped inform their program models, and Ramsey County officials said their participation had enabled them to access additional resources, such as training from national experts.

Decisions by state and local policymakers to dedicate funds for the selected programs also facilitated the use of promising approaches, according to program officials. We have previously reported that when federal TANF funds are allocated to states, states do not necessarily direct the funding to state welfare agencies, but may allocate funds to support various programs depending on legislative priorities (see fig. 8). Officials with 7 of the 10 programs we interviewed noted that state or local funding decisions had facilitated their use of promising approaches. For example, officials with the District of Columbia TANF program noted that the allocation of additional funding has been the key factor that has allowed the agency to continue to support its modified work-first approach.
Officials reported that the approach requires more staff resources to assess clients' work readiness and determine appropriate activities than does the agency's prior "one size fits all" model, and the allocation of additional resources has allowed the agency to support these staff. The allocation of additional funds by state and local governments also facilitated San Francisco's use of a subsidized employment approach for TANF cash assistance recipients, according to program officials. They reported that these funds have supported the agency's ability to increase its wage subsidy in an effort to attract higher-wage employers. In addition to federal TANF funds, selected programs reported receiving funds from state and local sources. However, some of these funds may have included federal TANF funds that were being allocated by state and local entities. In addition, some of these state or local funds may have been used by states to meet their TANF maintenance of effort requirement.

As structured, the TANF program lacks incentives to encourage broader adoption of promising approaches by large numbers of state and local TANF agencies. The federal law that created TANF established a goal of increasing job preparation and employment, and included a provision addressing the development and evaluation of innovative approaches for reducing welfare dependency. However, several TANF program characteristics may not encourage states to adopt and test new approaches for increasing the employment and earnings of cash assistance recipients. HHS's authority over many aspects of TANF is limited, and it has not proposed legislative changes to address these areas. Without federal action, adoption and evaluation of promising approaches may continue to be limited to select states and localities, leaving TANF recipients in other locations without access to these promising approaches. (See 42 U.S.C. §§ 601(a), 617.) Accordingly, HHS has limited authority to influence state program choices and cannot compel states to adopt particular approaches for their welfare-to-work programs.

In addition, states may spend TANF funds on a wide range of programs and services that are not necessarily related to welfare-to-work activities, as long as these services support one of TANF's four statutory purposes. In a 2012 report, we found that states spent significant amounts of TANF funds on services such as child welfare or child care, and that state use of federal TANF funds for these and other services can create tensions and trade-offs in state funding decisions. As a result, any additional resources needed for implementing more costly promising approaches for TANF cash assistance clients may compete with other allowable uses of TANF funds. Officials with three programs we interviewed that exclusively use TANF funds to implement elements of promising approaches noted that their programs had been continuously funded for many years and suggested that it would be difficult to find funding for the programs were they beginning today.

Additionally, the federal work participation rate requirements do not necessarily work as an incentive for states to implement certain promising approaches, according to our interviews and prior work. Some experts and HHS officials we interviewed suggested that limits on the amount of time that certain job readiness and training activities may be counted toward a state's work participation rate may discourage states from pursuing approaches that involve longer-term treatment or education.
We have previously reported the concerns of state and local TANF officials that these limits do not allow sufficient time to address barriers to work for clients who have been out of the workforce for an extended period of time or to train clients for higher-wage employment that will prevent them from needing assistance in the future. See 42 U.S.C. § 607(c). Officials with 5 of the 10 programs we interviewed that were using promising approaches also expressed concerns about the limits. For example, officials with Ramsey County, Minnesota's treatment and employment services program said the TANF work activities rules do not provide enough flexibility for families coping with mental illness, in that the time limit on job search and job readiness assistance and the 20-hour minimum for core work activities are not realistic for all. As a result, officials said most clients in the program receive cash assistance through a state program rather than through TANF. Yet serving such clients through a state program instead of TANF likely limits the program's replicability in other state or local TANF agencies, as not all states may be willing to devote state funds for this purpose. Officials with Kentucky's Ready to Work program reported that the 12-month limit on counting vocational educational training toward the work participation rate has caused the program to rely more heavily on its work-study subsidized employment component, which has increased the program's cost. Further, one official said that while the subsidized work-study component is a critical program feature, the expectation that clients will work at least 20 hours a week once they have exhausted their allowable full-time education may affect academic outcomes. The work activity limits only apply to what states may count toward the work participation rate and do not prevent states from allowing clients to engage in these activities for longer periods of time. However, several experts we interviewed explained that many state policies on activities that are available to clients are shaped by the work participation rate requirements. That is, some states may design their programs to provide and support only those activities that count toward the work participation rate, while other states also provide and support activities that do not count toward the rate. In addition, we have previously found that states have relied on a combination of factors allowed in law to reduce the percentage of families they needed to engage in work to meet their work participation rate requirements. According to the most recent data available, from fiscal year 2011, 23 states used these factors to reduce their required work participation rate for all TANF families from 50 percent to 0 percent. As a result, states may have less incentive to use promising approaches to engage hard-to-employ individuals in work activities, as they can meet their work participation rate requirements without engaging these individuals. In 2013, we noted that the work participation rate is complex and that any potential changes to the measure would likely have a profound impact on state TANF programs.
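To make the rate arithmetic above concrete, the following sketch (in Python, with invented figures; the actual statutory calculation includes additional adjustments, such as credit related to excess state spending, that are not modeled here) shows how a caseload reduction credit can lower a state's required 50 percent all-families rate, in some cases to 0 percent.

```python
# Simplified sketch of TANF work participation rate mechanics.
# All figures are hypothetical; the statutory calculation involves
# additional adjustments not shown here.

STATUTORY_RATE = 50.0  # all-families rate, in percent of work-eligible families

def effective_required_rate(statutory_rate, caseload_reduction_credit):
    """The credit (in percentage points) is subtracted from the statutory
    rate; the requirement cannot fall below zero."""
    return max(statutory_rate - caseload_reduction_credit, 0.0)

def achieved_rate(families_meeting_requirements, work_eligible_families):
    return 100.0 * families_meeting_requirements / work_eligible_families

# Hypothetical state: the caseload fell enough since the base year to earn
# a 50-point credit, so the required rate drops from 50 percent to 0 percent.
required = effective_required_rate(STATUTORY_RATE, caseload_reduction_credit=50.0)
achieved = achieved_rate(families_meeting_requirements=3_000,
                         work_eligible_families=12_000)

print(f"required rate: {required:.0f}%, achieved rate: {achieved:.0f}%")
print("meets requirement" if achieved >= required else "does not meet requirement")
```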
Finally, interviews conducted for this report and our previous work indicate that state and local TANF agencies have little incentive to test the effectiveness of new approaches. While several program administrators cited short-term or anecdotal evidence that their programs may contribute to increases in employment and earnings, only 4 of the 10 programs had undergone or were currently involved in a rigorous impact evaluation. States are not required to conduct impact evaluations of their TANF programs under federal TANF law, although these evaluations can provide useful information on program effectiveness. A rigorous impact evaluation with a random assignment design allows for comparison with a control group in order to distinguish the program's influence from other factors. Officials with some of the programs that had not undergone rigorous evaluations said they had not participated in these studies because of concerns about administrative burden, the newness of the program, or the possibility that a portion of their caseload would not receive services or count toward the work participation rate. Local Minnesota FastTRAC contractors said long-term follow-ups with clients who had left the program would be burdensome to staff, who need to focus on newer or existing participants' needs. An official with the D.C. TANF program said it had not yet been evaluated formally because the program had only recently been redesigned. A Kentucky Ready to Work administrator said the program has not been rigorously evaluated through a random assignment study because the state TANF agency does not want to deny services to anyone who is eligible as a result of being assigned to a control group or miss an opportunity to contribute toward the work participation rate. An administrator at one site that was undergoing a rigorous impact evaluation said that because only half of study participants would be selected for the program, recruitment for the study was challenging. We have previously found that although HHS has a strong tradition of leading and supporting rigorous welfare research, there are fewer incentives for states to evaluate their programs under TANF than existed under the previous welfare program with its evaluation and funding provisions. Indeed, although HHS maintains an active research agenda, TANF agency participation in some recent and ongoing HHS evaluations has been limited. For example, none of the four sites in the recently completed Enhanced Services for the Hard-to-Employ Demonstration and Evaluation were TANF agencies. In addition, in two ongoing evaluations of career pathways programs, participating sites consist primarily of community colleges and workforce agencies. According to HHS, these sites vary in the extent to which they serve TANF cash assistance recipients. An HHS official we interviewed reported that engaging TANF programs in evaluations of promising approaches is difficult because of the administrative burden on the state or locality. Officials added that HHS has no authority to require state agency participation in research and evaluation and no dedicated funding to provide states or localities incentives to participate. However, HHS officials also noted that often entities beyond TANF agencies serve TANF cash assistance recipients and research findings are still relevant for these partners. HHS has also taken steps to share research findings with local TANF programs, welfare researchers, and policymakers.
For example, since 1998 the agency has sponsored an annual Welfare Research and Evaluation Conference to share evaluations of welfare reform and formulate ways to incorporate these findings in the design and implementation of programs. HHS also makes research results available through presentations at other conferences related to welfare and low-income issues, its agency websites, and online announcements sent to subscribers. While the 10 state and local programs we examined are making use of some promising approaches for moving TANF recipients into employment and increasing their earnings, incentives are lacking for large numbers of state and local TANF agencies to adopt and test such approaches under the structure of the TANF program. This suggests a loss of opportunity. Data already suggest that state TANF programs currently engage recipients in a relatively limited range of activities and that more TANF recipients could be engaged in activities that are work-related. TANF's goals include ending needy families' dependence on government benefits by promoting work. However, the program's block grant structure means that many policy decisions are decentralized, and HHS has limited authority in directing state programmatic choices. This tends to leave innovation to individual localities and states. Yet other allowable uses of TANF funds compete with the adoption of promising welfare-to-work approaches. Further, limited participation by TANF agencies in HHS evaluations may slow the development and adoption of new promising approaches, leaving TANF without a continuous improvement process. In announcing its intention to exercise waiver authority, HHS noted that the purpose of its proposed waivers was to test alternative and innovative strategies, policies, and procedures designed to improve employment outcomes for needy families. However, as HHS officials told us, no state has applied for a waiver since its announcement in 2012. While we are not taking a position on whether HHS has authority to issue waivers, it is clear that this strategy has not encouraged states to innovate and test new approaches. We recognize that PRWORA limits HHS's authority over state program and funding choices and evaluations of state programs. Yet HHS, as the federal agency that oversees TANF, is positioned to identify, suggest, and work in consultation with Congress on potential changes that would address the lack of incentives for states to adopt promising approaches. Absent federal action, adoption and evaluation of promising approaches may continue to be limited to select states and localities, leaving TANF recipients in other locations without access to these promising approaches. In the meantime, at least one promising approach—career pathways—appears to suffer from misperceptions about whether activities under the career pathways approach can be counted toward the work participation rate, or whether the career pathways approach is allowable with TANF funds. Without clarification that it is, indeed, allowable, TANF agencies may be discouraged from attempting it. To encourage broader adoption and evaluation of promising approaches and address impediments to the use of the career pathways approach among TANF agencies, we recommend that HHS take the following two actions: In consultation with Congress, identify potential changes that would address the lack of incentives for states and localities to adopt promising approaches and then develop and submit a legislative proposal outlining those changes.
Issue formal guidance to clarify how activities under the career pathways approach can be counted toward the work participation rate and that TANF funds may be used to finance the career pathways approach. We provided a draft of this report to HHS, the Department of Education, and the Department of Labor. The Departments of Education and Labor did not have comments. HHS provided written comments, reproduced in appendix V, in which the agency generally concurred with our recommendations. HHS also provided technical comments that we incorporated, as appropriate. HHS agreed with our recommendation that the agency consult with Congress on ways to address the lack of incentives for states and localities to adopt promising employment-focused approaches. HHS noted that in the Administration's Fiscal Year 2015 Budget Request, it stated, "when Congress takes up reauthorization, the Administration will be prepared to work with lawmakers to strengthen the program's effectiveness in accomplishing its goals. This effort should include using performance indicators to drive program improvement and ensuring that states have the flexibility to engage recipients in the most effective activities to promote success in the workforce, including families with serious barriers to employment." The budget request also included a legislative proposal to increase incentives for states to implement or strengthen subsidized employment programs by creating a new initiative funded by diverting existing TANF Contingency Funds to this new use. We were aware of these points in HHS's budget request and maintain that HHS should develop more concrete proposals to address the lack of incentives within the TANF program itself, and note that the agency need not wait for Congress to take up reauthorization to do so. HHS also noted that it has sponsored a large body of research that helps federal policymakers understand the wide variety of services provided by state TANF programs and provides credible information to state TANF decision-makers about possible innovative and effective approaches to serving their clients. At the same time, the agency acknowledged that gaps remain in the research on effective employment-focused approaches for low-income families and noted that additional research is needed. HHS also agreed with our recommendation that the agency issue formal guidance to clarify how the career pathways approach can be used by TANF agencies. HHS said that it will issue a formal Information Memorandum on the implementation of career pathways approaches in TANF, as part of the agency's ongoing technical assistance efforts. It also elaborated on the various ways it has promoted career pathways and other employment-focused approaches in recent years, as we acknowledged in the report. In addition to the need for guidance, HHS said that the 12-month limit on counting vocational education training toward the work participation rate is a constraint to the implementation of career pathways models in TANF programs. However, as we note in our report, states may allow TANF recipients to combine education with other work activities that count toward the work participation rate. Better clarification of this fact through formal guidance should help to address state misperceptions that the time limit on counting vocational education training prohibits the use of career pathways approaches.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Health and Human Services, Education, and Labor, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Our report (1) reviews some approaches that have been identified as holding promise to engage TANF recipients in employment and increase their earnings and examines ways in which selected states and localities have used them, and (2) identifies factors that influence the use of these and other promising approaches. To first identify promising approaches, we reviewed research summaries and syntheses of rigorous research on approaches for engaging TANF recipients in employment and increasing their earnings. We primarily focused on studies that used a random assignment (experimental) research design. This type of design compares the outcomes for groups that were randomly assigned either to the treatment or to a nonparticipating control group before the intervention, in an effort to control for any systematic difference between the groups that could account for a difference in their outcomes. A difference in these groups’ outcomes is believed to represent the program’s impact. To help compile the research we reviewed, we conducted a literature search of 12 research databases, including EconLit, ProQuest Research Library, and Social Services Abstracts, using search terms such as “welfare to work.” Our intent was not to describe all possible promising approaches, but rather to identify some key approaches. We considered approaches to be “promising” if a rigorous evaluation had determined that a program using this approach contributed to increased employment, income, or earnings, or reduced welfare receipt. Not all programs using the approach had to yield positive results in order for the approach to be considered promising. We realize that there is variation in how these approaches are implemented. Although a comprehensive career pathways program has not yet been rigorously evaluated, we included this approach because it was based on a rigorous evaluation of sector-based career training and lessons from other prior rigorous evaluations. We sought input on our list of approaches from researchers with MDRC, a nonprofit, nonpartisan education and social policy research organization. We also validated our list of approaches through the process of seeking expert nominations for programs currently using these approaches. For example, based on our review of the literature, we initially identified earnings supplements—the practice of supplementing low-wage workers’ earnings with incentives to encourage employment and employment retention—as a possible promising approach. However, because we did not find any current examples of this approach being used to serve TANF cash assistance recipients, we did not include it in our final list of promising approaches. 
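As a minimal illustration of the random assignment logic described above, the sketch below (Python, with invented outcome data and a hypothetical true effect) estimates a program's impact as the difference between the treatment and control groups' mean earnings.

```python
import random
import statistics

# Illustrative only: invented quarterly-earnings outcomes for families
# randomly assigned to a treatment (program) or control (no program) group.
random.seed(1)
population = list(range(200))
random.shuffle(population)
treatment_ids = set(population[:100])  # random assignment

def observed_earnings(person_id):
    base = random.gauss(2500, 600)                   # earnings absent the program
    bump = 300 if person_id in treatment_ids else 0  # hypothetical true effect
    return base + bump

treatment = [observed_earnings(i) for i in population if i in treatment_ids]
control = [observed_earnings(i) for i in population if i not in treatment_ids]

# Because assignment was random, the groups differ only by chance and by
# program receipt, so the mean difference estimates the program's impact.
impact = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated impact on mean quarterly earnings: ${impact:,.0f}")
```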
To identify state and local programs that are currently using these approaches to serve TANF cash assistance recipients, we interviewed cognizant researchers; officials from the Department of Health and Human Services (HHS), the Department of Labor, and the Department of Education; and others with TANF expertise. We selected 12 experts based on their knowledge of the research and efforts under way at the state and local level, as well as to reflect a range of perspectives on welfare-to-work issues. We identified them through our literature review and through referrals from other experts. We asked the experts and agency officials to identify state and local programs that are currently using the promising approaches we identified through our literature review to serve TANF cash assistance recipients. We asked the experts to indicate the specific reasons why they recommended these programs and whether evaluation results or outcome data were available for the programs; however, these programs have generally not been rigorously evaluated. From the programs identified by experts, we selected 10 programs to profile (located in 6 states—California, Kentucky, Minnesota, New York, Utah, and Washington—and the District of Columbia) to reflect diversity in approaches used, economic conditions, demographics, and geographic regions. We also considered how the program was connected to TANF and whether the program was in an area where TANF is state- or county-administered. We did not report on work participation rates for the states in which these programs are operated because some are state programs and others are locally administered. Work participation rates are calculated only on the state level by HHS, based on data reported by the states, and generally available after the end of the fiscal year. We conducted in-depth interviews, by phone and through four site visits, with program administrators and state and local TANF officials to learn about key program features that help increase employment and earnings. For 3 programs, we also interviewed local contractors. We obtained views from experts; federal, state, and local officials; and program administrators on factors that facilitate the use of promising approaches and that could broaden their use. We assessed the information and communication HHS provides to states related to promising approaches against federal internal control standards. We obtained budget, performance, and other program data for the programs we selected, as well as HHS administrative data on states' engagement of TANF recipients in prescribed work activities. We assessed the reliability of these data by reviewing related documentation and interviewing knowledgeable officials and determined they were sufficiently reliable for providing contextual information on these programs and describing work participation trends, respectively. We also reviewed relevant GAO, HHS Office of Inspector General, and HHS reports, as well as relevant federal laws, regulations, and guidance. We conducted this performance audit from January 2014 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In meeting the 30-hour average weekly requirement, individuals must spend at least 20 of the hours in certain activities (referred to as core activities). Any additional hours needed to meet the requirement can come from non-core or core activities.
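A simplified sketch of this hours-counting rule follows (Python); the activity names and the classification of which activities are core are illustrative only, and actual countability rules include per-activity time limits not modeled here.

```python
# Simplified check of the two-part hours rule described above: an average of
# 30 hours per week, of which at least 20 must come from core activities.
# Which activities are "core" is defined in law; this mapping is illustrative.
CORE = {"unsubsidized employment", "subsidized employment", "job search"}

def meets_hours_rule(weekly_hours_by_activity,
                     required_total=30, required_core=20):
    core_hours = sum(h for a, h in weekly_hours_by_activity.items() if a in CORE)
    total_hours = sum(weekly_hours_by_activity.values())
    # Core hours count toward both tests; non-core hours only toward the total.
    return core_hours >= required_core and total_hours >= required_total

participant = {"subsidized employment": 20, "vocational education": 12}
print(meets_hours_rule(participant))  # True: 20 core + 12 non-core = 32 total
```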
Employment Retention and Advancement Project (ERA): This study ran from 1998 to 2011. Under the study, the Department of Health and Human Services issued planning grants to 13 states to develop new programs. The goal was to test strategies for helping employed individuals keep their jobs and advance in the workforce. Only 3 of the programs were found to increase employment retention and advancement: the Texas sites (Corpus Christi and Fort Worth), Chicago, and Riverside PASS ERA programs. An ERA report was published in April 2010, "The Employment Retention and Advancement Project: How Effective Are Different Approaches Aiming to Increase Employment Retention and Advancement? Final Impacts for Twelve Models," which can be viewed at: http://www.acf.hhs.gov/programs/opre/resource/the-employment-retention-and-advancement-project-how-effective-are. Among the participating sites, 8 were led by TANF agencies at either the state or local level. The Texas ERA program offered monthly stipends of $200 to former TANF recipients working at least 30 hours per week. The Chicago program was a work-focused advancement program, offering targeted job search assistance and help identifying and accessing career ladders. These services were provided to participants by staff in a private, for-profit firm, and TANF recipients were required to have regular contact with program staff to continue to receive TANF benefits. In contrast, the Riverside PASS ERA program provided voluntary, individualized retention and advancement services, delivered primarily by three community-based organizations and community colleges. Each of these programs targeted a specific population's service needs. Enhanced Services for the Hard-to-Employ Demonstration and Evaluation: This demonstration tested several strategies for helping hard-to-employ parents find and sustain employment. One of the four sites in the study included TANF participants: the Transitional Work Corporation in Philadelphia conducted a subsidized employment demonstration exclusively for TANF clients. Other sites included programs in Kansas, Missouri, New York, Pennsylvania, and Rhode Island. This 10-year evaluation was completed in 2012. Findings from this study were used to influence the design of two new federal subsidized employment initiatives. Final evaluation reports can be viewed at: http://www.mdrc.org/publication/what-strategies-work-hard-employ. Temporary Assistance for Needy Families/Supplemental Security Income Disability Transition Project (TSDTP): This study is an effort to identify the extent of overlap between TANF and SSI programs and populations and develop pilot programs aimed to improve a variety of outcomes for individuals with disabilities and work-limiting barriers to employment. The study is a collaboration between ACF and the Social Security Administration through a contract with MDRC. This study is complete. Reports were published throughout 2013 and 2014. TSDTP included Ramsey County, Minnesota—a program highlighted in this GAO report—which participated in a pilot-phase experiment to increase employment among TANF participants with work limitations and disabilities.
Los Angeles County, California, and Muskegon County, Michigan, were also involved in the study, implementing pilot tests of approaches to serving individuals with disabilities either through the provision of services to TANF participants with work-limiting barriers, streamlining the Supplemental Security Income application process, or improving coordination between the two systems. Behavioral Innovations to Increase Self-Sufficiency (BIAS) study: Launched in 2010, BIAS is the first major opportunity to apply a behavioral economics lens to programs that serve poor families in the United States. The goal of the study is to learn how tools from behavioral science can improve the well-being of low-income children, adults, and families. The study is being conducted by OPRE and MDRC in partnership with behavioral experts. One site within the study is the Los Angeles County TANF agency. Results are planned for rolling release in 2014 and 2015. Two reports on BIAS were issued in April and August of 2014. Health Professions Opportunities Grant (HPOG): HPOG was established by the Patient Protection and Affordable Care Act with the goal of providing training programs in high-demand healthcare professions to TANF recipients and other low-income individuals. In 2010, the Administration for Children and Families (ACF) awarded five-year grants to 32 grantees in 23 states. The program includes post-secondary educational institutions, workforce investment boards, state and local government agencies, and community-based organizations. Two of the 27 non-tribal HPOG grantees are Departments of Human or Social Services agencies and also administer the TANF program. One of the five tribal sites also administers the Tribal TANF program. However, a number of HPOG grantees also serve as the employment services providers for TANF agencies. ACF's Office of Planning, Research, and Evaluation (OPRE) is evaluating HPOG demonstration projects. An interim analysis of the impacts of HPOG is expected in June 2016. Innovative Strategies for Increasing Self-Sufficiency (ISIS) Project: The ISIS evaluation began in 2007 as a multi-site, random-assignment evaluation of promising strategies for increasing employment and self-sufficiency among low-income families. It is under the direction of ACF. This study focuses on the career pathways approach as the main intervention. One of the nine ISIS sites—Seattle/King County Workforce Development Corporation—is a contracted employment services provider for a local TANF agency and serves both TANF and non-TANF clients. Other sites in the study include: Des Moines Area Community College, I-BEST in Washington State, Instituto del Progreso Latino, Madison Area Technical College, Pima Community College, San Diego Workforce Partnership, and Valley Initiative for Development and Advancement. Job Search Assistance (JSA) Strategies: This study was launched by OPRE in fall 2013 and features a multi-site, random-assignment evaluation to measure the relative impact of specific job search services offered by TANF programs on short-term labor market outcomes such as earnings and time to employment. The JSA evaluation aims (1) to provide information about the relative impacts of various JSA services and the manner in which agencies provide them, and (2) to provide actionable and policy-relevant feedback to the TANF field, including federal TANF policymakers, state and local TANF administrators, and frontline caseworkers.
Site recruitment is ongoing, and initial findings could be reported as early as the end of 2016. Subsidized and Transitional Employment Demonstration (STED): The STED study was launched by ACF in 2010 with the goal of demonstrating and evaluating the next generation of subsidized employment models for critical low-income populations. The project is led by MDRC, under contract with ACF, and examines strategies for providing counter-cyclical employment and for successfully transitioning individuals from short-term subsidized employment to unsubsidized employment. STED is being conducted in coordination with the Department of Labor's Enhanced Transitional Jobs Demonstration Project. Random assignment began at the first sites in early 2012 and continued through late 2013. Of the seven STED sites, three are TANF agency-operated, including Los Angeles, San Francisco, and Minnesota. The evaluation is scheduled to run through 2017. Bloom, Dan. Transitional Jobs: Background, Program Models, and Evaluation Evidence. Prepared by MDRC for the Department of Health and Human Services. Washington, D.C.: 2010. ———, Cynthia Miller, and Gilda Azurdia. Results from the Personal Roads to Individual Development and Employment (PRIDE) Program in New York City. MDRC. July 2007. ———, Pamela J. Loprest, and Sheila R. Zedlewski. TANF Recipients with Barriers to Employment. Washington, D.C.: Urban Institute, August 2011. Derr, Michelle and LaDonna Pavetti. Assisting TANF Recipients Living with Disabilities to Obtain and Maintain Employment: Creating Work Opportunities. Prepared by Mathematica Policy Research, Inc. for the U.S. Department of Health and Human Services. Washington, D.C.: February 2008. Greenberg, David H., Victoria Deitch, and Gayle Hamilton. "A Synthesis of Random Assignment Benefit-Cost Studies of Welfare-to-Work Programs." Journal of Benefit-Cost Analysis, vol. 1, issue 1, article 3 (2010). Gueron, Judith M. and Gayle Hamilton. The Role of Education and Training in Welfare Reform. The Brookings Institution, Policy Brief no. 20 (April 2002). Hamilton, Gayle. Improving Employment and Earnings for TANF Recipients. Prepared by the Urban Institute for the Department of Health and Human Services. Washington, D.C.: March 2012. ———. Moving People from Welfare to Work: Lessons from the National Evaluation of Welfare-to-Work Strategies. Prepared by MDRC for the Department of Health and Human Services and Department of Education. Washington, D.C.: 2002. Hendra, Richard, Keri-Nicole Dillman, Gayle Hamilton, Erika Lundquist, Karin Martinson, and Melissa Wavelet. The Employment Retention and Advancement Project: How Effective Are Different Approaches Aiming to Increase Employment Retention and Advancement? Final Impacts for Twelve Models. Prepared by MDRC for the Department of Health and Human Services. Washington, D.C.: April 2010. Kirby, Gretchen, Heather Hill, LaDonna Pavetti, Jon Jacobson, Michelle Derr, and Pamela Winston. Transitional Jobs: Stepping Stones to Unsubsidized Employment. Washington, D.C.: Mathematica Policy Research, Inc., April 2002. Maguire, Sheila, Joshua Freely, Carol Clymer, Maureen Conway, and Deena Schwartz. Tuning In to Local Labor Markets: Findings from the Sectoral Employment Impact Study. Philadelphia, PA: Public/Private Ventures, 2010. Roder, Anne, and Mark Elliott. Stimulating Opportunity: An Evaluation of ARRA-Funded Subsidized Employment Programs. New York, NY: Economic Mobility Corporation, September 2013. Werner, Alan, Catherine Dun Rappaport, Jennifer Bagnell Stuart, and Jennifer Lewis.
Literature Review: Career Pathways Programs, OPRE Report #2013-24, Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services (Washington, D.C.: June 24, 2013). In addition to the contact named above, Gale Harris (Assistant Director), Kristen Jones (Analyst-in-Charge), Holly Hobbs, and Michael Pahr made significant contributions to all aspects of this report. Also contributing to this report were James Bennett, Sue Bernstein, Ed Bodine, Alexander Galuten, Ashley McCall, Sheila McCoy, Almeta Spencer, and Walter Vance.
The TANF block grant requires states to engage a certain percentage of work-eligible cash assistance recipients in specified work-related activities, such as job search assistance and training. Yet, data suggest that more TANF recipients could receive assistance that would help them gain employment and reduce their dependence. GAO was asked to provide examples of what some states are doing to achieve these goals and how to expand these efforts. This report (1) reviews some approaches that have been identified as holding promise for engaging TANF recipients in employment and increasing their earnings and examines ways in which selected states and localities have used them, and (2) identifies factors that influence their use. To first identify promising approaches, GAO reviewed summaries and syntheses of rigorous research on approaches that increase employment and earnings, and profiled 10 state and local programs that were nominated by experts familiar with welfare research and state and local efforts, and that were selected to represent a range of approaches. GAO also reviewed relevant federal laws, regulations, and agency guidance, and interviewed agency officials and experts with a range of views. The 10 state and local programs GAO examined used various promising approaches to help Temporary Assistance for Needy Families (TANF) cash assistance recipients gain employment by meeting a range of participant needs. These approaches included the use of subsidized employment, employment alongside treatment for a health condition, and training for high-demand jobs. For example, for individuals in need of additional work experience, San Francisco's TANF program has provided subsidies to employers to place participants in temporary, wage-paying jobs. To help individuals with mental and physical disabilities and substance abuse problems, nonprofit contractors for New York City's TANF program have provided individualized assessment and treatment, often combined with employment. To prepare individuals with various skill levels for high-demand jobs, Minnesota and Washington have used a career pathways approach of combining occupation-specific training with basic skills education and support services. However, experts told GAO that some states have a misperception that this approach is not allowable under TANF rules, even though the Departments of Labor, Education, and Health and Human Services (HHS) support its use. HHS told GAO that states could still meet program requirements while using this approach, but the agency has not issued formal guidance clarifying this. Internal control standards for the federal government state that information should be communicated to managers in a form that enables them to carry out their responsibilities. As a result of these misperceptions, the career pathways approach may be underused by TANF agencies and TANF recipients could miss out on the potential benefits of this approach. Expertise and dedicated funds facilitated use of these promising approaches, but the federal TANF program itself lacks incentives for their wider adoption. Of the 10 programs GAO examined, 9 drew on the expertise of partner organizations—including community college systems, workforce agencies, and nonprofits. The programs also benefitted from decisions by state and local policymakers to dedicate funds—including TANF funds—for the selected programs, according to officials.
However, incentives for large numbers of state and local TANF agencies to adopt and test promising approaches are lacking under the structure of the TANF program for several reasons. First, many program design and funding choices are left to the states, and GAO's prior work has shown that state use of TANF funds for more costly welfare-to-work approaches can compete with other allowable uses of TANF funds. Second, TANF's main performance measure does not necessarily encourage agencies to use certain approaches that incorporate longer-term education and training or treatment services, although states are not prohibited from doing so. Third, little incentive exists for TANF agencies to evaluate their programs. HHS's authority over many aspects of TANF is limited and it has not proposed legislative changes to address these areas. Yet, because HHS oversees TANF, it is positioned to identify, suggest, and work in consultation with Congress on potential changes that would better address the lack of incentives for the use of promising approaches by states and to better meet the TANF goal of increasing employment. Without federal action, adoption and evaluation of promising approaches may continue to be limited to select states and localities, leaving TANF recipients in other locations without access to these promising approaches. GAO recommends that HHS issue guidance to clarify how the career pathways approach can be used by TANF agencies and identify potential changes to address the lack of incentives in the TANF program. HHS agreed with GAO's recommendations.
Phosphate is used in the manufacture of a variety of products, including toothpaste, soft drinks, and dishwashing and laundry detergents. Over 95 percent of the phosphate produced in the United States, however, is used in the manufacture of fertilizers and animal feed supplements. This section provides information on phosphate mining in Idaho, the phosphate-leasing process, the mine plan approval process, the Clean Water Act permitting process, and the CERCLA assessment and remediation process. Roughly 12 percent of the phosphate currently produced in the United States comes from the five active mines located in southeastern Idaho on lands managed by BLM, the Forest Service, the State of Idaho, and private landowners. In addition, phosphate mining occurred historically on nearby lands in Idaho leased by the Shoshone-Bannock Tribes on the Fort Hall Indian Reservation. The entire area of southeastern Idaho is at the center of the Western Phosphate Field that extends into six western states. Figure 1 shows the location of the field. Three mine operators currently mine phosphate at the five active mines in southeastern Idaho. At each of these mines, operators use drilling and blasting to expose the layers of phosphate ore so that it can be excavated and hauled by truck, train, or pipeline to a facility for processing. Two of these operators process the phosphate ore into fertilizer products, while the third produces elemental phosphorus for use in herbicides. To access the phosphate ore, the mine operators must also remove the overburden—that is, the layers of rock that overlay, or in some cases are layered between, the phosphate ore. The overburden was historically placed in external waste dumps or used as backfill in mine pits or in nearby valleys, creating what are known as cross-valley fills. Figure 2 shows an active phosphate mine in southeastern Idaho, and figure 3 shows an inactive phosphate mine with a cross-valley fill. After horses grazing downstream from a cross-valley fill on federal land became sick and had to be euthanized in 1996, it was discovered that much of the overburden at phosphate mines in Idaho contains high concentrations of selenium—a naturally occurring element that in trace amounts is essential to the normal functioning of cells in animals but that can be poisonous in large concentrations. The selenium present in the overburden can be transported by rain and snow into the groundwater or into rivers and streams, or picked up by the roots of plants growing on the waste piles. The uptake of selenium contamination in vegetation or the frequent ingestion of selenium by animals can cause it to build up over time in a process known as bioaccumulation. BLM officials estimate that over 600 head of livestock have died from selenium poisoning since 1996 in the area—including a 2005 incident involving the deaths of over 30 sheep near a mine on federal land. Adverse effects due to selenium contamination have also been documented in birds and aquatic animals such as fish and invertebrates. Figure 4 shows the phosphate mining process and how it can result in the release of selenium. In southeastern Idaho, selenium contamination has been measured at 3 of the 5 active mines, and at all 13 of the inactive mines. Table 1 shows the 18 phosphate mines in southeastern Idaho and their production status, and the locations where selenium contamination and livestock deaths have occurred. See appendix II for more detailed information on the acres and surface land ownership of these mines.
BLM issues phosphate leases under the Mineral Leasing Act of 1920. BLM is responsible for leasing on federal lands, but it must consult with the agency having jurisdiction over the surface, such as the Forest Service, with respect to surface protection and reclamation requirements. BLM will decline to issue a lease for phosphate mining if it is inconsistent with an applicable land-use plan. According to BLM officials, most of the 86 phosphate leases in southeast Idaho were issued over 50 years ago, and BLM last held a competitive lease sale in 1991. However, additional lands have been leased through lease modifications—a non-competitive process whereby a mine operator requests that BLM expand an existing lease to include lands adjacent to an active or proposed mine. Leases are for indefinite terms; however, BLM may make reasonable adjustments to the lease conditions once every 20 years. Lessees have the right to challenge the terms and conditions proposed by BLM through readjustment, including a right of appeal to the Interior Board of Land Appeals. Mine operators may also need to obtain a special-use permit to use National Forest System land for off-lease activities, such as the construction of access roads. Mine operators must pay a royalty of at least 5 percent of the gross value of phosphate rock and associated minerals produced, as well as annual rent of up to $1.00 per acre. For leases that have not been in production for 6 or more years, mine operators pay an annual royalty of $3 per acre that includes rent. According to officials with Interior's Office of Natural Resources Revenue, the federal government collected roughly $7 million in royalties and rents from phosphate mine operations on federal land in fiscal year 2010. Before conducting any operations under a lease, an operator must submit to BLM a mine plan detailing the operations to be conducted. The mine plan outlines basic mine operations and mine-specific production measurement methods for calculating royalties. The mine plan also is to include information on the environmental aspects of the proposed mine, such as pollutants that may enter waters, measures to be taken to prevent air and water pollution and damage to fish or wildlife, and a reclamation plan. The reclamation plan details the steps the operator will take to restore the land to its previous condition, including actions such as recontouring hillsides, removing roads and structures, and planting vegetation. BLM is required to consult with other federal agencies, such as those having jurisdiction over surface land, prior to approval and may require modifications to an approved plan if conditions warrant. For example, BLM may seek to modify a mine plan if selenium is discovered at the site after operations begin. After the approval of a mine plan, BLM sets a financial assurance amount for the mining operation. Financial assurances may cover individual leases or all leases for a single lessee in a state or nationwide. The minimum amount for an operator is $5,000 for individual leases, $25,000 for all of the operator's leases statewide, or $75,000 for all of the operator's leases nationwide. BLM may enter into agreements with states whereby any financial assurance provided to a state would also satisfy BLM's requirements.
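As a simplified illustration of the lease payment terms described above, the following sketch (Python, with invented production value and acreage) computes annual payments for a producing lease and for a lease that has been out of production for 6 or more years.

```python
# Illustrative calculation of federal phosphate lease payments. The 5 percent
# royalty and per-acre figures reflect the terms described above ("at least"
# 5 percent and "up to" $1.00 per acre); production value and acreage are
# invented for the example.

def producing_lease_payments(gross_value, acres,
                             royalty_rate=0.05, rent_per_acre=1.00):
    return royalty_rate * gross_value + rent_per_acre * acres

def nonproducing_lease_payments(acres, per_acre=3.00):
    # Leases out of production 6+ years: $3 per acre annually, inclusive of rent.
    return per_acre * acres

print(producing_lease_payments(gross_value=40_000_000, acres=1_200))
# 5% of $40M in production value plus $1/acre rent = $2,001,200
print(nonproducing_lease_payments(acres=1_200))  # $3,600
```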
BLM will release a financial assurance when it determines that the operator has (1) paid all royalties, rents, penalties, and assessments; (2) satisfied all lease obligations; (3) reclaimed the site; and (4) taken effective measures to ensure that the mineral prospecting or development activities will not adversely affect surface or subsurface resources. The Forest Service may also require operators to post financial assurances for activities associated with special-use permits. In association with the mine approval processes, BLM—in cooperation with the Forest Service if National Forest System land is involved—must evaluate the proposed mine under NEPA. NEPA requires federal agencies to evaluate the likely environmental effects of a proposed project using an environmental assessment or, if the project is likely to significantly affect the environment, a more detailed environmental impact statement (EIS). EPA officials told us that EPA is required to review, and issue written comments on, each draft EIS, which BLM may accept or reject. As part of the NEPA process, and also to comply with the Endangered Species Act, BLM and the Forest Service may also undertake a biological assessment to identify endangered or threatened species and critical habitat that may be affected by mine operations. If BLM and the Forest Service determine that a mine may affect an endangered or threatened species, FWS may issue a biological opinion as to whether the activity is likely to jeopardize the continued existence of the species. If FWS finds that the activity will not jeopardize the species, its opinion will still list measures that can be taken to minimize impacts on the species. The outcomes of these analyses may affect BLM's final decision on an operator's mining plan. Mine operators may also need to work with other agencies to obtain additional permits or certifications before they can begin mining operations. For example, operators must obtain a permit under section 404 of the Clean Water Act from the Corps for the discharge of dredged or fill material into waters of the United States at specified disposal sites. Such discharges can include disposal of mine overburden. In addition, operators must obtain a permit under Section 402 of the Clean Water Act from EPA for discharges of storm water runoff that is contaminated by contact with certain materials such as overburden. Under such permits, operators may need to implement technology-based controls to protect waters, but operators may also be required to implement other controls based on the quality of the water into which they are discharging. In addition, under section 303(d) of the Clean Water Act, states must establish a Total Maximum Daily Load (TMDL) for any water body that cannot meet applicable water quality standards even after technology-based controls are applied to sources of water pollution. The TMDL represents the total amount of a pollutant that can be discharged into a water body each day without exceeding the water quality standard for that water body. The state of Idaho has identified selenium as a substance impairing water quality in some of its waters, but it has not yet established any TMDLs for selenium; section 303(d) does not provide any specific deadline for the development of TMDLs.
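For illustration, the sketch below (Python) shows the bookkeeping a TMDL implies; the apportionment into point-source wasteload allocations, nonpoint-source load allocations, and a margin of safety reflects EPA's general TMDL framework rather than any Idaho-specific determination, and all figures are invented.

```python
# Illustrative TMDL bookkeeping. Under EPA's general framework, a TMDL is
# apportioned among point-source wasteload allocations (WLA), nonpoint-source
# load allocations (LA), and a margin of safety (MOS). Figures are invented.

def allocations_fit(tmdl_lbs_per_day, wla, la, mos):
    """True if the proposed allocations stay within the total daily load."""
    return sum(wla) + sum(la) + mos <= tmdl_lbs_per_day

# Hypothetical selenium TMDL of 2.0 lbs/day for a stream segment:
wla = [0.6, 0.4]   # two permitted mine discharges
la = [0.5]         # diffuse runoff from historical waste piles
mos = 0.3          # margin of safety for uncertainty

print(allocations_fit(2.0, wla, la, mos))  # True: 1.8 <= 2.0
```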
Section 401 of the Clean Water Act provides states with the opportunity to object to the issuance of federal permits and licenses, including section 404 or 402 permits that may affect water quality in the state. Accordingly, an operator seeking a federal permit for a project that may affect water quality in Idaho must also seek section 401 certification for the proposal from IDEQ. Environmental contamination discovered at a mine may require remediation under CERCLA, the federal government's principal program to respond to releases or substantial threats of releases of hazardous substances, pollutants, or contaminants which may present an imminent and substantial danger to the public health or welfare. Under CERCLA, the federal government has the authority to compel parties responsible for contaminating sites to clean them up, or to conduct cleanups itself and then seek reimbursement from the responsible parties. The National Priorities List (NPL) is EPA's list of the nation's most contaminated sites, and cleanups of these sites are typically expensive and lengthy. For NPL sites on Forest Service or BLM land, the land management agencies and EPA work together under interagency agreements to implement response actions. For non-NPL sites on Forest Service or BLM land, the land management agencies take the lead on implementing CERCLA response actions, except for emergencies, which have been delegated exclusively to EPA. In enforcing CERCLA, federal agencies generally attempt to reach an agreement—known as a settlement agreement—with responsible parties (such as mine operators or other entities) to perform and pay for site cleanups once contamination has been discovered. Under these agreements, responsible parties may be required to post a financial assurance to ensure the performance of agreed-upon cleanup actions. However, there are currently no regulations that require mine operators to provide such financial assurances; agencies and responsible parties negotiate these terms in each settlement. Under CERCLA, EPA is required to issue regulations requiring certain businesses that handle hazardous substances to demonstrate their ability to pay for environmental cleanup costs, but the agency has not yet issued such regulations. However, the agency expects to propose such a rule for certain types of mining, which could include phosphate mining, in 2013, according to EPA officials. EPA has model settlement agreements to help guide negotiations. After contamination has been identified, the agency taking the lead on the cleanup initiates a process to investigate the extent of the contamination, decide on the actions that will be taken to address contamination, and implement those actions. The CERCLA program has two basic types of cleanup: (1) cleanups under the removal process, which generally address short-term threats, and (2) cleanups under the remedial action process, which are generally longer-term cleanup actions. Removal actions include (1) time-critical removals for threats requiring action within 6 months, and (2) non-time-critical removals for threats where action can be delayed to account for a 6-month planning period. As shown in figure 5, the non-time-critical removal process involves three primary phases: (1) a site evaluation, including site investigation and engineering evaluation/cost analysis, to characterize the site and identify and analyze removal alternatives; (2) selection and implementation of the removal action; and (3) monitoring and maintenance.
The remedial action process begins with a remedial investigation and a feasibility study to characterize site conditions, assess the risks to human health and the environment, and evaluate various options to address the problems identified, among other things. These findings and decisions are documented in a record of decision. Implementation of the remedial action is divided into two parts: (1) remedial design, a further evaluation of the best way to implement the chosen remedy; and (2) remedial action, the implementation of the remedy selected. When physical construction of all remedial actions is complete and other criteria are met, the lead agency deems the site to be construction complete. Most sites then enter an operation and maintenance phase, wherein the responsible party or the state maintains the remedy, while the lead agency conducts periodic reviews to ensure that the remedy continues to protect human health and the environment. For example, at a mine site with piles of overburden contaminated with selenium, the remedial action could consist of building a cap over the contaminated soil, while the operation and maintenance phase would consist of monitoring and maintaining the cap. The remedial action process is a more transparent and comprehensive process with more distinct steps than the removal action process. For example, CERCLA and its implementing regulations provide more opportunities for the public to participate in the remedial action process, including participation in site-related decisions, than are required in the removal action process, which may be limited to a single comment period. The remedial investigation/feasibility study process is subject to more-detailed data requirements than the site evaluation process for a removal action, and under section 121 of CERCLA, remedial actions generally require completed sites to achieve certain cleanup standards, which is not necessarily the case for removal actions. Finally, the remedial action process favors permanent remedies over short-term abatement. Figure 6 shows the remedial action process. Responsibility for selenium contamination in southeastern Idaho has been the subject of a recent CERCLA lawsuit. Under CERCLA, a party can be held liable for cleanup costs as an owner or operator of a facility where there was a release of hazardous substances or if the party arranged for disposal of hazardous substances. In 2009, a phosphate mine operator sued to compel the federal government to share the costs of cleaning up contamination under CERCLA at four mines, asserting that the government was an owner, arranger, and operator of the waste disposal sites at those mines. In the first step of the litigation, the court held in 2011 that the government was an owner, arranger, and operator. Specifically, the court found that the government owned the mine site, owned the middle waste shale that is the source of the hazardous substance involved (selenium), had the authority to control the disposal of that substance, and exercised some actual control over the disposal of that substance. Furthermore, the government managed the design and location of waste dumps at the mines and regularly inspected the mines to ensure compliance with the mining plans and waste disposal guidelines. The court did not determine the amount of cleanup costs the government owes, deferring that to a subsequent phase of the litigation. The parties have since agreed to settle the issues remaining in the case.
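Because the remedial action process proceeds through distinct phases, it can be summarized as an ordered checklist; the sketch below (Python) encodes the phases named above, with a simple helper, invented for illustration, for reporting a site's next step.

```python
# Phases of the CERCLA remedial action process as described above, in order.
REMEDIAL_PHASES = [
    "remedial investigation / feasibility study",
    "record of decision",
    "remedial design",
    "remedial action",
    "construction complete",
    "operation and maintenance (with periodic reviews)",
]

def next_phase(completed):
    """Return the first phase not yet completed, or None when all are done.
    `completed` is the list of phases finished so far."""
    for phase in REMEDIAL_PHASES:
        if phase not in completed:
            return phase
    return None

# Hypothetical site with a signed record of decision:
done = REMEDIAL_PHASES[:2]
print(next_phase(done))  # "remedial design"
```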
Federal agencies have taken steps to strengthen their oversight of phosphate mining on federal land since selenium contamination was discovered in 1996 by requiring more detailed environmental analysis and reclamation plans, requiring financial assurances that provide more coverage, hiring additional staff, and revising land-use plans. Nevertheless, oversight gaps remain that limit the agencies' ability to effectively address contamination from phosphate mining operations. These gaps include inadequate documentation of BLM's financial assurance practices, inconsistent coordination on financial assurances, an ineffective process for resolving agency disagreements on lease terms and conditions, and insufficient mechanisms for overseeing activities being conducted by third-party contractors. In an effort to reduce the likelihood that new and ongoing mines will result in additional sources of selenium contamination and to improve the management of ongoing CERCLA cleanups, BLM and the Forest Service have taken the following steps to strengthen their oversight of phosphate mining operations. BLM requires a more detailed environmental analysis for approving mine plans. According to BLM officials, in 1998 the agency began to prepare a full site-specific EIS when evaluating new mine plans, instead of relying on a 1977 areawide programmatic EIS and conducting site-specific environmental assessments that were more limited in scope, as had been done previously. Under the new EIS approach, officials told us, they conduct enhanced environmental testing and analysis to understand the potential sources of selenium at proposed mine sites, investigate how proposed mines would affect surface water and groundwater, and evaluate engineering models for options to prevent or mitigate the contamination. BLM requires more comprehensive reclamation plans. BLM officials told us that the agency now requires mine operators that propose new mine sites to develop more comprehensive reclamation plans than operators did previously. For example, mine operators are now generally required to agree to backfill open mine pits and not construct cross-valley fills; separate selenium-contaminated waste from other waste; engineer systems of natural or synthetic caps and covers for both reducing the infiltration of surface water into waste piles that can contribute to groundwater contamination and preventing the uptake of selenium by the roots of vegetation planted for reclamation; and select plants for revegetating mine sites that minimize selenium uptake and reduce the risk of ingestion by livestock and wildlife. In addition, the new mine plans provide for enhanced inspection of the mine operations in order to, among other things, monitor groundwater to detect selenium contamination early and oversee the construction of waste pile caps and covers. Since the state of Idaho decided to list parts of the Blackfoot River as impaired for selenium under section 303(d) of the Clean Water Act in 2002, BLM has also been requiring mine operators to demonstrate through their mine and reclamation plans that the mines will not add any measurable selenium contamination to the river and its tributaries. BLM requires full-cost financial assurances for new mines.
BLM officials told us that the agency decided in 2001 to set financial assurances for new mining operations using a formula based on the estimated full cost of reclaiming the site—meaning that, if the mine operator defaults on its reclamation obligation, the financial assurance would be adequate for BLM to hire contractors and incur oversight and overhead costs to perform the work—plus 3 months of estimated royalties. In the past, BLM officials told us, the agency, as agreed with the state of Idaho, generally set financial assurance amounts at no more than $2,500 per acre of surface disturbance, regardless of the potential cost of reclamation. The financial assurances for the new mines are substantially higher than those set under the per-acre calculation. For example, one mine approved in 2011 was required to provide a financial assurance valued at nearly $22 million; based on general past practices, that financial assurance would have been set at about $1.7 million, according to our analysis (a back-of-the-envelope comparison of the two approaches appears in the sketch below). The adequacy of these larger financial assurances for reclamation has not yet been tested, however, because all of the mines at which they have been required are still active. BLM officials told us that they generally have not increased or decreased financial assurance amounts for inactive phosphate mine operations because most of those mines will require further remediation for selenium contamination under CERCLA, and the costs to reclaim and remediate those sites—which would form the basis for any adjustment in the financial assurance amounts—have not yet been estimated. BLM and the Forest Service have made changes to readjusted leases, and BLM has denied lease relinquishments. To help ensure that phosphate mine operators are liable for any environmental damage they may cause, BLM and the Forest Service jointly devised a lease stipulation that has been included in every lease readjustment since 2002, covering a total of 63 leases. Under the new stipulation, the mine operator agrees to pay for certain environmental damage it causes and, when requesting that a lease be relinquished, to conduct an environmental site assessment of the mine site to identify any possible contamination. BLM also made other changes, such as adding a notification to phosphate lessees detailing the information that should be included in a proposed mine plan and language stating that lessees must comply with CERCLA and other environmental laws. BLM officials told us they intend to use the environmental site assessment in evaluating whether to allow the operator to relinquish the lease. Until the lease is relinquished, BLM can maintain the financial assurance associated with the lease, and any terms and conditions of the lease, including reclamation obligations, remain in effect. BLM officials told us that their practice has been to deny lease relinquishments if there are any indications that further CERCLA cleanup may be necessary to address selenium contamination; the last relinquishment BLM approved, in 1997, was for a site where no phosphate production had occurred. No phosphate mine operators requested a lease relinquishment from 1993 through 2003, and since 2003, BLM has not approved any of the 8 requests it has received, according to BLM officials.
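The scale of the change in bonding practice can be illustrated with a short calculation (a minimal sketch; the acreage is our assumption, implied by dividing the roughly $1.7 million per-acre figure above by the $2,500 rate, and is not data obtained from BLM):

```python
# Back-of-the-envelope comparison of BLM's historical per-acre bonding
# practice with the full-cost financial assurance required for one mine
# approved in 2011. The acreage is a hypothetical figure implied by the
# report's numbers, not data obtained from BLM.

PER_ACRE_RATE = 2_500           # dollars per acre of surface disturbance
FULL_COST_BOND = 22_000_000     # assurance required for the 2011 mine

disturbed_acres = 680           # assumed: $1.7 million / $2,500 per acre
per_acre_bond = PER_ACRE_RATE * disturbed_acres

print(f"Per-acre bond:  ${per_acre_bond:,}")    # $1,700,000
print(f"Full-cost bond: ${FULL_COST_BOND:,}")   # $22,000,000
print(f"The full-cost bond is roughly "
      f"{FULL_COST_BOND / per_acre_bond:.0f} times larger")
```

BLM and the Forest Service have taken some steps to supplement staff resources.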
Since selenium contamination was discovered in 1996, BLM and the Forest Service have combined their mine oversight field staff in southeastern Idaho into a minerals branch under an interagency initiative known as Service First. According to BLM officials, this initiative has allowed the agencies to eliminate redundant mine oversight, increase efficiency, and accomplish more work. BLM and the Forest Service have also hired additional staff to help manage the increased workload associated with overseeing new and ongoing mines. For example, the Forest Service has, among other things, created a new position to oversee its selenium remediation efforts. This position is currently held by a former EPA employee with 15 years of experience managing CERCLA cleanups. In addition, BLM has arranged to have mine operators pay for third-party contractors to support BLM staff in certain aspects of mine oversight. For example, mine operators have paid contractors to prepare EISs, and BLM has directed two mine operators to enter into and pay for contracts with third parties to provide monitoring and other services associated with constructing and implementing cover systems for waste rock. BLM has also asked mine operators to reimburse it directly to fund two BLM positions to conduct mine oversight. Despite these efforts, BLM officials in Idaho told us that they still face challenges in meeting their workload demands, particularly because changes in the oversight process since 1996 have required additional time and effort to implement. For example, BLM officials told us that it typically takes 5 years to complete an EIS for a proposed new mine, and this process can incur contractor costs of over $2 million. In contrast, the environmental assessment process BLM used previously could be completed in 2 years using agency resources. BLM and the Forest Service have revised their land-use plans to address contamination. BLM began the process of revising its land-use plan for the area covering the Idaho portion of the Western Phosphate Field in 2003, and in 2010 it issued a final EIS for the draft plan that provides direction for managing phosphate activities, preventing contamination, and setting standards for contaminants. BLM officials told us that these changes are intended to help ensure that phosphate mine operators provide adequate financial assurances and adequately reclaim mine sites, including preventing selenium contamination. BLM officials told us that they expect to approve the record of decision for the plan in 2012. Similarly, the Forest Service issued a revised forest plan for the Caribou National Forest in 2003 that set new standards and guidelines for phosphate mine development on forest lands to help detect and prevent selenium contamination. For example, the revised plan contains new standards that, among other things, stipulate that vegetation used in reclamation must be monitored for bioaccumulation of hazardous substances, such as selenium, and that financial assurances should be based on the estimated full cost of reclamation and be in place before the mine operator disturbs the land surface. We identified four gaps in the agencies' oversight efforts that could limit their ability to address ongoing problems with selenium contamination. First, although BLM has strengthened its oversight of new phosphate mines by requiring that operators provide financial assurances to cover the estimated full cost of reclamation, BLM has not documented this practice in its official agency policy.
In a 2002 internal evaluation of financial assurance policies, BLM recognized that its practices for phosphate mines in Idaho are not reflected in current policy and determined that the agency should revise the manual associated with this program to recognize these practices. However, BLM officials told us that the agency has not yet done so. As noted in the Standards for Internal Control in the Federal Government, agency policies should be clearly documented and readily available for examination to ensure effective program management. Without documenting its bonding practices in official agency policy, BLM cannot be assured that the current full-cost financial assurance practices for phosphate mines in Idaho will be implemented completely and consistently. Second, according to Forest Service officials, since at least 2006, BLM has not consistently coordinated with the Forest Service about the financial assurances for phosphate mining operations on National Forest System land. BLM must consult with the agency that administers the surface land before issuing a lease and, generally, on the surface protection and reclamation requirements of the lease. Forest Service officials told us that they are not consistently consulted about the appropriate level of financial assurances, nor are they made aware of the financial assurance decisions BLM makes. BLM and the Forest Service have an interagency agreement that includes procedures for coordinating on issues involving licenses, permits, and leases, but this agreement does not expressly discuss issues related to financial assurances. Similarly, BLM and the Forest Service have drafted an agreement covering the sharing of staff and resources under their Service First initiative in Idaho, but this draft agreement does not provide details on the steps the agencies should take to coordinate on financial assurances. The resulting inconsistency in coordination is of particular concern to Forest Service officials because they consider financial assurance amounts, particularly for existing mines, to be potentially inadequate to cover the estimated reclamation costs. These officials told us that additional communication and coordination with BLM when establishing and reviewing the adequacy of financial assurances would allow them to offer relevant information that might help BLM officials in setting bond amounts. They also noted that additional coordination would help ensure that the mine operator is acting in accordance with the portions of the forest plan specifying that financial assurances should be adequate to cover the full cost of reclamation and should be in place before surface disturbance occurs. BLM officials told us that while they do coordinate with the Forest Service, the coordination tends to occur on a case-by-case basis, and in some instances insufficient staff limits their ability to coordinate. Third, BLM and the Forest Service have not in all cases been able to reach agreement on the lease terms and conditions to include when issuing new leases and readjusting existing leases. BLM and the Forest Service have an interagency agreement stating that the agencies will coordinate at the local level on issues involving lease terms and conditions, and at the headquarters level on issues involving agency-wide lease terms and conditions. However, the agencies do not have a detailed process for doing so in a timely manner.
For example, beginning in 2010, BLM and the Forest Service discussed potential changes proposed by the Forest Service to the terms and conditions in three existing phosphate leases. According to BLM and Forest Service officials, although BLM made some of the changes the Forest Service was seeking, most of the substantive changes proposed by the Forest Service necessitated coordination with BLM's Washington, D.C., headquarters because they would require changes to the standard leasing forms used by BLM. Subsequently, in December 2011, the Forest Service proposed several changes to BLM's general lease terms and conditions to BLM's Washington, D.C., headquarters office that, in the Forest Service's view, would better protect the government from potential liability associated with selenium contamination in the future, particularly in light of the lawsuit noted earlier. However, as of April 2012, the agencies had not yet reached agreement on whether or how to change the general lease terms and conditions. During this period, BLM renewed three phosphate leases for another 20 years without including the changes the Forest Service was seeking, in part to meet the deadline for renewing the leases. In commenting on a draft of this report, Interior noted that an additional reason the leases were renewed without the Forest Service's proposed changes was a difference in professional judgment between officials of the two agencies. Without a timely process for resolving disagreements between BLM and the Forest Service regarding lease terms and conditions, we are concerned that BLM may again readjust leases or issue new leases without the agencies having resolved their differences over proposed lease terms and conditions. For Forest Service officials, this is of particular concern because 16 leases on Forest Service land are scheduled for readjustment in the next 5 years, and once a lease is readjusted, as noted earlier, its provisions are in effect for 20 years. In commenting on a draft of this report, both Interior and the Forest Service told us they have begun working to improve the coordination process. Fourth, BLM does not have mechanisms in place for overseeing all activities being conducted by third-party contractors, and the agency could not identify the statutory or regulatory provisions that authorize or lay out its responsibilities with regard to directing and overseeing such arrangements. In two instances, BLM has directed mine operators to enter into and pay for contracts with third parties to provide monitoring and other services associated with the construction and installation of waste-pile cover systems and other related reclamation activities on mine sites. However, in neither instance does BLM have a written agreement with the mine operators to outline expectations for the monitoring contracts or clearly define the roles and responsibilities of the various parties. BLM officials told us that even without such agreements, they believe that they have adequate controls in place and can take enforcement actions over the work being done. For example, officials noted that BLM, rather than the mine operator, selected the contractor for one mine and will do so for the second. In addition, they noted that one of the contracts between the operator and the third party is based on a statement of work BLM wrote and that this contract repeatedly states that work is being done for BLM and at BLM's direction.
Further, BLM officials told us that even without such written agreements, under the applicable regulations the agency can issue enforcement orders to compel the mine operator's compliance with established requirements, including those contained in the records of decision approving the mine plans. Nevertheless, we are concerned that without written agreements with mine operators, it is unclear whether BLM can ensure that the work is carried out to its satisfaction. For example, while BLM officials cited the agency's ability to issue enforcement orders as a useful control mechanism, agency officials have also noted limitations with this process. Specifically, they told us that an operator may take many months to comply with an enforcement order and that BLM lacks the authority to issue fines to, or impose fees on, phosphate mine operators for failing to comply with an enforcement order. Moreover, we have broader concerns because BLM could not identify the statutory or regulatory provisions that specifically authorize or lay out its responsibility with regard to the contractual arrangements the agency has required mine operators to enter into. Since selenium contamination was discovered in 1996, federal agencies and phosphate mine operators in Idaho have largely focused on assessing the extent of selenium contamination at the 16 mines where such contamination has been identified, and they have conducted limited remediation. The federal agencies reported having spent about $19 million on this effort, about half of which has been reimbursed by mine operators. The mine operators have incurred additional costs for assessment and remediation activities, according to agency officials, but the operators did not provide documentary evidence to support these claims. Future cleanup costs are unknown because the agencies have not selected final cleanup actions, although agency officials informally estimate these costs could amount to hundreds of millions of dollars. Since the discovery of selenium contamination in 1996, federal and state agencies and mine operators have worked to assess the extent of the contamination caused by phosphate mining and have conducted some limited remediation. Agency officials described a number of factors they believe contributed to the amount of time spent on these efforts, including a shift in their cleanup approach after nearly 10 years of activity. According to federal and state officials, in 1997 the agencies and mine operators formed a voluntary working group led by the mine operators to collaborate on efforts to investigate the selenium contamination discovered at that time. As part of this and other parallel efforts, mine operators and the agencies, including the Forest Service and the U.S. Geological Survey, spent 4 to 5 years collecting water-quality and other data and publishing reports that helped quantify the scope of the selenium problem, according to these officials. These data-gathering efforts indicated that high concentrations of selenium were widespread throughout the area. According to Forest Service and state officials, given the broad scope of the problem and the potential risks posed by the contamination identified by these early efforts, federal and state agencies decided it would be beneficial to move from a largely voluntary effort primarily paid for by mine operators to one in which the agencies formally coordinated their actions under federal and state authorities.
As a result, in 2000 the six agencies with authority over cleanup efforts—BLM, FWS, BIA, the Forest Service, EPA, and IDEQ—and the Shoshone-Bannock Tribes signed a memorandum of understanding that provided a framework for coordinating their investigations of, and responses to, the contamination. The memorandum identified a two-pronged approach: the agencies would conduct (1) an areawide investigation to continue the work the mine operators and agencies had initiated through the working group and (2) site-specific investigations to address contamination sources at individual mines. The areawide investigation began in 2001 and was led by IDEQ. The agencies' costs for this investigation were to be reimbursed by the operators in accordance with the terms of a settlement agreement. The investigation included gathering available data and identifying data gaps, conducting water and other environmental sampling, completing risk assessments that identified contaminant sources and ways in which humans and wildlife could be exposed, and developing guidance based on these risk assessments for remediation goals to potentially be used in the site-specific efforts. Sampling efforts showed selenium levels above state water quality standards; as a result, IDEQ listed more than 150 miles of streams flowing near and through the mines as impaired under the Clean Water Act. According to a senior IDEQ official, water quality monitoring work under the areawide investigation continues, although the settlement agreement for the investigation expired in 2011. Site-specific investigations began in 1998, and as of March 2012 assessment activities were continuing, according to Forest Service and EPA officials. Officials told us that the agencies originally decided to conduct this assessment work under CERCLA's non-time critical removal process, during which a site investigation and an engineering evaluation/cost analysis are conducted before a removal action is selected and implemented. According to EPA officials, this process is ideally suited for isolated contamination sources that have proven remedies. Forest Service and EPA officials told us that the agencies chose this route because the officials responsible at the time believed it would be the quickest way to control and abate immediate threats posed by the contamination within waste rock dumps. From 1998 to 2004, the agencies and mine operators entered into non-time critical removal process settlement agreements at six mines, with final engineering evaluation/cost analysis reports issued for two of these mines in 2006 and 2011. According to EPA and Forest Service officials, however, in 2006—after nearly 10 years of pursuing actions under the non-time critical removal process—the agencies decided to switch their approach and address the contamination issues at mine sites under the longer-term remedial action process, resulting in further cleanup delays as additional site-specific data were collected. In explaining the switch, Forest Service and EPA officials told us that information generated from the areawide and site-specific investigations conducted prior to 2006 indicated that the contamination issues at the mines were more complex and widespread than originally suspected and that many mines would likely require long-term water treatment of a kind not typically implemented as part of a non-time critical removal action.
As a result, according to EPA and Forest Service officials, CERCLA's remedial action process would allow a more comprehensive investigation and evaluation of the mines, as well as remediation that would fully address long-term threats posed by selenium. Forest Service officials told us that at three of the six mines where non-time critical settlement agreements had been reached, the Forest Service decided to continue with the non-time critical removal process to address contamination at waste rock dumps, while also negotiating settlement agreements with the mine operators to address other contamination at these sites through the long-term remedial action process. At the remaining three mines that were undergoing assessment under the non-time critical removal process, EPA began negotiating settlement agreements for remedial investigations and feasibility studies rather than continuing with the non-time critical removal process. As of March 2012, mine operators and agencies had begun work on the first step of the remedial action process—conducting remedial investigations—at 7 of the 16 mines known to have selenium contamination, including 5 of the 6 mines that were being addressed under the non-time critical removal process. The agencies and mine operators are still in the early stages of this process—none had produced a complete remedial investigation report as of March 2012—and, according to officials, completing this process at all mines will likely require years of additional work before final cleanup remedies are selected. For the remaining 9 of the 16 contaminated mines, a senior Forest Service official told us, officials are negotiating settlement agreements at 3 mines but have not initiated the remedial action process at the 6 others because the agencies have not had sufficient resources to begin negotiations, have not reached settlements with the mine operators, or (for two mines) are addressing the contamination under provisions of the Clean Water Act or the terms of the reclamation plan. Table 2 shows the CERCLA activities that have occurred at these 16 mines, as well as estimated dates for future activities. Federal agencies have also begun using CERCLA to address harm to fish and wildlife resources from contamination associated with the phosphate mines in southeastern Idaho. According to FWS officials, in 2011 the agency initiated the first step in a CERCLA process known as natural resource damage assessment, under which the agency may ultimately seek damages for harm to natural resources caused by phosphate mining and conduct natural resource restoration activities. Through this process, mine operators may work cooperatively with federal agencies to develop an assessment and implement natural resource restoration activities, or the agencies may independently develop a damage claim to be resolved through settlement or litigation. FWS officials told us they are determining whether a natural resource damage assessment is warranted and are working with other agencies in the area to determine whether they are interested in participating in the process. Since selenium contamination was discovered in 1996, federal agencies and mine operators have taken some limited cleanup actions to address highly problematic sources of contamination at three phosphate mines. Smoky Canyon mine.
In 2007, to help reduce the amount of selenium leaching into a creek, a mine operator built a pipeline to divert water around a contaminated waste rock dump at the Smoky Canyon mine on Forest Service land. This action was the culmination of a 2003 CERCLA settlement agreement under the non-time critical removal process. According to Forest Service officials, this waste rock dump—overburden placed into the bottom of a valley—was the source of one of the highest concentrations of selenium in the area, and the diversion appears to have substantially reduced the amount of selenium coming out of the waste dump. Officials from EPA and IDEQ, however, have voiced concerns about the effectiveness of the diversion, especially after large amounts of precipitation in 2011 caused the system to overflow, allowing water to once again flow through the waste rock and carry high amounts of selenium into the creek. The mine, including this waste rock dump, is being assessed as part of an ongoing remedial investigation, and additional cleanup measures to address the waste rock dump are expected to emerge from the investigation. North Maybe mine. As part of assessment work performed under the non-time critical removal process, one mine operator identified elevated selenium levels in ponds at the bottom of a large waste rock dump at the North Maybe mine. These selenium levels posed a threat to an adjacent creek if a large amount of precipitation should cause the ponds to overflow into the creek. To mitigate this threat, in 2008 the Forest Service approved a time-critical removal action under which the mine operator excavated contaminated sediments from the ponds. The operator then disposed of the sediments by adding them to a separate waste dump at the site and covering them with organic material to reduce exposure to precipitation. The Forest Service project manager reported that care was taken to ensure that this action would be consistent with potential future cleanup activities. South Rasmussen mine. After water quality monitoring revealed that a waste dump at the South Rasmussen mine was discharging selenium into a creek without a required permit, EPA took an enforcement action against the mine operator under the Clean Water Act. The mine operator agreed to pay $1.4 million in fines and, according to an EPA official, has begun to address the discharge by capturing the outflow from the dump and storing it temporarily in ponds. According to EPA officials, the mine operator will need to follow up with a more permanent solution to address the contamination from the mine, but the officials have not determined the most appropriate approach for doing so. EPA officials told us they are currently determining whether other active mines in the area may also be violating the Clean Water Act, but said that their efforts have been limited because EPA has not had sufficient staff or funding. EPA officials told us they may also initiate additional enforcement actions under the Clean Water Act at inactive mines if they determine that no progress is being made under CERCLA. At a fourth mine, BLM officials told us they are working with a mine operator to arrange for additional mitigation measures to stem an ongoing selenium discharge, although such measures have not yet been taken. Specifically, BLM officials told us they are working with the operator to incorporate additional mitigation measures into a reclamation plan being implemented at a portion of the Rasmussen Ridge mine that has already been mined.
BLM officials told us that they are able to take this somewhat unusual approach because other portions of the mine are still active, the operator has equipment on site that could be used to implement these mitigation measures, and undertaking these efforts now would help reduce costs for the operator. These officials also noted that the mine operator has an incentive to agree to this approach because the operator has other mine plan approvals pending with the agency, and cooperating would help demonstrate its willingness to address contamination resulting from its mining activities. Since 1996, mine operators have also conducted mitigation work, including testing remediation methods, outside the purview of CERCLA settlement agreements, according to operator representatives with whom we spoke. For example, these representatives told us that they took steps to restrict livestock and wildlife access to ponds and other water at their mines. In addition, according to one mine operator, the operators have supported research projects related to selenium contamination, including one that involved applying cheese whey and iron granules to contaminated soils to test whether the materials could prevent plant uptake of selenium, making it biologically unavailable. A representative from this operator told us that some of these projects were successful, although further research is needed before these techniques can be applied on a large scale. Further, EPA officials also noted that mine operators have taken action to reduce the uptake of selenium in reclamation vegetation and better control stormwater runoff. Agency officials told us that five factors have contributed to the length of time spent conducting assessments at contaminated phosphate mines in southeastern Idaho. First, the phosphate mines present complicated, large-scale contamination challenges that are unique to the area, according to agency officials. While the agencies have experience dealing with other contaminated, large-scale mines, especially hardrock mines, officials told us that the phosphate mines in southeastern Idaho feature a complex interaction of selenium in the waste rock with the surrounding surface water and groundwater. EPA officials explained that complex systems such as those found at the phosphate mines require considerable time to understand, which is why the agencies have spent—and will likely continue to spend—years assessing contamination at these mines. Second, having multiple agencies with authority over some aspect of cleanup at the mines—including different agencies acting as the lead at different sites—has slowed down the assessment process, according to officials with these agencies. For example, EPA and Forest Service officials noted that it is time-consuming to coordinate the other agencies' involvement, including obtaining, considering, and reconciling multiple, often conflicting, opinions. EPA officials told us that technical disagreements among agencies have led to delays in reviewing some operators' assessment documents. Further, according to these officials, the situation is exacerbated at those mines located on private and federal land where decision-making authority is shared, requiring those agencies to come to full agreement before moving forward with decisions or actions that affect the entire site. In addition, the agencies' roles sometimes needed clarification.
For example, at one mine located on an Indian reservation, agency officials said BIA and EPA spent 2 years negotiating which agency would take the lead in managing cleanup work under CERCLA because they disagreed over which of them had the legal authority to do so. Third, the decision to switch from the non-time critical removal process to the remedial action process resulted in delays. According to Forest Service and EPA officials, the agencies and mine operators spent additional time renegotiating settlement agreements at mines where they had begun to address contamination under the non-time critical removal process, delaying the process of negotiating new agreements at the mines that had not yet been addressed. These officials also told us that some of the data collected under the non-time critical removal process needed to be validated by third-party contractors, which resulted in additional delays. Senior EPA and Forest Service officials who entered the process after the non-time critical approach was selected told us they believed the agencies could have begun with the CERCLA remedial action process, in part because of the large sizes of the mine sites and the high degree of uncertainty associated with them. EPA and Forest Service officials told us that selecting the remedial action process from the beginning may have streamlined the process of assessing contamination. Fourth, according to EPA, Forest Service, and state officials, an individual mine operator's level of participation and cooperation influences the amount of progress that can be made at the contaminated mines, and a difficult situation with a mine operator can slow down the assessment process. According to agency officials, this has happened in several cases. For example, as noted earlier, one mine operator sued to compel the federal government to share liability for the costs of cleaning up contamination under CERCLA at four mines; Forest Service officials told us that responding to that lawsuit has taken resources and attention away from managing assessment and cleanup activities at the other mines on its land, resulting in additional delays to the assessment process. Ultimately, the difficulties associated with this situation resulted in the agency's terminating the assessment work the mine operator had been conducting at two of the mines and undertaking the work itself. In addition, EPA officials told us they have had concerns with the quality of the draft reports the mine operators produced for review by the agencies as part of the assessment process. These officials told us the agencies provided extensive comments on the draft reports, necessitating significant rework by the mine operators. Finally, the Forest Service did not have sufficient technical and management expertise in place, or a sufficient focus on enforcement, in the early years of the assessment efforts to successfully manage those efforts under CERCLA, according to Forest Service and EPA officials. For example, according to an internal Forest Service review, the CERCLA knowledge and expertise of the Forest Service field staff were not sufficient to address the complexity of the mines, which limited cleanup progress. The review also found the agency did not hire enough technical support contractors with relevant expertise to assist with oversight.
As a result, according to Forest Service officials and the review, it was difficult for the Forest Service to critically assess the mine operators' work, and the agency did not always conduct oversight in a timely manner. In addition, according to EPA officials, the Forest Service was not aggressive in enforcing the terms of its early settlement agreements, which led to a site assessment lasting 13 years in one case. Moreover, these officials told us they believe the lack of enforcement occurred in part because the Forest Service, unlike EPA, generally does not have experience with CERCLA enforcement. In 2008—12 years after the contamination was first identified—the Forest Service recognized it needed staff with more CERCLA-specific experience to manage the cleanup work, and in 2009 it hired a former EPA official to manage its cleanup program. According to this official, the Forest Service also hired more experienced project managers for each mine and increased its use of technical support contractors to bolster its oversight of mine operators. EPA officials told us they believe the composition of the Forest Service staff is now appropriate for managing the cleanup work at the phosphate mines, but that having the Forest Service manage CERCLA cleanups at the mines may continue to pose certain challenges. For example, EPA officials told us they believe the Forest Service lacks the institutional support for its CERCLA project managers that is available to EPA's project managers. Federal and state agencies reported having spent about $19 million since fiscal year 2001 to oversee assessment and remediation efforts at contaminated phosphate mines in southeastern Idaho, according to our analysis of federal and state data. Of this amount, these agencies spent $10 million on general project management and assessment efforts that reached across multiple mines, including time spent planning and consulting over the cleanup process and working on the areawide investigation, with the remainder spent primarily on oversight of activities at individual mines. Agencies also reported that the mine operators, who have carried out most of the assessment and remediation efforts, have reimbursed the agencies for 44 percent of the total agency expenditures under CERCLA settlement agreements; the rest has come from the agencies' budgets. Figure 7 shows, by agency, expenditures paid from agency budgets and expenditures reimbursed by mine operators. As figure 7 shows, the agencies reported varying amounts of expenditures and rates of reimbursement. The Forest Service, which is managing cleanup efforts at 7 of the 16 contaminated mines, reported spending the largest amount—about $9 million—an amount that approaches the combined expenditures of the other agencies. The Forest Service also reported more unreimbursed expenditures—about $6 million—than the other agencies; it received reimbursement for 25 percent of its expenditures. According to Forest Service officials, the agency's relatively low reimbursement rate occurs, in part, because the Forest Service did not have cost recovery provisions in its settlement agreements for any of its oversight work until 2006. BLM and BIA also reported low overall rates of reimbursement because, for the most part, these agencies did not have cost recovery arrangements at the mines where they were most active until settlement agreements were signed in 2008 or later.
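The reported totals and reimbursement rates can be reconciled with a short calculation (a minimal sketch; the dollar figures are the rounded amounts reported above, so the derived values are approximate):

```python
# Rough reconciliation of reported oversight expenditures with operator
# reimbursements. Dollar figures are the rounded amounts from this
# report, so the derived values are approximate.

figures = [
    # (label, total expenditures, fraction reimbursed by operators)
    ("All agencies", 19_000_000, 0.44),
    ("Forest Service", 9_000_000, 0.25),
]

for label, spent, rate in figures:
    reimbursed = spent * rate
    print(f"{label}: ${reimbursed:,.0f} reimbursed, "
          f"${spent - reimbursed:,.0f} paid from agency budgets")
# All agencies:   ~$8.4 million reimbursed, ~$10.6 million from budgets
# Forest Service: ~$2.3 million reimbursed, ~$6.8 million from budgets,
# consistent with the "about $6 million" unreimbursed amount reported
# above, given rounding
```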
Instead, BLM and BIA paid for oversight at these mines out of their budgets in the years leading up to the settlement agreements. In contrast, EPA has managed cleanup at five mines and reported incurring about $3 million in expenditures, of which 80 percent has been reimbursed by mine operators. EPA officials told us that because of limited funding and staff resources, as well as other factors, EPA Region 10 (which oversees phosphate assessment and cleanup activities in Idaho) helps oversee CERCLA assessments primarily at mines where it has negotiated cost recovery mechanisms as part of settlement agreements, or expects to do so in the future. Similarly, FWS officials told us that because of the agency's own funding constraints, they have also restricted their involvement to those mines where cost recovery is available, also resulting in a high rate of reimbursement. According to the EPA and FWS officials, this approach protects the agencies financially, but it has also limited their ability to contribute their expertise across the cleanup efforts. See appendix II for more detailed information on agency expenditures and amounts reimbursed by mine operators at each of the 16 mines known to have selenium contamination. The approximately $19 million in expenditures for agency oversight efforts does not include the cost of assessment and remediation work the mine operators have conducted, either under the terms of settlement agreements or independently, according to agency officials. For example, the operators' costs for developing and implementing plans for water quality sampling or constructing a diversion pipeline are not included in the agencies' expenditure total. We requested documentary evidence to support the costs incurred by the mine operators, but they did not provide these documents to us. Anecdotal information suggests that the mine operators have spent a significant sum on assessment and remediation work. For example, one mine operator representative told us his company spent $12 million on assessment and remediation actions taken under settlement agreements from 2003 through 2011 at its three mines, and another mine operator reported it had spent about $10 million on cleanup-related work at four of its mines. Nevertheless, without the mine operators' expenditure information, we cannot be assured of the accuracy of the amounts the mine operators reported spending on assessment and remediation work. According to EPA and Forest Service officials, they have not developed cost estimates for future cleanup actions at any of the 16 contaminated phosphate mines because the agencies are still conducting assessment work, and officials will not determine cleanup actions until they have completed this work. However, information from phosphate and hardrock mines provides an indication of likely future costs, which, according to informal estimates provided by EPA officials, could total hundreds of millions of dollars, in part because several of the mines are likely to require long-term remedial actions that are typically costly to implement. According to EPA and Forest Service officials, the following two cleanup actions, if required, would significantly influence cleanup costs at phosphate mines in southeastern Idaho: Long-term water collection and treatment.
According to EPA officials, the need for long-term water collection and treatment can be the most costly remedial action at a mine site, primarily because water collection and treatment can require ongoing activity for more than 100 years. Costs for this type of action include design; the initial capital investment in infrastructure for collection, storage, and treatment of the water; ongoing infrastructure upgrades and replacements; and personnel costs for continual operation. Such costs can be significant; for example, at another cleanup site in Region 10, the Holden hardrock mine in Washington State, EPA and the Forest Service have estimated the cost for long-term water treatment will be about $47 million. EPA officials told us that, based on information gathered to date, at least five phosphate mines may require long-term water treatment. Waste rock covers. According to EPA officials, the cost of consolidating and capping large volumes of waste rock materials can vary depending on the number of acres needing coverage, the type of cover required, and the topography of the area to be covered. For example, the Forest Service recently approved a cover, consisting of a clay layer, a synthetic membrane, and soil, that will be implemented as part of a non-time critical removal action at one phosphate mine and is estimated to cost about $17 million to cover roughly 100 acres of a cross-valley fill (roughly $170,000 per acre). A synthetic cover recently required as part of the reclamation plan at a new mine is estimated to cost about $29 million to cover nearly 400 acres (roughly $73,000 per acre). According to a Forest Service official, long-term monitoring and maintenance can further increase the costs of these covers. Based on our review, several other factors add to the uncertainty about the level of cleanup that will be required in the area and the amount and allocation of cleanup costs. The selenium water quality standard is expected to change. According to EPA officials, final remedial actions under CERCLA will be based, in part, on the state's water quality standard for selenium in rivers and streams. This standard is based on a national recommendation issued by EPA, which is in the process of updating its recommendation to better protect fish and other aquatic organisms. After the new recommendation is issued, according to EPA officials, Idaho will likely adopt a new state standard, using the recommendation as a basis. If the new standard is more stringent than the current standard, the level of cleanup required may change as well, increasing the costs associated with cleanup. A total maximum daily load (TMDL) for selenium has not yet been established. Because the streams in the Blackfoot River watershed—the main watershed affected by selenium contamination in Idaho—have been listed as impaired for selenium under section 303(d) of the Clean Water Act, the state is required to establish a TMDL for selenium for those waters. According to IDEQ and EPA officials, the state has delayed developing the TMDL, in part because the ongoing CERCLA assessment process is yielding valuable information about the sources of selenium in the watershed and how these sources interact with one another—and this information will be critical to helping the state establish a TMDL for selenium. Once a TMDL is established, it may inform pollution limits that are established for the mines that directly discharge selenium to the watershed.
According to EPA officials, a TMDL would also likely help provide a road map for handling the selenium contamination at the remaining mines where CERCLA actions have not yet been initiated. The government's share of future cleanup costs is not yet determined. One outcome of the CERCLA lawsuit a mine operator filed against the government is that, according to the court decision, the government is potentially liable for costs associated with environmental contamination at the four mine sites at issue in the litigation. However, the court has not determined the government's share of the cleanup costs. As of April 2012, the government and the mine operator were negotiating a proposed settlement regarding allocation of past and future costs that will be final once approved by the court. Because of the court's decision holding the government potentially liable, agency officials told us other mine operators may also seek to have the government share cleanup costs with them. If they are successful, the agency officials said, the government's costs could ultimately be significant. Federal agencies reported holding financial assurances valued at about $91 million for phosphate mine operations in southeastern Idaho to cover (1) mine reclamation and related activities and (2) site assessment and limited remediation activities negotiated under CERCLA settlement agreements. Specifically, the agencies reported holding financial assurances valued at approximately $80 million to cover mine reclamation and related activities and $11.4 million to cover site assessment and limited remediation activities negotiated under CERCLA settlement agreements. About $4.5 million of the latter amount was in the form of corporate guarantees, a type of financial assurance that both BLM and EPA have stated is potentially risky because corporate guarantees are not backed by a specific financial asset. The agencies have not entered into settlement agreements or established financial assurances to cover future cleanup costs because, as described in the prior section, they have not determined the actions that will be needed or the associated costs. As of March 2012, BLM reported holding about $75.2 million in reclamation financial assurances for 13 of the 18 phosphate mines where federal agencies are overseeing mining operations or cleanup activities. (The five mines without BLM financial assurances are all inactive and are being assessed for cleanup under CERCLA.) Over 95 percent of this amount—almost $72 million—is associated with the five currently active mines, and nearly all of that amount—over $66 million—is associated with the two most recently approved active mines, the Blackfoot Bridge and Smoky Canyon mines. BLM also reported holding an additional $127,200 in financial assurances for eight leases where operators are engaged in exploratory activities but mining has not yet commenced, as well as for other unmined sites. Figure 8 shows the amount and composition of BLM-held reclamation financial assurances for phosphate mines in Idaho, including the amounts associated with each of the five active mines. As of March 2012, there were 6 mines undergoing CERCLA assessment for which the Forest Service and EPA reported holding financial assurances valued at $11.4 million. Such assurances are intended to help ensure mine operators' performance under CERCLA settlement agreements.
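These reported amounts can be tallied with a simple calculation (a minimal sketch using the rounded figures above; the percentage shown is our own derivation rather than an agency-reported figure):

```python
# Tally of the financial assurances reported for phosphate mining
# operations in southeastern Idaho (amounts in millions of dollars,
# as rounded in this report).

reclamation = 80.0          # mine reclamation and related activities
cercla = 11.4               # CERCLA settlement agreement assurances
corporate_guarantees = 4.5  # portion of the CERCLA amount held as
                            # corporate guarantees, not financial assets

total = reclamation + cercla
print(f"Total assurances: about ${total:.0f} million")  # ~$91 million
print(f"Corporate guarantees as a share of CERCLA assurances: "
      f"{corporate_guarantees / cercla:.0%}")           # ~39%
```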
For one of these six mines, the Forest Service holds a financial assurance valued at about $3.9 million, which covers both a CERCLA assessment and previous remediation work at that mine to construct a water diversion pipeline, according to agency officials. For the other five mine sites, EPA holds $7.5 million in financial assurances to cover CERCLA assessments, according to agency officials. About $4.5 million of the $7.5 million in financial assurances that EPA holds for three of the five mines is in the form of corporate guarantees. Corporate guarantees are promises by mine operators, sometimes accompanied by a test of financial stability, to pay remediation costs, but these guarantees do not require that the operators set aside funds to pay such costs. As a result, for these three sites, EPA does not hold a financial asset that it could use to pay for the work specified in the settlement agreement should the operator fail to do so. EPA officials noted, however, that these guarantees cover only the investigation and planning stage of the process, and that the operator at these mines has already successfully completed a significant portion of the activities under an earlier removal settlement agreement. Nevertheless, EPA Region 10 has acknowledged the risk associated with corporate guarantees. In its 2009 Region 10 Mining Financial Assurance Strategy, the region noted that the form of a financial assurance is as important as the amount and stated that corporate guarantees are not a secure mechanism should a company go bankrupt or have financial difficulties. As an example, the region cited a corporate guarantee that it had accepted from an operator of a mine smelter site in Washington State. When EPA requested a more secure type of financial assurance, the operator filed for bankruptcy, leaving the federal government with additional responsibility for the cleanup costs at that site. Recognizing the inherent risks associated with corporate guarantees, the region stated in its strategy that it would no longer accept them as part of CERCLA consent decrees or settlement agreements related to cleanup actions for mining operations. Such concerns about corporate guarantees have been raised previously by others. In 2000, for example, BLM stopped accepting corporate guarantees for new mining operations, stating that they are less secure than other forms of financial assurance, particularly in light of fluctuating commodity prices and the potential for an operator to declare bankruptcy. Moreover, as we reported in August 2005, EPA has stated that corporate guarantees offer EPA minimal long-term assurance that a company with environmental liability will be able to fulfill its financial obligations. As a result, EPA and taxpayers may be exposed to significant financial risk, especially at mining sites where one or a few parties are liable for cleanups—as is the case for phosphate mining in Idaho. We also noted in our August 2005 report that EPA's selection of a reliable financial assurance mechanism is particularly important given the potential for large liabilities stemming from mining sites. EPA does not have regulations on the use of corporate guarantees as financial assurances under CERCLA, however. EPA is considering promulgating regulations related to financial assurances for mining and other industries and has solicited public comments on the risks associated with corporate guarantees and the experiences of regulators who have attempted to use them.
EPA expects to publish a proposed rule outlining its approach to financial assurances later in 2013, according to EPA officials. Selenium contamination from phosphate mining has been a concern in southeastern Idaho for over 15 years. Federal agencies have taken steps to strengthen their oversight of phosphate mining on federal land since selenium contamination was discovered in 1996 by requiring more detailed environmental assessments and reclamation plans, requiring financial assurances that provide more coverage, and hiring additional staff. However, addressing the contamination has been a lengthy undertaking, with many factors contributing to the length of this process, including the complexity and scale of the sites, sometimes-difficult relations with mine operators, an initial lack of expertise and resources on the part of the Forest Service, and the decision to switch to a more comprehensive cleanup approach. Nevertheless, the fact remains that after years of study and millions of dollars spent, the agencies and mine operators are still years away from fully understanding the extent of contamination in the area and many more years away from completing actual mine cleanup. The agencies have taken important steps aimed at preventing future contamination, including BLM's use of more rigorous oversight procedures when considering or approving new mines, but gaps in agency policies and coordination may result in missed opportunities for the agencies to fully implement the approaches they have developed. For example, while BLM's practice of setting financial assurances to cover the estimated full cost of reclamation for new phosphate mines may better protect the government from future cleanup liability, the agency has not documented this practice in official agency policy—lessening the certainty that the practice will be consistently followed in the future. Likewise, the lack of established coordination practices between BLM and the Forest Service may result in cases where BLM does not give full or timely consideration to the Forest Service's input when establishing mine lease terms and conditions or setting financial assurance amounts for mines in southeastern Idaho. As a result, BLM in some cases may be basing its decisions on incomplete information. Additionally, while BLM has attempted to leverage its limited resources by requiring mine operators to pay for contractors to help oversee reclamation work, it does not have mechanisms in place to fully oversee such activities and could not identify its authorities for directing and overseeing such arrangements. And finally, EPA's acceptance of financial assurances in the form of corporate guarantees related to assessment (and, potentially, cleanup) activities leaves the federal government at increased risk of shouldering more of the financial burden for these tasks should the mine operators fail to carry them out or declare bankruptcy. In its current efforts to draft regulations for financial assurances under CERCLA, EPA has stated that it plans to assess the risks associated with different forms of financial assurances, including corporate guarantees, and to draw on regulators' experiences in evaluating the adequacy of various financial mechanisms—a step we believe is important in ensuring that the financial assurances accepted by the federal government adequately reduce the government's exposure to cleanup costs.
To ensure effective oversight of phosphate-mining operations and of reclamation and cleanup, we are making three recommendations to the Secretary of the Interior and one to the Administrator of EPA. Specifically, we recommend that the Secretary of the Interior direct the Director of BLM to document the practice of requiring financial assurances to cover the estimated full cost of reclamation in BLM's official agency policy; work with the Chief of the Forest Service to develop a coordinated process for (1) proposing and evaluating lease terms and conditions for phosphate mines in southeastern Idaho and (2) sharing information on the amount and adequacy of financial assurances, to provide better coordination between the federal agencies on phosphate mine oversight; and analyze BLM's authorities for directing operators to enter into third-party contracting mechanisms. If BLM confirms that it has the authority, it should develop a policy document to ensure consistent implementation, including a requirement that BLM reach written agreement with operators regarding arrangements for third-party contracting. Should BLM determine that it does not have the authority to use such mechanisms—and should it wish to continue the practice—it should seek appropriate legislation for doing so. In addition, we recommend that the Administrator of EPA ensure that the agency complete its plan to assess whether corporate guarantees are an adequate financial mechanism, including giving due consideration to the experience of EPA Region 10 and BLM in using such assurances. If EPA determines that corporate guarantees are not an appropriate form of financial assurance, their use should be prohibited in the financial assurance regulations that the agency expects to promulgate for the mining industry. We provided EPA and the Departments of Agriculture, Defense, and the Interior with a draft of this report for their review and comment. EPA, the Forest Service (responding on behalf of the Department of Agriculture), and Interior generally agreed with our findings and recommendations, and their written comments are reproduced in appendixes III, IV, and V, respectively. Each of these agencies also provided technical comments, which we incorporated as appropriate. The Department of Defense declined to provide comments. While the Department of the Interior generally agreed with our findings and recommendations, it expressed concern that our discussion of BLM's coordination with the Forest Service on leasing activities could be misleading. Interior noted that in some instances BLM does not accept the Forest Service's recommended changes to existing phosphate leases because of differences in professional judgment, not because of a lack of coordination. Furthermore, Interior noted that BLM and the Forest Service have been discussing the Forest Service's proposed revisions to the standard lease terms and conditions for new leases to further protect the government from potential liability associated with selenium contamination, but that such discussions are necessarily detailed and time-consuming, and the lack of agreement to date does not constitute a lack of coordination. In this context, the Forest Service also noted that it places great value on its collaborative relationship with BLM and is committed to working with BLM to improve coordination and information sharing. We have made changes to the report to provide additional context and clarification regarding the agencies' coordination efforts.
Nevertheless, while we acknowledge that BLM and the Forest Service have begun efforts to improve their coordination on these issues, we continue to believe that they would benefit from a clearer process for coordinating in a timely manner and elevating issues to the headquarters level when necessary.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Agriculture, Defense, and the Interior; the Administrator of the Environmental Protection Agency; the Chief of the Forest Service; the Assistant Secretary for Indian Affairs; the Directors of the Bureau of Land Management and Fish and Wildlife Service; appropriate congressional committees; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

This appendix details the methods we used to examine the issues surrounding the oversight and cleanup of phosphate mines on federal lands. Specifically, we were asked to provide information on the (1) extent to which federal agencies' oversight of phosphate operations has changed since the discovery of selenium contamination in Idaho in 1996, and whether those changes appear sufficient to help the agencies prevent future contamination; (2) actions that federal agencies and mine operators have taken to assess and remediate contamination from phosphate mining on federal land, amounts they have spent on these actions, and estimated remaining costs; and (3) types and amounts of financial assurances in place for phosphate mining operations and the extent to which these assurances are likely to cover future cleanup costs.

For all objectives, we focused our report on agencies' and mine operators' activities in Idaho for two primary reasons. First, phosphate-mining operations on federal land are generally limited to the Western Phosphate Field, and all but one of these operations are located in Idaho. Second, the occurrence of selenium contamination resulting from phosphate-mining operations on federal lands is currently limited to Idaho; such contamination has not been discovered in neighboring states containing portions of the Western Phosphate Field.

To address the first objective, we reviewed federal laws and regulations relevant to the federal agencies' oversight of phosphate-mining operations on federal land in Idaho, including the National Environmental Policy Act (NEPA), the Mineral Leasing Act, the Clean Water Act, and the Endangered Species Act. In addition, we reviewed relevant agency documents and reports created both before and after 1996. These include Bureau of Land Management (BLM) and Forest Service land-use plans; BLM records of decision for new mine plans and associated NEPA documents; BLM instructional memorandums; BLM lease and bond abstracts; and correspondence between BLM and the Forest Service regarding lease stipulations.
We interviewed officials with the Bureau of Indian Affairs (BIA), BLM, the Fish and Wildlife Service (FWS), and the Office of Natural Resources Revenue within Interior; the Forest Service; the Environmental Protection Agency (EPA); the Army Corps of Engineers; and the Idaho Department of Environmental Quality (IDEQ). We also interviewed representatives of the three Idaho phosphate mine operators and visited the three phosphate mines operating as of June 2011, as well as 12 of the 16 mines where selenium contamination has been detected. To obtain additional perspectives beyond those offered by agency officials and mine operators, we also interviewed representatives from the Shoshone-Bannock Tribes, on whose reservation one of the largest phosphate mines is located, and from regionally focused environmental advocacy groups, including the Idaho Conservation League and the Greater Yellowstone Coalition.

To address the second objective, we interviewed officials from BIA, BLM, FWS, the Forest Service, EPA, IDEQ, the mine operators, and the Shoshone-Bannock Tribes, and reviewed documents and reports on the status of assessment and cleanup efforts and related settlement agreements under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). To determine the amount federal and state agencies have spent on these actions and the amount each agency received in reimbursement from mine operators, we obtained expenditure and collections data from each agency, where available, from fiscal year 2001 through fiscal year 2011.

Specifically, for EPA, we collected data from EPA's Superfund Cost Recovery Package Imaging and On-Line System, which included EPA's expenditures at each mine, as well as information on funds received from mine operators. For BLM, we received data from BLM's Management Information System for fiscal years 2001–2008 and from Interior's Financial and Business Management System for fiscal years 2009–2011. Because BLM's data from these systems applied to cleanup work at Idaho phosphate mines generally, we also collected mine-specific expenditure information where available, including from the cost documentation packages that BLM submitted to the mine operators for six mines as part of settlement agreements to which BLM was a party. We received information on funds BLM received from mine operators from Interior's Federal Financial System. For the Forest Service, we received data from the Forest Service's Foundation Financial Information System, which included the Forest Service's expenditures at each mine and funds received from mine operators. For FWS, we collected data from cost documentation packages submitted to the mine operators for five mines and the areawide investigation where FWS was a party to a settlement agreement, and from the Federal Financial System for additional expenditures as well as funds received from mine operators. For BIA, officials estimated their annual expenditures based on records kept internally showing hours worked on cleanup at phosphate mines in Idaho. For IDEQ, we received data from the department's General Online Reporting System, which included IDEQ's costs as well as funds received from mine operators.

To evaluate the reliability of these data and determine their limitations, we reviewed the data obtained from each agency's system as well as the cost documentation packages generated by the agencies and sent to mine operators.
For each of these data sources, we analyzed related documentation, examined the data to identify obvious errors or inconsistencies, and compared the data we received with other published data sources, where possible. We also interviewed officials from each agency to obtain information on the internal controls of their data systems. On the basis of our evaluation of these sources, we concluded that the expenditure data we collected and analyzed were sufficiently reliable for our purposes.

For all agencies, at least some of the reported expenditures included expenses paid to cover indirect costs associated with work performed by the agencies, which is in accordance with the terms of many of the settlement agreements. However, these indirect costs were not included in all of the expenditure data shared with us. Therefore, in order to report similar types of expenditures across agencies, we applied agency-specific historic annual indirect cost rates to those expenditures where they were not already included. To determine costs in constant 2012 dollars, we adjusted the amounts reported to us for inflation by applying the fiscal year chain-weighted gross domestic product price index, with fiscal year 2012 as the base year (these adjustments are illustrated in the sketch later in this appendix).

To determine estimated remaining costs for future cleanup actions at the sites, we interviewed EPA, Forest Service, and BLM officials, and reviewed reports from phosphate mines where CERCLA removal actions have occurred or have been approved, as well as from mines with recently approved reclamation plans that include measures to prevent selenium contamination. EPA and Forest Service officials provided information regarding likely cost drivers for cleanup at phosphate mines, and, to provide context, EPA officials identified hardrock mines in the region with similar general characteristics where these cost drivers are expected to apply.

To address the third objective, we first reviewed BLM, Forest Service, and EPA regulations; BLM and Forest Service manuals; and BLM memorandums to obtain agency financial assurance standards and procedures. We then obtained financial assurance data from records maintained by Idaho-based officials with BLM, the Forest Service, and EPA, which included data on bonds held by Idaho state agencies for operations on federal land. We also interviewed officials from these agencies to obtain insights into agency financial assurance practices and the extent to which current financial assurances are sufficient to cover future cleanup actions. We evaluated the reliability of BLM financial assurance data by interviewing BLM officials and corroborating the data maintained by BLM officials in Idaho with data maintained in BLM's centralized database, known as LR2000. We evaluated the reliability of Forest Service and EPA data by interviewing agency officials, examining agency records, and cross-checking these data against the bond amounts listed in CERCLA settlement agreements. We determined that the financial assurance data from BLM, the Forest Service, and EPA were sufficiently reliable for the purpose of determining the types and amounts of financial assurances in place for phosphate mining operations in Idaho.
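The two cost adjustments described above (loading in indirect costs where they were excluded, then deflating to constant fiscal year 2012 dollars) reduce to simple arithmetic. The following is a minimal sketch of that arithmetic; the function name, the indirect cost rate, and the price index values are hypothetical placeholders, not the actual rates or index values used for this report.

```python
# Minimal sketch of the two cost adjustments described in this appendix.
# All numbers here are hypothetical placeholders, not actual report data.

# Hypothetical fiscal year chain-weighted GDP price index, rebased to FY2012 = 1.0
GDP_PRICE_INDEX = {2009: 0.948, 2010: 0.959, 2011: 0.979, 2012: 1.000}

def constant_2012_dollars(amount, fiscal_year, indirect_rate=0.0):
    """Load an expenditure with an indirect cost rate (where indirect costs
    were not already included) and convert it to constant FY2012 dollars."""
    loaded = amount * (1 + indirect_rate)                  # add indirect costs
    deflator = GDP_PRICE_INDEX[2012] / GDP_PRICE_INDEX[fiscal_year]
    return loaded * deflator                               # express in FY2012 dollars

# Example: $1.2 million reported in FY2009 with a hypothetical 15 percent
# indirect cost rate works out to roughly $1.46 million in constant 2012 dollars.
print(f"${constant_2012_dollars(1_200_000, 2009, indirect_rate=0.15):,.0f}")
```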
We conducted this performance audit from May 2011 through April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix provides information on the acres disturbed, CERCLA lead agency, and surface land ownership at 18 phosphate mines in southeastern Idaho (table 3), and the agency assessment and cleanup expenditures at the 16 mines with selenium contamination (table 4).

In addition to the contact named above, Steve Gaty (Assistant Director), Andrea Wamstad Brown, Casey L. Brown, Antoinette Capaccio, Leslie K. Pollock, Rebecca Shea, Carol Herrnstadt Shulman, and Rajneesh Verma made key contributions to this report.
For over 100 years in the United States, phosphate has been mined on federal land primarily for use in fertilizer and herbicides. The Department of the Interior's Bureau of Land Management (BLM) is responsible for leasing and overseeing such mines on federal land. In 1996, selenium contamination from phosphate mines was discovered in Idaho, threatening the health of livestock and wildlife. Mines in the area are now being assessed for cleanup under the Environmental Protection Agency's (EPA) Superfund program. Agencies may require mine operators to post financial assurances, which are usually in the form of a bond, to ensure they meet their leasing and cleanup obligations.

GAO was asked to determine the (1) extent to which federal oversight for phosphate operations has changed since 1996; (2) actions federal agencies and mine operators have taken to address contamination, amounts spent to date, and estimated remaining costs; and (3) types and amounts of financial assurances in place for phosphate-mining operations. GAO reviewed agency data and documents, and interviewed key agency and mine operator officials.

Since 1996, federal agencies have taken several actions to strengthen their oversight of phosphate mining on federal land. For example, BLM now conducts more detailed environmental analysis when evaluating new mine plans; requires phosphate mine operators to provide more comprehensive plans for reclaiming mine sites (restoring the land to a stable condition that can support other uses); and requires the mine operators to provide financial assurances that are based on the full estimated cost of reclaiming mines, in contrast to BLM's previous practice of calculating financial assurances based simply on the acreage associated with mines. However, gaps remain in agency policies and coordination that could limit the agencies' efforts to address contamination from phosphate-mining operations. For example, BLM has not documented its new full-cost financial assurance practice in agency policy and therefore has limited assurance that it will be implemented consistently. BLM also has not fully coordinated with the Forest Service when establishing mine lease conditions and setting financial assurance amounts. Limited coordination is of particular concern because 16 phosphate leases in Idaho are scheduled for review and possible readjustment in the next 5 years, and once a lease is readjusted, its provisions are in effect for 20 years.

Over the last 16 years, federal agencies and mine operators have primarily focused on assessing the extent of selenium contamination in Idaho and have conducted only limited remediation actions. The agencies have conducted or overseen high-level assessments of contamination at 16 of the 18 mines where federal agencies are overseeing mining operations or cleanup activities, and at several of these mines the agencies and mine operators are now conducting more detailed assessments, known as remedial investigations and feasibility studies. However, no final cleanup actions have been chosen at any of the sites, and according to officials, most sites will require years of additional investigative work before final cleanup actions are selected. Federal agencies reported that they have spent about $19 million since 2001 to oversee these assessments and undertake a limited number of remediation actions, roughly half of which has been reimbursed by the mine operators under cleanup settlement agreements.
Mine operators told GAO that they too have spent millions of dollars in additional assessment and remediation work but did not provide documentary evidence to support these claims. Agency officials told GAO that they have not developed estimates for the remaining cleanup costs because final cleanup remedies have not yet been identified. However, their informal estimates suggest that remaining cleanup costs may total hundreds of millions of dollars for the contamination from mining in Idaho.

Federal agencies reported holding about $80 million in financial assurances for reclaiming phosphate mines in Idaho. Most of this amount—over $66 million—is associated with the two most recently approved phosphate mines. Agencies reported holding an additional $11.5 million in financial assurances to cover site assessment and limited cleanup activities under EPA's Superfund program, but some of these are in the form of corporate guarantees, which the agencies have determined are riskier than other types of financial assurances. No financial assurances have been established to cover future cleanup costs because remaining cleanup actions have not yet been identified, according to agency officials.

Among other things, GAO recommends that BLM document its financial assurance practice in policy and consult with the Forest Service to better protect the federal government from cleanup costs. In commenting on a draft of this report, Interior, the Forest Service, and EPA generally agreed with GAO's findings and recommendations.
Overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. Reducing or eliminating duplication, overlap, or fragmentation could potentially save billions of tax dollars annually and help agencies provide more efficient and effective services. These actions, however, will require some difficult decisions and sustained attention by the administration and Congress. Many of the issues we identified concern activities that are contained within single departments or agencies. In those cases, agency officials can generally achieve cost savings or other benefits by implementing existing GAO recommendations or by undertaking new actions suggested in our March report. However, a number of issues we have identified span multiple organizations and therefore may require higher-level attention by the executive branch, enhanced congressional oversight, or legislative action. A few examples from our March report follow.

Teacher quality programs: In fiscal year 2009, the federal government spent over $4 billion specifically to improve the quality of our nation's 3 million teachers through numerous programs across the government. Federal efforts to improve teacher quality have led to the creation and expansion of a variety of programs across the federal government; however, there is no governmentwide strategy to minimize fragmentation, overlap, or duplication among these many programs. Specifically, we identified 82 distinct programs designed to help improve teacher quality, either as a primary purpose or as an allowable activity, administered across 10 federal agencies. The proliferation of programs has resulted in fragmentation that can frustrate agency efforts to administer programs in a comprehensive manner, limit the ability to determine which programs are most cost-effective, and ultimately increase program costs. Department of Education (Education) officials believe that federal programs have failed to make significant progress in helping states close achievement gaps between schools serving students from different socioeconomic backgrounds, in part because federal programs that focus on teaching and learning of specific subjects are too fragmented to help state and district officials strengthen instruction and increase student achievement in a comprehensive manner. Education has established working groups to help develop more effective collaboration across Education offices and has reached out to other agencies to develop a framework for sharing information on some teacher quality activities, but it has noted that coordination efforts do not always prove useful and cannot fully eliminate barriers to program alignment. Congress could help eliminate some of these barriers through legislation, particularly through the pending reauthorization of the Elementary and Secondary Education Act of 1965 and other key education bills. Specifically, to minimize any wasteful fragmentation and overlap among teacher quality programs, Congress may choose either to eliminate programs that are too small to evaluate cost-effectively or to combine programs serving similar target groups into a larger program. Education has proposed combining 38 programs into 11 programs in its reauthorization proposal, which could allow the agency to dedicate a higher portion of its administrative resources to monitoring programs for results and providing technical assistance.
Military health system: The Department of Defense's (DOD) Military Health System (MHS) costs have more than doubled, from $19 billion in fiscal year 2001 to $49 billion in fiscal year 2010, and are expected to increase to over $62 billion by 2015. The responsibilities and authorities for the MHS are distributed among several organizations within DOD, with no central command authority or single entity accountable for minimizing costs and achieving efficiencies. Under the MHS's current command structure, the Office of the Assistant Secretary of Defense for Health Affairs, the Army, the Navy, and the Air Force each has its own headquarters and associated support functions. DOD has taken limited actions to date to consolidate certain common administrative, management, and clinical functions within its MHS. To reduce duplication in its command structure and eliminate redundant processes that add to growing defense health care costs, DOD could take action to further assess alternatives for restructuring the governance structure of the military health system. A May 2006 report by the Center for Naval Analyses showed that DOD could have achieved significant savings if DOD and the services had chosen in 2006 to implement one of the reorganization alternatives studied by a DOD working group. Our adjustment of those savings from 2005 into 2010 dollars indicates those savings could range from $281 million to $460 million annually, depending on the alternative chosen and the numbers of military, civilian, and contractor positions eliminated. The Under Secretary of Defense for Personnel and Readiness has recently established a new position to oversee DOD's military health care reform efforts.

Employment and training programs: In fiscal year 2009, 47 federally funded employment and training programs spent about $18 billion to provide services, such as job search and job counseling, to program participants. Most of these programs are administered by the Departments of Labor, Education, and Health and Human Services (HHS). Forty-four of the 47 programs we identified, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. As we reported in January 2011, nearly all 47 programs track multiple outcome measures, but only 5 programs have had an impact study completed since 2004 to assess whether outcomes resulted from the program and not some other cause. We examined potential duplication among three selected large programs—HHS's Temporary Assistance for Needy Families (TANF) and the Department of Labor's Employment Service and Workforce Investment Act (WIA) Adult programs—and found they provide some of the same services to the same population through separate administrative structures. Colocating services and consolidating administrative structures may increase efficiencies and reduce costs, but implementation can be challenging. Some states have colocated TANF employment and training services in one-stop centers where Employment Service and WIA Adult services are provided. An obstacle to further progress in achieving greater administrative efficiencies is that little information is available about the strategies and results of such initiatives. In addition, little is known about the incentives that states and localities have to undertake such initiatives and whether additional incentives are needed.
To facilitate further progress by states and localities in increasing administrative efficiencies in employment and training programs, we recommended in 2011 that the Secretaries of Labor and HHS work together to develop and disseminate information that could inform such efforts. As part of this effort, Labor and HHS should examine the incentives for states and localities to undertake such initiatives and, as warranted, identify options for increasing such incentives. Labor and HHS agreed they should develop and disseminate this information. HHS noted that it does not have the legal authority to mandate increased TANF-WIA coordination or create incentives for such efforts. As part of its proposed changes to the Workforce Investment Act, the Administration proposes consolidating nine programs into three. In addition, the budget proposal would transfer the Senior Community Service Employment Program from Labor to HHS. Sustained oversight by Congress could also help ensure progress is realized.

Surface transportation: The Department of Transportation (DOT) currently administers scores of surface transportation programs costing over $58 billion annually. The current federal approach to surface transportation was established in 1956 to build the Interstate Highway System, but it has not evolved to reflect current national priorities and concerns. Over the years, in response to changing transportation, environmental, and societal goals, federal surface transportation programs grew in number and complexity to encompass broader goals, more programs, and a variety of program approaches and grant structures. This variety of approaches and structures did not result from a specific rationale or plan, but rather from an agglomeration of policies and programs established over half a century without a well-defined overall vision of the national interest and federal role in our surface transportation system. The result is a fragmented approach in which five DOT agencies with 6,000 employees administer over 100 separate surface transportation programs with separate funding streams for highways, transit, rail, and safety functions. This fragmented approach impedes effective decision making and limits the ability of decision makers to devise comprehensive solutions to complex challenges. A fundamental re-examination and reform of the nation's surface transportation policies is needed. Since 2004, we have made several recommendations and matters for congressional consideration to address the need for a more goal-oriented approach to surface transportation, introduce greater performance and accountability for results, and break down modal stovepipes. The President's fiscal year 2012 budget proposes to consolidate 55 highway programs into 5 core programs. Congressional reauthorization of surface transportation programs presents an opportunity to address our recommendations and matters for congressional consideration that have not been implemented, in large part because the current multiyear authorization for surface transportation programs expired in 2009 and existing programs have been funded since then through temporary extensions.

DOD-VA electronic health record systems: Although they have identified many common health care business needs, DOD and the Department of Veterans Affairs (VA) have spent large sums of money to develop and operate separate electronic health record systems that each department relies on to create and manage patient health information.
Moreover, a 2008 study conducted for the departments found that over 97 percent of functional requirements for an inpatient electronic health record system are common to both departments. Nevertheless, the departments have each begun multimillion-dollar modernizations of their electronic health record systems. Specifically, DOD has obligated approximately $2 billion over the 13-year life of its Armed Forces Health Longitudinal Technology Application and requested $302 million in fiscal year 2011 funds for a new system. For its part, VA reported spending almost $600 million from 2001 to 2007 on eight projects as part of its Veterans Health Information Systems and Technology Architecture modernization. In April 2008, VA estimated an $11 billion total cost to complete the modernization by 2018. Reduced duplication in this area could save system development and operation costs while supporting higher-quality health care for service members and veterans. The departments' distinct modernization efforts are due in part to barriers they face to jointly addressing their common health care system needs. These barriers stem from weaknesses in key IT management areas such as strategic planning and investment management. Our recent work identified several actions that the Secretaries of Defense and Veterans Affairs could take to overcome these barriers, including revising the departments' joint strategic plan, further developing the departments' joint health architecture, and defining and implementing a process for identifying and selecting joint IT investments to meet the departments' common health care business needs. In March 2011, the Secretaries committed their respective departments to pursue joint development and acquisition of integrated electronic health record capabilities, including defining an architecture to guide the departments' efforts. Further, in testimony before the Senate Veterans Affairs Committee on May 18, 2011, the departments' Deputy Secretaries reaffirmed DOD's and VA's commitment to addressing the weaknesses we have noted in our work with regard to achieving these joint capabilities.

We found that duplication and overlap occur for a variety of reasons. First, programs have been added incrementally over time to respond to new needs and challenges, without a strategy to minimize duplication, overlap, and fragmentation among them. Also, agencies often lack information on the effectiveness of programs; such information could help decision makers prioritize resources among programs. Lastly, there are not always interagency mechanisms or strategies in place to coordinate programs that address crosscutting issues, which can lead to potentially duplicative, overlapping, and fragmented efforts. The recently enacted GPRA Modernization Act of 2010, which updates the almost two-decades-old Government Performance and Results Act, may help address some of these issues. The act establishes a new framework aimed at taking a more crosscutting and integrated approach to focusing on results and improving government performance. It requires the Office of Management and Budget (OMB), in coordination with agencies, to develop—every 4 years—long-term, outcome-oriented goals for a limited number of crosscutting policy areas. As a result, the act could also help inform reexamination or restructuring efforts and lead to more efficient and economical service delivery in overlapping program areas.
The crosscutting planning and reporting requirements in the act could lead to the development of performance information in areas that are currently incomplete.

The federal government's expenditures on IT could be reduced by, among other things, consolidating federal data centers, improving investment management and oversight, and using enterprise architectures as a tool for organizational transformation. Each year the federal government spends billions of dollars on IT investments; federal spending on IT has risen to an estimated $79 billion for fiscal year 2011. In recent years, as federal agencies modernized their operations, put more of their services online, and increased their information security profiles, they have demanded more computing power and data storage resources. While it may meet individual agency needs, this growth has raised concerns about duplicative investments and underutilized computing resources across the government.

Over time, the federal government's increasing demand for more IT has led to a dramatic rise in the number of federal data centers. According to OMB, the number of federal data centers grew from 432 in 1998 to more than 2,000 in July 2010. These data centers often house similar types of equipment and provide similar processing and storage capabilities. These factors have led to concerns about the costs associated with the provision of redundant capabilities, the underutilization of resources, and the significant consumption of energy. In 2010, the Federal Chief Information Officer (CIO) reported that operating and maintaining redundant infrastructure investments was costly, inefficient, and unsustainable, and had a significant impact on energy consumption. While the total annual federal spending associated with these data centers has not been determined, the Federal CIO has found that operating data centers is a significant cost to the federal government, including costs for hardware, software, real estate, and cooling. For example, according to the Environmental Protection Agency, the electricity cost to operate federal servers and data centers across the government is about $450 million annually. According to the Department of Energy, data center spaces can consume 100 to 200 times as much electricity as standard office spaces.

In February 2010, OMB and the Federal CIO announced the Federal Data Center Consolidation Initiative, and OMB outlined four high-level goals:

- Promote the use of Green IT by reducing the overall energy and real estate footprint of government data centers.
- Reduce the cost of data center hardware, software, and operations.
- Increase the overall IT security posture of the government.
- Shift IT investments to more efficient computing platforms and technologies.

As part of this initiative, OMB directed federal agencies to prepare an inventory of their data center assets and a plan for consolidating these assets by August 30, 2010, and to begin implementing the plans in fiscal year 2011. In October 2010, OMB reported that all of the agencies had submitted their plans. OMB plans to monitor agencies' progress through annual reports and has established a goal of closing 800 of the data centers by 2015. More recently, in April 2011, OMB announced plans to close 137 data centers by the end of this year. At your request, we are currently reviewing the Federal Data Center Consolidation Initiative as well as federal agencies' efforts to develop and implement consolidation plans.
In our draft report, which is currently with agencies for comment, we discuss our preliminary observations based on our review of 24 agencies' consolidation plans. As part of their individual consolidation plans, each federal department and agency was expected to estimate cost savings over time. In their plans, 14 agencies reported expected savings totaling about $700 million between fiscal years 2011 and 2015; however, actual savings may be even higher because most of these agencies' estimates were incomplete. For example, 11 agencies included expected energy savings and reductions in building operating costs but did not include savings from other sources, such as equipment reductions. Four other agencies did not expect to accrue any net savings by 2015, and six agencies did not provide estimated cost savings. Although some agencies reported that it was too soon to fully estimate cost savings because they are just beginning to plan for consolidation, and other agencies noted that near-term savings were offset by consolidation costs, the opportunity for long-term savings is significant. In October 2010, a council of chief executive officers representing technology industry companies estimated that the federal government could save $150 billion to $200 billion over the next decade, primarily through data center and server consolidation.

In our draft report, we found that despite OMB's requirements for what agencies should include in their asset inventories and consolidation plans, only one of the agencies submitted a complete asset inventory and none of the agencies submitted complete plans. For example, in their asset inventories, 14 agencies do not provide a complete listing of their data centers and 15 do not list all of their software assets. Similarly, in their consolidation plans, 13 agencies do not provide specific performance metrics and 12 do not address cost-benefit calculations. Until these inventories and plans are complete, agencies may not be able to fully implement their consolidation activities and realize expected savings. Further, we found that agencies identified multiple challenges during data center consolidation, including challenges that are cultural, funding-related, operational, and technical in nature. For example, agencies face challenges in overcoming cultural resistance to such major organizational changes, providing upfront funding for the consolidation effort before any cost savings accrue, maintaining current operations during the transition to consolidated operations, and establishing and implementing shared standards (for storage, systems, security, etc.). Mitigating these and other challenges will require commitment from the agencies and continued oversight by OMB and the Federal CIO.

To help ensure that the federal data center consolidation initiative improves governmental efficiency and achieves cost savings, we are making recommendations to OMB and to the heads of the participating agencies. Specifically, we are recommending that agencies complete the missing elements in their plans and that OMB monitor the agencies' completion and implementation of those plans to ensure that promised efficiencies and savings are realized. We also recommend that agencies consider these challenges when updating their plans.
Given the importance of transparency, oversight, and management of the government's IT investments, in June 2009 OMB established a public Web site, referred to as the IT Dashboard, that provides detailed information on about 800 investments at 27 federal agencies, including ratings of their performance against cost and schedule targets. The public dissemination of this information is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold agencies accountable for results and performance. Since our March report, we completed additional work and reported that by establishing the IT Dashboard, OMB has drawn additional attention to more than 300 troubled IT investments at federal agencies, totaling $20 billion, which is an improvement over the previously used oversight mechanisms. The Federal CIO recognized that the Dashboard has increased the accountability of agency CIOs and established much-needed visibility into investment performance.

In a series of IT Dashboard reviews completed in July 2010 and March 2011, we reported that OMB's Dashboard had increased transparency and oversight but that improvements were needed for the Dashboard to more fully realize its potential as a management and cost-savings tool. Specifically, in reviews of selected investments from 10 agencies, we found that the Dashboard ratings were not always consistent with agency cost and schedule performance data. For example, the Dashboard rating for a Department of Homeland Security investment reported significant cost variances for 3 months in 2010; however, our analysis showed lesser variances for the same months. In another case, a Department of Justice investment on the Dashboard reported that it had been less than 30 days behind schedule from July 2009 through January 2010. Investment data that we examined, however, showed that the investment was behind schedule by 30 days to almost 90 days from September to December 2009. A primary reason for the data inaccuracies in the Dashboard's ratings was that while the Dashboard was intended to represent near real-time performance information, the cost and schedule ratings did not take into consideration current performance.

In these reports, we made a number of recommendations to OMB and federal agencies to improve the accuracy of Dashboard ratings. The agencies agreed with these recommendations, while OMB agreed with all but one. Specifically, OMB disagreed with the recommendation to change how it reflects current investment performance in its ratings because Dashboard data are updated on a monthly basis. However, we maintained that current investment performance may not always be as apparent as it should be; while data are updated monthly, the ratings include historical data, which can mask more recent performance (a simplified sketch of this effect follows this discussion).

OMB officials indicated they had relied on the Dashboard as a management tool, including using investment trend data to identify and address performance issues and to select investments for a TechStat session—a review of selected IT investments between OMB and agency leadership that is led by the Federal CIO. According to OMB, as of December 2010, 58 TechStat sessions had been held with federal agencies. Additionally, OMB officials stated that as a result of these sessions, 11 investments have been reduced in scope and 4 have been cancelled. According to the Federal CIO, use of the Dashboard as a management and oversight tool has already resulted in a $3 billion budget reduction.
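As a simple illustration of the masking effect noted above, compare a rating basis computed over an investment's full cost-variance history with one computed over only the most recent months. The monthly variance figures and the averaging approach below are hypothetical illustrations; they are not the Dashboard's actual data or rating formula.

```python
# Illustrative sketch (hypothetical data, not actual Dashboard figures or
# methodology): a rating computed over an investment's full cost-variance
# history can mask a recent deterioration that a recent-months view reveals.

# Hypothetical monthly cost variances (percent over target) for one investment;
# performance slips badly in the last 3 months.
monthly_cost_variance = [1, 0, 2, 1, 1, 0, 1, 2, 1, 12, 15, 18]

def average(values):
    return sum(values) / len(values)

historical_basis = average(monthly_cost_variance)      # all 12 months
recent_basis = average(monthly_cost_variance[-3:])     # last 3 months only

print(f"Variance averaged over full history: {historical_basis:.1f}%")   # 4.5%
print(f"Variance averaged over last 3 months: {recent_basis:.1f}%")      # 15.0%
# The full-history figure looks acceptable even though the investment is
# currently troubled, which is the masking concern described above.
```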
OMB’s planned improvements to the Dashboard, along with full implementation of our recommendations and the possible identification of duplicative investments have the potential to result in further significant savings. Additional opportunities for potential cost savings exist with the use of the Dashboard by executive branch agencies to identify and make decisions about poorly performing investments, as well as its continued use by congressional committees to support critical oversight efforts. An enterprise architecture is a modernization blueprint that is used by organizations to describe their current state and a desired future state and to leverage IT to transform business and mission operations. Historically, federal agencies have struggled with operational environments characterized by a lack of integration among business operations and the IT resources that support them. A key to successfully leveraging IT for organizational transformation is having and using an enterprise architecture as an authoritative frame of reference against which to assess and decide how individual system investments are defined, designed, acquired, and developed. The development, implementation, and maintenance of architectures are widely recognized as hallmarks of successful public and private organizations, and their use is required by the Clinger-Cohen Act of 1996 and OMB. Our experience has shown that attempting to modernize (and maintain) IT environments without an architecture to guide and constrain investments results in organizational operations and supporting technology infrastructures and systems that are duplicative, poorly integrated, unnecessarily costly to maintain and interface, and unable to respond quickly to shifting environmental factors. For example, we have conducted reviews of enterprise architecture management at federal agencies, such as the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI), as well as reviews of critical agency functional areas, such as DOD financial management, logistics management, combat identification, and business systems modernization. In addition, as discussed earlier, we have reviewed the DOD and VA’s joint health architecture efforts, which are intended to guide identification and development of common health IT solutions. These reviews have continued to identify the absence of complete and enforced enterprise architectures, which in turn has led to agency business operations, systems, and data that are duplicative, incompatible, and not integrated. These conditions have either prevented agencies from sharing data or forced them to depend on expensive, custom-developed system interfaces to do so. For example, we previously reported that IT had been a long-standing problem for the FBI, with nonintegrated applications, residing on different servers, each of which had its own unique databases and did not share information with other applications or with other government agencies. As a result, these deficiencies served to significantly hamper the FBI’s ability to share important and time-sensitive information internally and externally with other intelligence and law enforcement agencies. In 2006, we reported that the state of enterprise architecture development and implementation varied considerably across departments and agencies, with some having more mature architecture programs than others. 
However, overall, most departments and agencies were not where they needed to be, particularly with regard to their approaches for assessing each investment's alignment with the enterprise architecture and measuring and reporting on enterprise architecture results and outcomes. In our prior work, most departments and agencies reported that they expect to realize benefits from their respective enterprise architecture programs sometime in the future, such as improved alignment between their business operations and the IT that supports these operations, and consolidation of their IT infrastructure environments, which can reduce the costs of operating and maintaining duplicative capabilities. This suggests that the real value to the federal government from developing and using enterprise architectures remains largely unrealized.

Our recently issued seven-stage enterprise architecture management maturity framework recognizes that a key to realizing this potential is effectively managing department and agency enterprise architecture programs. However, knowing whether benefits and results are in fact being achieved requires having associated measures and metrics. In this regard, it is important for agencies to satisfy the core element of the framework: that enterprise architecture results and outcomes are measured and reported. Examples of results and outcomes to be measured include costs avoided through eliminating duplicative investments or by reusing common services and applications, and improved mission performance through re-engineered business processes and modernized supporting systems. Our work has shown that over 50 percent of the departments and agencies assessed had yet to fully satisfy this element. On the other hand, some have reported that they are addressing this element and have realized significant financial benefits. For example, in 2006 we reported that the Department of the Interior had addressed all but one of the elements in our enterprise architecture management maturity framework, which meant that it was well-positioned to realize the significant benefits that a well-managed architecture program can provide. It has since demonstrated that it is using its enterprise architecture to modernize agency IT operations and avoid costs through enterprise software license agreements and hardware procurement consolidation. These architecture-based decisions have resulted in reported financial benefits of at least $80 million. When enterprise architecture results and outcomes are measured and reported in this way, departments and agencies can demonstrate achievement of expected benefits, including costs avoided through eliminating duplicative investments. We have work under way to determine the extent to which federal departments and agencies are realizing value from their use of enterprise architectures.

Notwithstanding these challenges, we have also reported on departments that have demonstrated improvements to their enterprise architecture programs. In 2009, we reported that, to DHS's credit, recent versions of its enterprise architecture largely addressed our prior recommendations aimed at adding needed architectural depth and breadth. For example, in response to our prior recommendation that the architecture include a Technical Reference Model that describes, among other things, the technical standards to be implemented for each enterprise service, the 2008 version of the enterprise architecture included a Technical Reference Model that identified such standards.
The department also adopted an approach for extending the architecture through segments, a "divide and conquer" approach to architecture development advocated by OMB. However, we also concluded that while recent versions largely addressed our prior recommendations, important content, such as prioritized segments and information exchanges between critical business processes, was still missing.

In addition, in response to our recommendations, DOD adopted a federated approach to developing and using its business enterprise architecture, which is a coherent family of parent and subsidiary architectures, to help modernize its nonintegrated and duplicative business operations and the systems that support them. According to DOD, the federated business enterprise architecture is expected to identify and provide for sharing common applications and systems across the department and its components and to promote interoperability and data sharing among related programs. For example, the architecture now focuses on improving the department's ability to manage business operations from an end-to-end perspective. In this regard, it depicts 15 end-to-end business processes, such as hire-to-retire and procure-to-pay. In addition, it identifies the corporate architectural policies, capabilities, rules, and standards that apply DOD-wide. While this is important progress, DOD has yet to define these end-to-end processes at a lower level so that any redundant or duplicative system functions can be identified and avoided.

To advance the state of enterprise architecture development and use in the federal government, senior leadership in the departments and agencies needs to demonstrate commitment to this organizational transformation tool and ensure that the kind of management controls embodied in our framework are in place and functioning. Collectively, the majority of the departments' and agencies' architecture efforts can still be viewed as a work in progress, with much remaining to be accomplished before the federal government as a whole fully realizes their transformational value. Moving beyond this status will require most departments and agencies to overcome significant obstacles and challenges, such as organizational parochialism and cultural resistance, inadequate funding, and the lack of top management understanding and skilled staff. One key to doing so continues to be sustained organizational leadership. As our work has demonstrated, without such organizational leadership, the benefits of enterprise architecture will not be fully realized. OMB can play a critical role by continuing to oversee the development and use of enterprise architecture efforts, including measuring and reporting enterprise architecture results and outcomes across the federal government.

The federal government spent about $535 billion in fiscal year 2010 acquiring the goods and services agencies need to carry out their missions. Our March report highlighted four areas where improvements could be made to realize significant savings: (1) minimizing unnecessary duplication among interagency contracts, (2) achieving more competition in the award of contracts, (3) using award fees more appropriately to promote improved contractor performance, and (4) leveraging the government's vast buying power through expanded use of strategic sourcing.

Interagency contracting is a process by which one agency either uses another agency's contract directly or obtains contracting support services from another agency.
In recent years, interagency and agencywide contracting has accounted for more than $50 billion in procurement spending annually. Agencies have created numerous interagency and agencywide contracts using existing statutes, the Federal Acquisition Regulation, and agency-specific policies. With the proliferation of these contracts, however, there is a risk of unintended duplication and inefficiency. Billions of taxpayer dollars flow through interagency and agencywide contracts, but the federal government does not have a clear, comprehensive view of which agencies use these contracts and whether they are being used in an efficient and effective manner. Without this information, agencies may be unaware of existing contract options that could meet their needs and may be awarding new contracts when use of an existing contract would suffice. The government, therefore, might be missing opportunities to better leverage its vast buying power.

Government contracting officials and representatives of vendors have expressed concerns about potential duplication among the interagency and agencywide contracts across government, which they said can result in increased procurement costs, redundant buying capacity, and an increased workload for the acquisition workforce. Some vendors stated that they offer similar products and services on multiple contracts and that the effort required to be on multiple contracts results in extra costs to the vendor, which they pass on to the government through increased prices. Some vendors stated that the additional cost of being on multiple contracts ranged from $10,000 to $1 million per contract because of increased bid and proposal and administrative costs.

We identified two overriding factors that hamper the government's ability to realize the strategic value of using interagency and agencywide contracts: (1) the absence of consistent governmentwide policy on the creation, use, and costs of awarding and administering some contracts and (2) long-standing problems with the quality of information on interagency and agencywide contracts in the federal procurement data system. In April 2010, we recommended that OMB, which has governmentwide procurement policy responsibilities, establish a policy framework for creating some types of interagency contracts and agencywide contracts, including a requirement to conduct a sound business case analysis. We also recommended that OMB take steps to improve the data on interagency contracts, including updating existing data on interagency and agencywide contracts, ensuring that departments and agencies accurately record these data, and assessing the feasibility of creating and maintaining a centralized database of interagency and agencywide contracts. OMB agreed with our recommendations.

In December 2010, the Federal Acquisition Regulation was amended to require that agencies prepare business cases for some multiagency contracts. This business case analysis requires that agencies evaluate the cost of awarding and managing the contract and compare this cost to the likely fees that would be incurred if the agency used an existing contract or sought out acquisition assistance. In addition, OMB is developing additional business case guidance that will require agencies to prepare business cases describing the expected need for any new multiagency or agencywide contract, the value added by its creation, and the agency's suitability to serve as an executive agent.
OMB also reports that it has a new effort under way to improve contract information in the Federal Procurement Data System-Next Generation, the federal government's current database of information on all federal contracts. OMB also is discussing options for creating a clearinghouse of existing interagency and agencywide contracts. Requiring business case analyses for new multiagency and agencywide contracts and ensuring that agencies have access to up-to-date and accurate data on the available contracts will promote the efficient use of interagency and agencywide contracting. Until such controls to address the issue of duplication are fully implemented, the government will continue to miss opportunities to take advantage of its buying power through more efficient and more strategic contracting.

Competition is a cornerstone of the federal acquisition system and a critical tool for achieving the best possible return on contract spending. Competitive contracts can save money, improve contractor performance, and promote accountability for results. Federal agencies generally are required to award contracts competitively, but a substantial amount of federal money is obligated on noncompetitive contracts annually. Federal agencies obligated approximately $170 billion on noncompetitive contracts in fiscal year 2009 alone. While there has been some fluctuation over the years, the percentage of obligations under noncompetitive contracts recently has been in the range of 31 percent to over 35 percent. Although some agency decisions to forgo competition may be justified, we found that when federal agencies decide to open their contracts to competition, they frequently realize savings. For example, we found in 2006 that the Army had awarded noncompetitive contracts for security guards but later spent 25 percent less for the same services when the contracts were competed.

Our work also shows that agencies do not always use a competitive process when establishing or using blanket purchase agreements under the General Services Administration's schedules program. Agencies have frequently entered into blanket purchase agreements with just one vendor, even though multiple vendors could satisfy agency needs. And even when agencies entered into blanket purchase agreements with multiple vendors, we found that agencies have not always held subsequent competitions among those vendors for orders under the blanket purchase agreements, even though such competitions at the ordering level are required.

OMB has provided guidance for agencies to promote competition in contracting and improve the effectiveness of their competition practices. In July 2009, OMB called for agencies to reduce by 10 percent in fiscal year 2010 their obligations under new contract actions awarded using high-risk contracting authorities. These high-risk contracts include, among others, those that are awarded noncompetitively and those that are structured as competitive but for which only one offer is received. We are currently reviewing the agencies' savings plans to identify steps taken toward that goal. By more consistently promoting competition in contracts, federal agencies would have greater opportunities to take advantage of the effectiveness of the marketplace and potentially achieve billions of dollars in cost savings.

Several major agencies spent over $300 billion from fiscal year 2004 through fiscal year 2008 on contracts that included monetary incentives known as award fees.
The purpose of these incentives is to motivate enhanced contractor performance. In 2005, however, we found that DOD paid billions of dollars in award fees regardless of acquisition outcomes. In 2007, we found significant disconnects between program results and fees paid at the National Aeronautics and Space Administration. In 2009, we reported that five agencies had paid more than $6 billion in award fees, but were not consistently following award fee guidance and did not have methods for evaluating the effectiveness of an award fee as a tool for improving contractor performance. We identified three primary issues related to the use of award fees that, if addressed, could improve the use of these incentives and produce savings. Specifically, (1) award fees are not always linked to acquisition outcomes; (2) award fee payments are made despite unsatisfactory contract performance; and (3) contractors have been permitted to earn previously unearned award fees in subsequent evaluation periods, a practice known as “rollover,” which gives contractors additional opportunities to earn fees they failed to earn in earlier periods. Although OMB guidance has required such linkage since 2007, we reported in 2009 that award fees were not always linked to acquisition outcomes. When efforts are made to link them, however, savings can be achieved. For example, the Joint Strike Fighter program created metrics for areas such as software performance, warfighter capability, and cost control that were previously assessed using less-defined criteria. By using metrics to assess performance, the Joint Strike Fighter program paid an estimated $29 million less in fees in the 2 years after the policy changed than it might have under the former criteria. OMB’s 2007 guidance also directed agencies to ensure that no award fee is paid for performance that does not meet contract requirements or is judged to be unsatisfactory. Nevertheless, we reported in 2009 that programs across the agencies reviewed used evaluation tools that could allow contractors to earn award fees without performing at a level that is acceptable to the government under the terms of the contract. For example, a Department of Energy research contract allowed the contractor to earn up to 84 percent of the award fee for performance that was defined as not meeting expectations. In addition, we found two HHS contracts, including a contract for Medicare claims processing, in which it was possible for the contractor to receive at least 49 percent of the award fee for unsatisfactory performance. By contrast, some programs within DOD have prohibited award fee payments for unsatisfactory performance. For example, we found that the Air Force saved $10 million on a contract for a satellite program by not paying an award fee to a contractor with unsatisfactory performance. Since 2006, DOD guidance on award fees has stated that the practice of rollover should be limited to exceptional circumstances to avoid compromising the integrity of the award fee process. We found, based on contracts reviewed in 2005, that DOD rolled over an average of 51 percent of the total unearned fees. For example, the contractor for the F-22 Raptor received over 90 percent of the award fee, including fees paid in subsequent evaluation periods, even though the program’s cost and schedule targets had to be revised 14 times. 
In 2009, we estimated that by limiting rollover, DOD would save over $450 million on eight programs from April 2006 through October 2010. Changes to the Federal Acquisition Regulation in 2010 prohibited both the rollover of unearned award fees and the payment of award fees to contractors that have performed unsatisfactorily. Some agencies are updating and disseminating guidance that could increase the pace and success rate of implementing these new regulations. Further, agencies such as DOD are increasing the likelihood that award fees will be better linked to acquisition outcomes by implementing key practices, such as a peer review process that examines the plan for administering award fees. However, sustained progress in the use of award fees will require that contracting agencies adhere to the recent changes to the Federal Acquisition Regulation. Enhanced oversight by OMB and Congress is warranted to ensure successful implementation. Since 2002, spending on federal contracts has more than doubled, reaching about $540 billion in 2009 and consuming a significant share of agencies’ discretionary budgets. Because procurement at federal departments and agencies generally is decentralized, the federal government is not fully leveraging its aggregate buying power to obtain the most advantageous terms and conditions for its procurements. In the private sector, by contrast, an approach called strategic sourcing has been used since the 1980s to reduce procurement costs at companies with large supplier bases and high procurement costs. We reported that to reduce costs, improve productivity, and more effectively procure products and services, many companies have adopted a strategic sourcing approach—centralizing and reorganizing their procurement operations to get the best value for the company as a whole. The federal government could do the same and realize significant savings as a result. Since 2005, OMB has encouraged agencies to coordinate their buys through Federal Strategic Sourcing Initiative (FSSI) interagency procurement vehicles awarded by the General Services Administration. In addition, some agencies have awarded agencywide (also referred to as enterprisewide) contracts under strategic sourcing programs within an individual federal department or agency. In July 2010, OMB’s congressional testimony on the status of improvements to federal acquisition cited examples of the progress being achieved under agency strategic sourcing efforts. Under the FSSI effort, for example, a team of agencies selected office products in late 2009 as a promising strategic sourcing opportunity to combine buying power for about $250 million in requirements. This office products initiative is expected to reduce costs at these agencies by as much as 20 percent, for a total savings of almost $200 million over the next 4 years. Further, an agencywide initiative at the Department of Homeland Security—which accounted for $14.3 billion in contract spending in 2009—is expected to save $87 million during the next 6 years through a standardized suite of discounted desktop operating systems, e-mail, and office automation products. These results demonstrate the potential to achieve significant savings through the use of strategic sourcing approaches. 
The starting point for such efforts, however, is having good data on current spending. In April 2010, we reported that OMB and agencies cannot be sure the government is fully leveraging its buying power because of the absence of comprehensive, reliable data to effectively manage and oversee an important segment of total procurement spending: interagency and agencywide contracts. Acquisition leaders across the government need to more fully embrace the strategic sourcing initiative, beginning with collecting, maintaining, and analyzing data on current procurement spending. Then, agencies have to conduct assessments of acquisition and supply chain functions to initiate enterprisewide transformations. In conclusion, Mr. Chairman, Ranking Member Collins, and Members of the Committee, careful, thoughtful actions will be needed to address many of the issues discussed in our March report, particularly those involving potential duplication, overlap, and fragmentation among federal programs and activities. These are difficult issues to address because they may require agencies and Congress to re-examine, within and across various mission areas, the fundamental structure, operation, funding, and performance of a number of long-standing federal programs or activities with entrenched constituencies. Continued oversight by OMB and Congress will be critical to ensuring that unnecessary duplication, overlap, and fragmentation are addressed. As the nation rises to meet the current fiscal challenges, we will continue to assist Congress and federal agencies in identifying actions needed to reduce duplication, overlap, and fragmentation; achieve cost savings; and enhance revenues. As part of current planning for our future annual reports, we are continuing to look at additional federal programs and activities to identify further instances of duplication, overlap, and fragmentation as well as other opportunities to reduce the cost of government operations and increase revenues to the government. We will use an approach designed to ensure governmentwide coverage by the time we issue our third report in fiscal year 2013. We plan to expand our work to more comprehensively examine areas where a mix of federal approaches is used, such as tax expenditures, direct spending, and federal loan programs. Likewise, we will continue to monitor developments in the areas we have already identified. Issues of duplication, overlap, and fragmentation will also be addressed in our routine audit work during the year as appropriate and summarized in our annual reports. Thank you, Mr. Chairman, Ranking Member Collins, and Members of the Committee. This concludes my prepared statement. I would be pleased to answer any questions you may have. For further information on this testimony or our March report, please contact Janet St. Laurent, Managing Director, Defense Capabilities and Management, who may be reached at (202) 512-4300, or StLaurentJ@gao.gov; and Katherine Siggerud, Managing Director, Physical Infrastructure, who may be reached at (202) 512-2834, or SiggerudK@gao.gov. Specific questions about information technology issues may be directed to Joel Willemssen, Managing Director, Information Technology, who may be reached at (202) 512-6253, or WillemssenJ@gao.gov. Questions about federal contracting may be directed to Paul Francis, Managing Director, Acquisition and Sourcing Management, who may be reached at (202) 512-4841, or FrancisP@gao.gov. 
Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our first annual report to Congress responding to the statutory requirement that GAO identify federal programs, agencies, offices, and initiatives--either within departments or governmentwide--that have duplicative goals or activities. This work can help inform government policymakers as they address the rapidly building fiscal pressures facing our national government. Our simulations of the federal government's fiscal outlook show continually increasing levels of debt that are unsustainable over time, absent changes in the federal government's current fiscal policies. Since the end of the recent recession, the gross domestic product has grown slowly, and unemployment has remained at a high level. While the economy is still recovering and in need of careful attention, widespread agreement exists on the need to look not only at the near term but also at steps that begin to change the long-term fiscal path as soon as possible. With the passage of time, the window to address the fiscal challenge narrows and the magnitude of the required changes grows. This testimony is based on our March 2011 report and provides an overview of federal programs or functional areas where unnecessary duplication, overlap, or fragmentation exists and where there are other opportunities for potential cost savings or enhanced revenues. In that report, we identified 81 areas for consideration--34 areas of potential duplication, overlap, or fragmentation and 47 additional areas describing other opportunities for agencies or Congress to consider taking action that could either reduce the cost of government operations or enhance revenue collections for the Treasury. The 81 areas span a range of federal government missions such as agriculture, defense, economic development, energy, general government, health, homeland security, international affairs, and social services. Within and across these missions, the report touches on hundreds of federal programs, affecting virtually all major federal departments and agencies. The testimony highlights (1) some examples from our March report; (2) needed improvements in the federal government's management and investment in information technology (IT); and (3) opportunities for achieving significant cost savings through improvements in government contracting. A few examples of duplication: (1) Teacher quality programs: In fiscal year 2009, the federal government spent over $4 billion specifically to improve the quality of our nation's 3 million teachers through numerous programs across the government. Federal efforts to improve teacher quality have led to the creation and expansion of a variety of programs across the federal government; however, there is no governmentwide strategy to minimize fragmentation, overlap, or duplication among these many programs. (2) Military health system: The Department of Defense's (DOD) Military Health System (MHS) costs have more than doubled from $19 billion in fiscal year 2001 to $49 billion in 2010 and are expected to increase to over $62 billion by 2015. The responsibilities and authorities for the MHS are distributed among several organizations within DOD with no central command authority or single entity accountable for minimizing costs and achieving efficiencies. Under the MHS's current command structure, the Office of the Assistant Secretary of Defense for Health Affairs, the Army, the Navy, and the Air Force each has its own headquarters and associated support functions. 
(3) Employment and training programs: In fiscal year 2009, 47 federally funded employment and training programs spent about $18 billion to provide services, such as job search and job counseling, to program participants. Most of these programs are administered by the Departments of Labor, Education, and Health and Human Services (HHS). Forty-four of the 47 programs we identified, including those with broader missions such as multipurpose block grants, overlap with at least one other program in that they provide at least one similar service to a similar population. (4) Surface transportation: The Department of Transportation (DOT) currently administers scores of surface transportation programs costing over $58 billion annually. The current federal approach to surface transportation was established in 1956 to build the Interstate Highway System, but has not evolved to reflect current national priorities and concerns. Over the years, in response to changing transportation, environmental, and societal goals, federal surface transportation programs grew in number and complexity to encompass broader goals, more programs, and a variety of program approaches and grant structures. (5) DOD-VA electronic health record systems: Although they have identified many common health care business needs, DOD and the Department of Veterans Affairs (VA) have spent large sums of money to develop and operate separate electronic health record systems that each department relies on to create and manage patient health information. The federal government's expenditures on IT could be reduced by, among other things, consolidating federal data centers, improving investment management and oversight, and using enterprise architectures as a tool for organizational transformation. Each year the federal government spends billions of dollars on IT investments; federal spending on IT has risen to an estimated $79 billion for fiscal year 2011. In recent years, as federal agencies have modernized their operations, put more of their services online, and increased their information security profiles, they have demanded more computing power and data storage resources. The federal government spent about $535 billion in fiscal year 2010 acquiring the goods and services agencies need to carry out their missions. There are four areas where contracting improvements could yield significant savings: (1) minimizing unnecessary duplication among interagency contracts, (2) achieving more competition in the award of contracts, (3) using award fees more appropriately to promote improved contractor performance, and (4) leveraging the government's vast buying power through expanded use of strategic sourcing.
Our investigator was easily able to obtain four genuine U.S. passports using counterfeit or fraudulently obtained documents. In the most egregious case, our investigator obtained a U.S. passport using counterfeit documents and the SSN of a man who died in 1965. In another case, our undercover investigator obtained a U.S. passport using counterfeit documents and the genuine SSN of a fictitious 5-year-old child—even though his counterfeit documents and application indicated he was 53 years old. State and USPS employees did not identify our documents as counterfeit in any of our four tests. Although we do not know what checks, if any, State performed when approving our fraudulent applications, it issued a genuine U.S. passport in each case. All four passports were issued to the same GAO investigator, under four different names. Our tests show a variety of ways that malicious individuals with even minimal counterfeiting capabilities and access to another person’s identity could obtain genuine U.S. passports using counterfeit or fraudulently obtained documents. Table 1 below shows the month of passport application, the type of counterfeit or fraudulently obtained documents used, and the number of days that passed between passport application and passport issuance for each of our four tests. Our investigator used a genuine U.S. passport obtained using counterfeit or fraudulently obtained documents to pass through airport security. In January 2009, our investigator purchased an airline ticket for a domestic flight using the fictitious name from one of our test scenarios. He then used the fraudulently obtained passport from that test as proof of identity to check in to his flight, get a boarding pass, and pass through the security checkpoint at a major metropolitan-area airport. Figure 1 below shows the boarding pass. After our investigator successfully passed through the checkpoint, he left the airport and cancelled his airline ticket. Our first test found that USPS did not detect the counterfeit West Virginia driver’s license our undercover investigator presented as proof of his identity during the passport application process. Further, State issued our investigator a genuine U.S. passport despite the counterfeit New York birth certificate he used as proof of his U.S. citizenship. In July 2008, our investigator entered a USPS office in Virginia and approached a USPS employee to apply for a passport. The USPS employee greeted the investigator and took his application form, counterfeit New York birth certificate, and counterfeit West Virginia driver’s license. On these documents, we used a fictitious identity and an SSN that we had previously obtained from SSA for the purpose of conducting undercover tests. The USPS employee reviewed the application materials line by line to make sure that the information on the application form matched the information on the birth certificate and driver’s license. After she completed her review of the application materials, the USPS employee took the investigator’s birth certificate and funds to pay for the application fee. She administered an oath, and then told the investigator that he should receive the passport within 1 to 4 weeks. State issued a genuine U.S. passport to our undercover investigator 8 days after he submitted his application. About a week after State issued the passport, it arrived at the mailing address indicated on the application materials. 
Our second test found that State did not detect a counterfeit New York birth certificate our undercover investigator presented to prove his U.S. citizenship in support of a passport application. In July 2008, our investigator—the same investigator as in the first test above—obtained a genuine identification card from the Washington, D.C., Department of Motor Vehicles using counterfeit documents, which he then used to apply for a U.S. passport. In August 2008, the same investigator entered State’s regional Washington, D.C., passport-issuing office with a completed passport application form, a counterfeit New York birth certificate, the genuine D.C. identification card, two passport photographs, sufficient funds to pay for the application fee, and an electronic ticket (e-ticket) confirming that he had a flight to Germany. For this test, we used a fictitious identity and an SSN that we had previously obtained from SSA for the purpose of conducting undercover investigations. The investigator presented his application form and materials to a State employee, who went line by line through the application form, matching the information to the accompanying documents. The State employee provided the investigator with a number and instructed him to wait until his number was called. After his number was called, the investigator proceeded to a window to speak with another State employee. The second employee looked over his materials to make sure that he had all of the necessary documentation, took his birth certificate and money, and administered an oath. A few days later, the investigator returned to the same passport facility and picked up his passport. State issued the passport on the same day that the investigator submitted his application, in the fictitious name presented on the fraudulently obtained and counterfeit documents. As with our first test, our third test found that USPS did not detect the counterfeit West Virginia driver’s license our undercover investigator presented as proof of his identity during the passport application process. Further, State issued our investigator a genuine U.S. passport based on a counterfeit New York birth certificate as proof of U.S. citizenship. In October 2008, our investigator—the same investigator as in the first two tests mentioned above—entered a USPS office in Maryland to apply for a U.S. passport. A USPS employee greeted the investigator and took his application form, as well as his counterfeit New York birth certificate and counterfeit West Virginia driver’s license. The application materials used the name and genuine SSN of a fictitious 5-year-old child, which we obtained from a previous investigation. However, our investigator listed his age as 53 on the application materials, which clearly did not match the date of birth associated with the SSN used in this test. The USPS employee reviewed the application materials, including meticulously matching the information on the application form to the birth certificate and driver’s license. After she completed her review of the application materials, the USPS employee took the investigator’s birth certificate and payment for the application fee, administered an oath, and told the investigator that he should receive the passport within 2 to 4 weeks. State issued a genuine U.S. passport to our investigator, in the fictitious name based on the counterfeit documents, 7 days after he submitted his application. 
The passport arrived at the mailing address indicated by our investigator on the application materials a few days after its issuance. As with our first and third tests, our fourth test found that USPS did not detect the counterfeit identification document—a bogus Florida driver’s license—that our undercover investigator presented to support his passport application. Further, State issued our investigator a genuine U.S. passport based on a counterfeit New York birth certificate as proof of U.S. citizenship. In December 2008, our investigator—the same investigator as in the three tests mentioned above—entered a USPS office in Maryland to apply for a U.S. passport. A USPS employee greeted the investigator and took his application form, as well as his counterfeit New York birth certificate and counterfeit Florida driver’s license. His application materials used the name and SSN of a person who died in 1965 and who would have been 59 years old at the time of our test had he still been alive. The USPS employee reviewed the application materials, matching the information on the application form to the birth certificate and driver’s license. After completing the review of the application materials, the USPS employee took the investigator’s birth certificate and funds to pay for the application fee. The USPS employee administered an oath and then told the investigator that he should receive the passport within 4 to 6 weeks. Four days after our investigator submitted his application, State issued a genuine U.S. passport in the fictitious name presented on the counterfeit documents. The passport arrived at the mailing address indicated by our investigator on the application materials. We briefed State officials on the results of our investigation. They agreed that our findings expose a major vulnerability in State’s passport issuance process. According to State officials, the department’s ability to verify the information submitted as part of a passport application is hampered by limitations to its information sharing and data access with other agencies at the federal and state levels. They said that some federal agencies limit State’s access to their records due to privacy concerns or the fact that State is not a law enforcement agency. In addition, they said that State does not currently have the ability to conduct real-time verification of the authenticity of birth certificates presented by passport applicants. They added that birth certificates present an exceptional challenge to fraud detection efforts, as there are currently thousands of different acceptable formats for birth certificates. Further, they indicated that there are difficulties with verifying the authenticity of driver’s licenses. Moreover, they said that although State attempts to verify SSN information submitted on passport applications on a daily basis with SSA, the results of this data-sharing process are imperfect. For example, State officials said that many of the mismatches identified through this verification process are actually due to typos or other common errors. However, they said that while these data checks may not identify all cases in which an applicant’s data do not match the information in SSA’s records, in some instances—such as cases in which an SSN is tied to a deceased individual—investigators from the Passport Fraud Branch of State’s Bureau of Diplomatic Security will attempt to check publicly available databases to resolve the mismatch. 
State officials acknowledged that they have issued other fraudulently obtained passports but did not offer an estimate of the magnitude of the problem. In order to improve State’s current passport fraud detection capabilities, officials said that State would need greater cooperation from other agencies at both the federal and state levels, and the ability to access other agencies’ records in real time. Subsequent to our briefing, State officials informed us that they had identified and revoked our four fraudulently obtained U.S. passports, and that they would study the matter further to determine what steps would be appropriate to improve passport issuance procedures. We did not verify the accuracy of these State officials’ statements. We also briefed a USPS representative on the results of our investigation; the representative did not offer any comments at the time of our briefing. We are sending copies of this report to the Secretary of State and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
A genuine U.S. passport is a vital document, permitting its owner to travel freely in and out of the United States, prove U.S. citizenship, obtain further identification documents, and set up bank accounts, among other things. Unfortunately, a terrorist or other criminal could take advantage of these benefits by fraudulently obtaining a genuine U.S. passport from the Department of State (State). There are many ways that malicious individuals could fraudulently obtain a genuine U.S. passport, including stealing an American citizen's identity and counterfeiting or fraudulently obtaining identification or citizenship documents to meet State requirements. GAO was asked to proactively test the effectiveness of State's passport issuance process to determine whether the process is vulnerable to fraud. To do so, GAO designed four test scenarios that simulated the actions of a malicious individual who had access to an American citizen's personal identity information. GAO created counterfeit documents for four fictitious or deceased individuals using off-the-shelf, commercially available hardware, software, and materials. An undercover GAO investigator then applied for passports at three United States Postal Service (USPS) locations and a State-run passport office. GAO's investigation shows that terrorists or criminals could steal an American citizen's identity, use basic counterfeiting skills to create fraudulent documentation for that identity, and obtain a genuine U.S. passport from State. GAO conducted four tests simulating this approach and was successful in obtaining a genuine U.S. passport in each case. In the most egregious case, an undercover GAO investigator obtained a passport using counterfeit documents and the Social Security Number (SSN) of a man who died in 1965. In another case, the investigator obtained a passport using counterfeit documents and the genuine SSN of a fictitious 5-year-old child GAO created for a previous investigation--even though the investigator's counterfeit documents and application indicated he was 53 years old. All four passports were issued to the same GAO investigator, under four different names. In all four tests, GAO used counterfeit and/or fraudulently obtained documents. State and USPS employees did not identify GAO's documents as counterfeit. GAO's investigator later purchased an airline ticket under the name used on one of the four fraudulently obtained U.S. passports, and then used that passport as proof of identity to check in to his flight, get a boarding pass, and pass through the security checkpoint at a major metropolitan-area airport. At a briefing on the results of GAO's investigation, State officials agreed with GAO that the investigation exposes a major vulnerability in State's passport issuance process. According to State officials, State's fraud detection efforts are hampered by limitations to its information sharing and data access with other federal and state agencies. After GAO's briefing, State officials notified GAO that they identified and revoked GAO's four fraudulently obtained U.S. passports, and were studying the matter to determine the appropriate steps for improving State's passport issuance process.
BIE’s Indian education programs derive from the federal government’s trust responsibility to Indian tribes, a responsibility established in federal statutes, treaties, court decisions, and executive actions. It is the policy of the United States to fulfill this trust responsibility for educating Indian children by working with tribes to ensure that education programs are of the highest quality, among other things. In accordance with this trust responsibility, Interior is responsible for providing a safe and healthy environment for students to learn. BIE’s mission is to provide Indian students with quality education opportunities. Students attending BIE schools generally must be members of federally recognized Indian tribes, or descendants of members of such tribes, and reside on or near federal Indian reservations. All BIE schools—both tribally operated and BIE-operated—receive almost all of their operating funds from federal sources, namely Interior and Education. Specifically, these elementary and secondary schools received approximately $830 million in fiscal year 2014—including about 75 percent, or about $622 million, from Interior and about 24 percent, or approximately $197 million, from Education. BIE schools also received small amounts of funding from other federal agencies (about 1 percent), mainly the Department of Agriculture, which provides reduced-price or free school meals for eligible low-income children. (See fig. 1.) While BIE schools are primarily funded through Interior, they receive annual formula grants from Education, similar to public schools. Specifically, schools receive Education funds under Title I, Part A of the Elementary and Secondary Education Act (ESEA) of 1965, as amended, and the Individuals with Disabilities Education Act. Title I—the largest funding source for kindergarten through grade 12 under ESEA—provides funding to expand and improve educational programs in schools with students from low-income families and may be used for supplemental services to improve student achievement, such as instruction in reading and mathematics. An Education study published in 2012 found that all BIE schools were eligible for Title I funding on a school-wide basis because they all had at least 40 percent of children from low-income households in school year 2009-10. Further, BIE schools receive Individuals with Disabilities Education Act funding for special education and related services, such as physical therapy or speech therapy. BIE schools tend to have a higher percentage of students with special needs than public schools nationally. BIE schools’ educational functions are primarily the responsibility of BIE, while their administrative functions are divided mainly between two other Interior offices. The Bureau of Indian Education develops educational policies and procedures, supervises program activities, and approves schools’ expenditures. Three Associate Deputy Directors are responsible for overseeing multiple BIE local education offices that work directly with schools to provide technical assistance. Some BIE local offices also have their own facility managers that serve schools overseen by the office. The Office of the Deputy Assistant Secretary for Management oversees many of BIE’s administrative functions, including acquisitions and contract services, financial management, budget formulation, and property management. 
This office is also responsible for developing policies and procedures and providing technical assistance and funding to Bureau of Indian Affairs (BIA) regions and BIE schools to address their facility needs. Professional staff in this division—including engineers, architects, facility managers, and support personnel—are tasked with providing expertise in all facets of the facility management process. The Bureau of Indian Affairs administers a broad array of social services and other supports to tribes at the regional level. Regarding school facility management, BIA oversees the day-to-day implementation and administration of school facility construction and repair projects through its regional field offices. Currently there are 12 regional offices, and 9 of them have facility management responsibilities. These responsibilities include conducting health and safety inspections to ensure compliance with relevant requirements and providing technical assistance to BIE schools on facility issues. To determine how student performance at BIE schools compares to that of public school students, we reviewed data on student performance for the 4th and 8th grades at BIE and public schools from 2005 to 2011, drawing on data from the National Assessment of Educational Progress, a project of Education. Since 1969, these assessments have been conducted periodically in various subjects, including reading and mathematics. Further, these assessments are administered uniformly across the nation, and the results serve as a common metric for all states and selected urban districts. Indian Affairs’ administration of BIE schools—which has undergone multiple realignments over the past 10 years—is fragmented. In addition to BIE, multiple offices within BIA and the Office of the Deputy Assistant Secretary for Management have responsibilities for educational and administrative functions for BIE schools. Notably, when the Assistant Secretary for Indian Affairs was asked at a February 2015 hearing to clarify the responsibilities that various offices have over BIE schools, he responded that the current structure is “a big part of the problem” and that the agency is currently in the process of realigning the responsibilities various entities have with regard to Indian education, adding that it is a challenging and evolving process. Indian Affairs provided us with a chart of the offices that have a role in supporting and overseeing BIE school facilities alone; it shows numerous offices across three organizational divisions. (See fig. 4.) The administration of BIE schools has undergone several reorganizations over the years to address persistent concerns with operational effectiveness and efficiency. In our 2013 report, we noted that for a brief period from 2002 to 2003, BIE was responsible for its own administrative functions, according to BIE officials. However, in 2004 its administrative functions were centralized under the Office of the Deputy Assistant Secretary for Management. More recently, in 2013 Indian Affairs implemented a plan to decentralize some administrative responsibilities for schools, delegating certain functions to BIA regions. Further, in June 2014, the Secretary of the Interior issued an order to restructure BIE by the start of school year 2014-15 to centralize the administration of schools, decentralize services to schools, and increase the capacity of tribes to directly operate them, among other goals. Currently, Indian Affairs’ restructuring of BIE is ongoing. 
In our 2013 report, we found that the challenges associated with the fragmented administration of BIE schools were compounded by repeated turnover in leadership over the years, including frequent changes in the tenure of acting and permanent assistant secretaries of Indian Affairs from 2000 through 2013. We also noted that frequent leadership changes may complicate efforts to improve student achievement and negatively affect an agency’s ability to sustain focus on key initiatives. Indian Affairs’ administration of BIE schools has also been undermined by the lack of a strategic plan for guiding its restructuring of BIE’s administrative functions and carrying out BIE’s mission to improve education for Indian students. We previously found that key practices for organizational change suggest that effective implementation of a results-oriented framework, such as a strategic plan, requires agencies to clearly establish and communicate performance goals, measure progress toward those goals, determine strategies and resources to effectively accomplish the goals, and use performance information to make the decisions necessary to improve performance. We noted in our 2013 report that BIE officials said that developing a strategic plan would help its leadership and staff pursue goals and collaborate effectively to achieve them. Indian Affairs agreed with our recommendation to develop such a plan and recently reported it had taken steps to do so. However, the plan has yet to be finalized. Fragmented administration of schools may also contribute to delays in providing materials and services to schools. For example, our previous work found that the Office of the Deputy Assistant Secretary for Management’s lack of knowledge about schools’ needs and lack of expertise in relevant education laws and regulations resulted in critical delays in procuring and delivering school materials and supplies, such as textbooks. In another instance, we found that the Office of the Deputy Assistant Secretary for Management’s processes led to an experienced speech therapist’s contract being terminated at a BIE school in favor of a less expensive contract with another therapist. However, because the new therapist was located in a different state and could not travel to the school, the school was unable to fully implement students’ individualized education programs in the timeframe required by the Individuals with Disabilities Education Act. In addition, although BIE accounted for approximately 34 percent of Indian Affairs’ budget, several BIE officials reported that improving student performance was often overshadowed by other agency priorities. This hindered Indian Affairs staff in seeking and acquiring expertise in education issues. In our 2013 report, we also found that poor communication among Indian Affairs offices and with schools about educational services and facilities undermines administration of BIE schools. According to school officials we interviewed, communication between Indian Affairs’ leadership and BIE is weak, resulting in confusion about policies and procedures. We have reported that working relations between BIE and the Office of the Deputy Assistant Secretary for Management’s leadership are informal and sporadic, and BIE officials noted having difficulty obtaining timely updates from that office on its responses to requests for services from schools. In addition, there is a lack of communication between Indian Affairs’ leadership and schools. 
BIE and school officials in all four states we visited reported that they were unable to obtain definitive answers to policy or administrative questions from BIE’s leadership in Washington, D.C., and Albuquerque, NM. For example, school officials in one state we visited reported that they requested information from BIE’s Albuquerque office in the 2012-13 school year about the amount of Individuals with Disabilities Education Act funds they were to receive. The Albuquerque office subsequently provided them three different dollar amounts. The school officials were eventually able to obtain the correct amount of funding from their local BIE office. Similarly, BIE and school officials in three states reported that they often do not receive responses from BIE’s Washington, D.C., and Albuquerque offices to questions they pose via e-mail or phone. Further, one BIE official stated that meetings with BIE leadership are venues for conveying information from management to the field, rather than opportunities for a two-way dialogue. We testified recently that poor communication has also led to confusion among some BIE schools about the roles and responsibilities of the various Indian Affairs offices responsible for facility issues. For example, the offices involved in facility matters continue to change, due partly to two reorganizations of BIE, BIA, and the Office of the Deputy Assistant Secretary for Management over the past 2 years. BIE and tribal officials at some schools we visited said they were unclear about which office they should contact about facility problems or to elevate problems that are not addressed. At one school we visited, a BIE school facility manager submitted a request in February 2014 to replace a water heater so that students and staff would have hot water in the elementary school. However, the school did not designate this repair as an emergency. Therefore, BIA facility officials told us that they were not aware of this request until we brought it to their attention during our site visit in December 2014. Even after we did so, it took BIE and BIA officials over a month to approve the purchase of a new water heater, which cost about $7,500. As a result, students and staff at the elementary school went without hot water for about a year. We have also observed difficulties in providing support for the most basic communications, such as the availability of up-to-date contact information for BIE and its schools. For example, BIE schools and BIA regions use an outdated national directory with contact information for BIE and school officials, which was last published in 2011. This may impair communications, especially given significant turnover of BIE and school staff. It may also hamper the ability of schools and BIA officials to share timely information with one another about funding and repair priorities. In one BIA region we visited, officials have experienced difficulty reaching certain schools by e-mail and sometimes rely on sending messages by fax to obtain schools’ priorities for repairs. This situation is inconsistent with federal internal control standards that call for effective internal communication throughout an agency. In 2013, we recommended that Interior develop a communication strategy for BIE to update its schools and key stakeholders on critical developments. We also recommended that Interior include a communication strategy—as part of an overall strategic plan for BIE—to improve communication within Indian Affairs and between Indian Affairs and BIE staff. 
Indian Affairs agreed with these two recommendations and recently reported taking some steps to address them. However, it did not provide us with documentation showing that it has fully implemented them. Limited staff capacity poses another challenge to addressing BIE school needs. According to key principles of strategic workforce planning, the appropriate geographic and organizational deployment of employees can further support organizational goals and strategies and enable an organization to have the right people with the right skills in the right place. In 2013 we reported that staffing levels at BIA regional offices were not adjusted to meet the needs of BIE schools in regions with varying numbers of schools, ranging from 2 to 65. Therefore, we noted that it is important to ensure that each BIA regional office has an appropriate number of staff who are familiar with education laws and regulations and school-related needs to support the BIE schools in its region. Consequently, in 2013 we recommended that Indian Affairs revise its strategic workforce plan to ensure that its employees providing administrative support to BIE have the requisite knowledge and skills to help BIE achieve its mission and are placed in the appropriate offices so that regions with a large number of schools have sufficient support. Indian Affairs agreed to implement the recommendation but has not yet done so. BIA regional offices also have limited staff capacity for addressing BIE school facility needs due to steady declines in staffing levels for over a decade, gaps in technical expertise, and limited institutional knowledge. For example, our preliminary analysis of Indian Affairs data shows that about 40 percent of BIA regional facility positions are currently vacant, including regional facility managers, architects, and engineers who typically serve as project managers for school construction and provide technical expertise. Our work and other studies have cited the lack of capacity of Indian Affairs’ facility staff as a longstanding agency challenge. Further, officials at several schools we visited said they face similar staff capacity challenges. For example, at one elementary school we visited, the number of maintenance employees has decreased over the past decade from six employees to one full-time employee and a part-time assistant, according to school officials. As a result of the staffing declines, school officials said that facility maintenance staff may sometimes defer needed maintenance. Within BIE, we also found limited staff capacity in another area of school operations—oversight of school expenditures. As we reported in November 2014, the number of key local BIE officials monitoring these expenditures had decreased from 22 in 2011 to 13, due partly to budget cuts. These officials had many additional responsibilities for BIE schools, similar to those of public school district superintendents, such as providing academic guidance. As a result, the remaining 13 officials had an increased workload, making it challenging for them to effectively oversee schools. For example, we found that one BIE official in North Dakota was also serving in an acting capacity for an office in Tennessee and was responsible for overseeing and providing technical assistance to schools in five other states—Florida, Louisiana, Maine, Mississippi, and North Carolina. 
Further, we reported that the challenges BIE officials confront in overseeing school expenditures are exacerbated by a lack of financial expertise and training. For example, although key local BIE officials are responsible for making important decisions about annual audit findings, such as whether school funds are being spent appropriately, they are not auditors or accountants. Additionally, as we reported in November 2014, some of these BIE officials had not received recent training on financial oversight. Without adequate staff and training, we reported, BIE will continue struggling to adequately monitor school expenses. Consequently, we recommended in 2014 that Indian Affairs develop a comprehensive workforce plan to ensure that BIE has an adequate number of staff with the requisite knowledge and skills to effectively oversee BIE school expenditures. Indian Affairs agreed with our recommendation but has not yet taken any action. Our work has shown that another management challenge, inconsistent accountability, hinders Indian Affairs in the areas of (1) managing school construction and (2) monitoring overall school expenditures. Specifically, this challenge undermines its ability to ensure that Indian students receive a quality education in a safe environment that is conducive to learning. In our February 2015 testimony on BIE school facilities, we reported that Indian Affairs had not ensured consistent accountability for some recent school construction projects. According to agency and school officials we interviewed, some recent construction projects, including new roofs and buildings, went relatively well, while others faced numerous problems. The problems we found with construction projects at some schools suggest that Indian Affairs is not fully or consistently using management practices to ensure contractors perform as intended. For example, officials at three schools said they encountered leaks with roofs installed within the past 11 years. At one BIE-operated school we visited, Indian Affairs managed a project in which a contractor completed a $3.5 million project to replace roofs in 2010, but the roofs have leaked since their installation, according to agency documents. These leaks have led to mold in some classrooms and numerous ceiling tiles having to be removed throughout the school. (See fig. 5.) In 2011 this issue was elevated to a senior official within Indian Affairs, who was responsible for facilities and construction. He stated that the situation was unacceptable and called for more forceful action by the agency. Despite numerous subsequent repairs of these roofs, school officials and regional Indian Affairs officials told us in late 2014 that the leaks and damage to the structure continue. They also said that they were not sure what further steps, if any, Indian Affairs would take to resolve the leaks or hold the contractors or suppliers accountable, such as filing legal claims against the contractor or supplier if appropriate. In South Dakota, a school we visited had recently encountered problems constructing a $1.5 million building for bus maintenance and storage using federal funds. According to Indian Affairs and school officials, although the project was nearly finished at the time of our visit in December 2014, Indian Affairs, the school, and the contractor still had not resolved various issues, including drainage and heating problems. 
Further, the new bus maintenance building has one hydraulic lift, but the building is not long enough for a large school bus to fit on the lift when the exterior door is closed. Thus, staff using the lift would need to maintain or repair a large bus with the door open, which is not practical in the cold South Dakota winters. (See fig. 6.) According to Indian Affairs officials, part of the difficulty with this federally funded project resulted from the school’s use of a contractor responsible for both the design and construction of the project, which limited Indian Affairs’ ability to oversee it. Indian Affairs officials said that this arrangement, known as “design-build,” may sometimes have advantages, such as faster project completion times, but may also give greater discretion to the contractor responsible for both the design and construction of the building. For example, Indian Affairs initially raised questions about the size of the building to store and maintain buses. However, agency officials noted that the contractor was not required to incorporate Indian Affairs’ comments on the building’s design or obtain its approval for the project’s design, partly because Indian Affairs’ policy does not appear to address approval of the design in a “design-build” project. Further, neither the school nor Indian Affairs used financial incentives to ensure satisfactory performance by the contractor. Specifically, the school had already paid the firm nearly the full amount of the project before final completion, according to school officials, leaving it little financial leverage over the contractor. We will continue to monitor such issues as we complete our ongoing work on BIE school facilities and consider any recommendations that may be needed to address these issues. In our 2014 report on BIE school spending, we found that BIE’s oversight did not ensure that school funds were spent appropriately on educational services, although external auditors had determined that there were serious financial management issues at some schools. Specifically, auditors identified $13.8 million in unallowable spending by 24 BIE schools as of July 2014. Additionally, in one case, an annual audit found that a school lost about $1.2 million in federal funds that were illegally transferred to an offshore bank account; the same school had also accumulated at least another $6 million in federal funds in a U.S. bank account. As of June 2014, BIE had not determined how the school accrued that much in unspent federal funds. Further, instead of using a risk-based approach to its monitoring efforts, BIE indicated that it relies primarily on ad hoc suggestions by staff regarding which schools to target for greater oversight. For example, BIE failed to increase its oversight of expenditures at one school where auditors found that the school’s financial statements had to be adjusted by about $1.9 million. The same auditors also found unreliable accounting of federal funds during a 3-year period we reviewed. We recommended that Indian Affairs develop a risk-based approach to overseeing school expenditures to focus BIE’s monitoring activities on schools that auditors have found to be at the greatest risk of misusing federal funds. While Indian Affairs agreed, it has not yet implemented this recommendation. 
In addition, we found that BIE did not use written procedures to monitor schools’ use of Indian School Equalization Program funds, which accounted for almost half of schools’ total operating funding in fiscal year 2014. In 2014 we recommended that Indian Affairs develop written procedures, including for Interior’s Indian School Equalization Program, to consistently document its monitoring activities and the actions it has taken to resolve financial weaknesses identified at schools. While Indian Affairs generally agreed, it has not yet taken this action. Without a risk-based approach to and written procedures for overseeing school spending—both integral to federal internal control standards—there is little assurance that federal funds are being used for their intended purpose: to provide BIE students with needed instructional and other educational services. In conclusion, Indian Affairs has been hampered by systemic management challenges related to BIE’s programs and operations that undermine its mission to provide Indian students with quality education opportunities and safe environments that are conducive to learning. In light of these management challenges, we have made several recommendations to Indian Affairs to improve its management of BIE schools. While Indian Affairs has generally agreed with these recommendations and reported taking some steps to address them, it has not yet fully implemented them. Unless steps are promptly taken to address these challenges to Indian education, it will be difficult for Indian Affairs to ensure the long-term success of a generation of students. We will continue to monitor these issues as we complete our ongoing work and consider any additional recommendations that may be needed. Chairman Barrasso, Vice Chairman Tester, and Members of the Committee, this concludes my prepared statement. I will be pleased to answer any questions that you may have. For further information regarding this testimony, please contact Melissa Emrey-Arras at (617) 788-0534 or emreyarrasm@gao.gov. Key contributors to this testimony were Elizabeth Sirois (Assistant Director), Edward Bodine, Matthew Saradjian, and Ashanta Williams. Also providing legal or technical assistance were James Bennett, David Chrisinger, Jean McSween, Jon Melhus, Sheila McCoy, and James Rebbe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
BIE is responsible for providing quality education opportunities to Indian students. It currently oversees 185 schools, serving about 41,000 students on or near Indian reservations. Poor student outcomes raise questions about how well BIE is achieving its mission. In September 2013, GAO reported that BIE student performance has been consistently below that of Indian students in public schools.

This testimony discusses Indian Affairs’ management challenges in improving Indian education, including (1) its administration of schools, (2) staff capacity to address schools’ needs, and (3) accountability for managing school construction and monitoring school spending. This testimony is based on GAO reports issued in September 2013 and November 2014, as well as GAO’s February 2015 testimony, which presents preliminary results from its ongoing review of BIE school facilities. A full report on school facilities will be issued later this year. GAO reviewed relevant federal laws and regulations; analyzed agency data; and conducted site visits to schools, which were selected based on their geographic diversity and other factors. GAO has made several recommendations in its earlier reports; it is not making any new recommendations in this statement.

GAO has reported for several years on how systemic management challenges within the Department of the Interior’s Office of the Assistant Secretary–Indian Affairs (Indian Affairs) continue to hamper efforts to improve Bureau of Indian Education (BIE) schools. Over the past 10 years, Indian Affairs has undergone several organizational realignments, resulting in multiple offices across different units being responsible for BIE schools’ education and administrative functions. Indian Affairs’ fragmented organization has been compounded by frequent turnover in its leadership over a 13-year period and its lack of a strategic plan for BIE. Further, fragmentation and poor communication among Indian Affairs offices have led to confusion among schools about whom to contact about problems, as well as delays in the delivery of key educational services and supplies, such as textbooks. Key practices for organizational change suggest that agencies develop a results-oriented framework, such as a strategic plan, to clearly establish and communicate performance goals and measure progress toward them. In 2013, GAO recommended that Interior develop a strategic plan for BIE and a strategy for communicating with schools, among other recommendations. Indian Affairs agreed with and reported taking some steps to address the two recommendations. However, it has not fully implemented them.

Limited staff capacity poses another challenge to addressing BIE school needs. According to key principles for effective workforce planning, the appropriate deployment of employees enables organizations to have the right people, with the right skills, in the right places. However, Indian Affairs data indicate that about 40 percent of its regional facility positions, such as architects and engineers, are vacant. Similarly, in 2014 GAO reported that BIE had many vacancies in positions to oversee school spending. Further, remaining staff had limited financial expertise and training. Without adequate staff and training, Indian Affairs will continue to struggle to monitor and support schools. GAO recommended that Interior revise its workforce plan so that employees are placed in the appropriate offices and have the requisite knowledge and skills to better support schools.
Although Indian Affairs agreed with this recommendation, it has not yet implemented it.

Inconsistent accountability hampers management of BIE school construction and monitoring of school spending. Specifically, GAO has found that Indian Affairs did not consistently oversee some construction projects. For example, at one school GAO visited, Indian Affairs spent $3.5 million to replace multiple roofs in 2010. The new roofs have leaked since their installation, causing mold and ceiling damage, and because Indian Affairs has not yet adequately addressed the problems, the leaks and structural damage continue. Inconsistent accountability also impairs BIE’s monitoring of school spending. In 2014 GAO found that BIE does not adequately monitor school expenditures using written procedures or a risk-based monitoring approach, contrary to federal internal control standards. As a result, BIE failed to provide effective oversight of schools when they misspent millions of dollars in federal funds. GAO recommended that the agency develop written procedures and a risk-based approach to improve its monitoring. Indian Affairs agreed but has yet to implement these recommendations.
Mr. Chairman, there is a continuing and heightened need for better, more effective, and more comprehensive information sharing. We agree that the intelligence community needs to move from a culture of “need to know” to one of “need to share.” The 9/11 Commission has made observations regarding information sharing and has recommended procedures to provide incentives for sharing and to create a “trusted information network.” Many Commission recommendations address the need to improve information and intelligence collection, sharing, and analysis within the intelligence community itself. In addition, we must not lose sight of the fact that the purpose of improving information analysis and sharing is to provide better information throughout the federal government, and ultimately also to state and local governments, the private sector, and our citizens, so that collectively we are all better prepared. I want to make it clear that such information sharing must protect confidential sources and methods, and we do not propose any changes that would infringe upon those protections.

In addition, as the Congress considers the Commission’s recommendations, I would recommend that it consider the role that state and local agencies and the private sector should play as informed partners in homeland security. The Commission’s work, like our own observations, notes the changing perspective on “federal” versus “other entities’” roles in homeland security and homeland defense. We have observed that, in performing its constitutional role of providing for the common defense, the federal government must prevent and deter terrorist attacks on our homeland as well as detect impending danger before attacks occur. Although it may be impossible to detect, prevent, or deter every attack, steps can and must be taken to reduce the risk posed by threats to homeland security. Furthermore, to be successful in this area, the federal government must partner with a variety of organizations, both domestic and international.

Traditionally, protecting the homeland against threats was generally considered a federal responsibility. To meet this responsibility, the federal government (within and across federal agencies) gathers intelligence, which is often classified as national security information. This information is protected and safeguarded to prevent unauthorized access by requiring appropriate security clearances and a “need to know.” The federal government normally did not share national-level intelligence with states and cities, since they were not viewed as having a significant role in preventing terrorism, and it therefore did not generally grant state and city officials access to classified information. After the September 11 attacks, however, the view that states and cities do not have a significant role in homeland security changed, and the “need to share” intelligence information became clear. Reconciling the need to share with actually sharing, however, has been at the heart of the 9/11 Commission’s recommendations and our own findings and observations on practices to improve information sharing. In work begun before the September 11 attacks, we reported on the information-sharing practices of organizations that successfully share sensitive or time-critical information.
We found that these practices include establishing trust relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice on vulnerabilities and incidents; developing standards and agreements on how shared information will be used and protected; establishing effective and appropriately secure communications; and taking steps to ensure that sensitive information is not inappropriately disseminated. As you might recall, we also testified before this committee last year on information sharing, and GAO has made numerous recommendations related to information sharing, particularly as they relate to fulfilling DHS’s critical infrastructure protection responsibilities.

The Homeland Security Information Sharing Act, included in the Homeland Security Act of 2002 (P.L. 107-296), requires the President to prescribe and implement procedures for facilitating homeland security information sharing and establishes authorities to share different types of information, such as grand jury information; electronic, wire, and oral interception information; and foreign intelligence information. In July 2003, the President assigned these functions to the Secretary of Homeland Security, but no deadline was established for developing such information-sharing procedures.

To accomplish its missions, DHS must gain access to, receive, and analyze law enforcement information, intelligence information, and other threat, incident, and vulnerability information from federal and nonfederal sources, and it must analyze such information to identify and assess the nature and scope of terrorist threats. DHS must also share information both internally and externally with agencies and law enforcement on such things as goods and passengers inbound to the United States and individuals who are known or suspected terrorists and criminals (e.g., watch lists).

As we reported in June 2002, the federal government had made progress in developing a framework to support a more unified effort to secure the homeland, including information sharing. However, this work found additional needs and opportunities to enhance the effectiveness of information sharing among federal agencies with homeland security or homeland defense responsibilities, with various state and city law enforcement agencies that have a key role in homeland security, and with the private sector. As we reported in August 2003, efforts to improve intelligence and information sharing still needed to be strengthened. Intelligence- and information-sharing initiatives implemented by states and cities were not effectively coordinated with those of federal agencies, nor were they coordinated within and between federal entities. Furthermore, neither federal, state, nor city governments considered the information-sharing process to be effective. For example, information on threats, methods, and techniques of terrorists was not routinely shared; information that was shared was not perceived as timely, accurate, or relevant; and federal officials had not established comprehensive processes or procedures to promote effective information sharing.
At that time, we recommended that the Secretary of Homeland Security work with the heads of other federal agencies and state and local authorities to incorporate the existing information-sharing guidance contained in the various national strategies and the information-sharing procedures required by the Homeland Security Act; establish a clearinghouse to coordinate the various information-sharing initiatives to eliminate possible confusion and duplication of effort; fully integrate states and cities into the national policy-making process for information sharing and take steps to provide greater assurance that actions at all levels of government are mutually reinforcing; identify and address the perceived barriers to federal information sharing; and use a survey method or a related data collection approach to determine, over time, the needs of private and public organizations for information related to homeland security and to measure progress in improving information sharing at all levels of government. DHS concurred with these recommendations.

DHS and other federal agencies have instituted major counterterrorism efforts involving information and intelligence sharing over the past 2 years. For example, the Terrorist Threat Integration Center (TTIC) was designed to improve the collection, analysis, and sharing of all counterterrorism intelligence gathered in the United States and overseas. The DHS Information Analysis and Infrastructure Protection (IAIP) Directorate is intended to receive intelligence from a variety of federal sources and act as a central fusion point for all intelligence relevant to homeland security and related critical infrastructure protection. Furthermore, the FBI has created a new Office of Intelligence, established a National Joint Terrorism Task Force, expanded its Joint Terrorism Task Forces (JTTFs), and recently made operational the interagency Terrorist Screening Center.

Although improvements have been made, we continue to identify needs, such as developing a comprehensive and coordinated national plan to facilitate information sharing on critical infrastructure protection (CIP); developing productive information-sharing relationships among the federal government, state and local governments, and the private sector; and providing appropriate incentives for nonfederal entities to increase information sharing with the federal government and enhance other critical infrastructure protection efforts. As we recently reported, information sharing and analysis centers (ISACs) have identified a number of challenges to effective CIP information sharing between the federal government, state and local governments, and the private sector, including sharing information on physical and cyber threats, vulnerabilities, incidents, potential protective measures, and best practices. Such challenges include building trusted relationships; developing processes to facilitate information sharing; overcoming barriers to information sharing; clarifying the roles and responsibilities of the various government and private sector entities that are involved in protecting critical infrastructure; and funding ISAC operations and activities.
Although DHS has taken a number of actions to implement the public/private partnership called for by federal CIP policy, it has not yet developed a plan that describes how it will carry out its information-sharing responsibilities and relationships, including consideration of appropriate incentives for nonfederal entities to increase information sharing with the federal government, increase sector participation, and perform other specific tasks to protect the critical infrastructure. Such a plan could encourage improved information sharing among the ISACs, other CIP entities, and the department by clarifying the roles and responsibilities of all the entities involved and clearly articulating actions to address the challenges that remain. The department also lacks policies and procedures to ensure effective coordination and sharing of ISAC-provided information among the appropriate components within the department. Developing such policies and procedures would help ensure that information is appropriately shared among its components and with other government and private sector CIP entities. GAO recommended that the Secretary of Homeland Security direct officials within DHS to (1) proceed with the development of an information-sharing plan that describes the roles and responsibilities of DHS, the ISACs, and other entities and (2) establish appropriate department policies and procedures for interactions with other CIP entities and for coordination and information sharing among DHS components. DHS generally agreed with our findings and recommendations.

DHS has also implemented the Homeland Security Advisory System. The system, which uses five color-coded threat levels, was established in March 2002 to disseminate information regarding the risk of terrorist acts to federal agencies, states and localities, and the public. Our recent work indicates that DHS has not yet officially documented communication protocols for providing threat information and guidance to federal agencies and states, with the result that some federal agencies and states may first learn about changes in the national threat level from media sources. Moreover, federal agencies and states responding to our inquiries indicated that they generally did not receive specific threat information and guidance, and they believed this shortcoming hindered their ability to determine whether they were at risk as well as their ability to determine and implement appropriate protective measures.

In addition, while efforts to improve information sharing continue, there is a need for an improved security clearance process so that state, local, and private sector officials have access to the information they need, with appropriate security safeguards in place. In a recent report, we described the FBI’s process for granting state and local law enforcement officials access to classified information. The FBI’s goal is to complete the processing of secret security clearances within 45 to 60 days and top secret security clearances within 6 to 9 months. While the FBI’s processing of top secret security clearances has been generally timely, that was not the case for secret clearances. However, the FBI made substantial improvements in 2003 to the timeliness of processing secret clearances. We have also conducted a body of work finding that long-standing security clearance backlogs and delays in determining clearance eligibility affect industry personnel, military members, and federal employees.
For example, as we reported in May of this year, for industry personnel alone, more than 187,000 reinvestigations, new investigations, or clearance adjudications were not completed within established time frames. Delays in conducting investigations and determining clearance eligibility can increase national security risks, prevent industry personnel from beginning or continuing work on classified programs and activities, or otherwise hinder the sharing of classified threat information with officials having homeland security or homeland defense responsibilities.

The FBI has also taken a number of steps to enhance its information sharing with state and local law enforcement officials, such as providing guidance and additional staffing. The FBI has further increased the number of its JTTFs, from 35 prior to the September 11 attacks to 84 as of July 2004, and state and local law enforcement officials’ participation on these task forces has also increased. The FBI has at least one JTTF in each of its 56 field locations and plans to expand to 100. The FBI also circulates declassified intelligence through a weekly bulletin and provides threat information to state and local law enforcement officials via various database networks.

“There is a fascination in Washington with bureaucratic solutions—rearranging the wiring diagrams, creating new organizations. We do recommend some important institutional changes. We will articulate and defend those proposals. But we believe reorganizing governmental institutions is only a part of the agenda before us. Some of the saddest aspects of the 9/11 story are the outstanding efforts of so many individual officials straining, often without success, against the boundaries of the possible. Good people can overcome bad structures. They should not have to. We have the resources and the people. We need to combine them more effectively, to achieve unity of effort.”

GAO agrees with this comment, and we have noted several related suggestions below. As the committee is aware, GAO has done extensive work on federal organizational structure and how reorganization can improve performance. The 9/11 Commission has recommended major changes to unify strategic intelligence and operational planning with a National Counterterrorism Center and to provide the intelligence community with a new National Intelligence Director. As the Congress and the administration consider the 9/11 Commission’s recommendations, they should consider how best to address organizational changes, roles and responsibilities, and functions for intelligence-sharing effectiveness.

In response to the emerging trends and long-term fiscal challenges the government faces in the coming years, we have an opportunity to create highly effective, performance-based organizations that can strengthen the nation’s ability to meet the challenges of the twenty-first century and reach beyond our current level of achievement. The federal government cannot accept the status quo as a given—we need to reexamine the base of government policies, programs, structures, and operations. We need to minimize the number of layers and silos in government and emphasize horizontal rather than vertical actions, while moving our policy focus to coordination and integration.
The result, we believe, will be a government that is effective and relevant to a changing society—a government that is as free as possible of outmoded commitments and operations that can inappropriately encumber the future, reduce our fiscal flexibility, and prevent future generations from being able to make choices regarding what roles they think government should play.

Many departments and agencies, including those of the intelligence community, were created in a different time and in response to challenges, threats, and priorities very different from today’s. Some have achieved their one-time missions and yet are still in business. Many have accumulated responsibilities beyond their original purposes. Many are still focused on their original missions, which may no longer be relevant or as high a priority in today’s world. Others have not been able to demonstrate how they are making a difference in real and concrete terms. Still others have overlapping or conflicting roles and responsibilities. Redundant, unfocused, uncoordinated, outdated, misaligned, and nonintegrated programs and activities waste scarce funds, confuse and frustrate program customers, and limit overall efficiency and effectiveness. These are the charges highlighted by the 9/11 Commission’s findings and recommendations.

The problems the 9/11 Commission has described with our intelligence activities indicate a strong need for reexamining the organization and execution of those activities. However, any restructuring proposal requires careful consideration. Fixing the wrong problems, or worse, fixing the right problems poorly, could cause more harm than good. Past executive reorganization authority has served as an effective tool for achieving fundamental reorganization of federal operations. As I have testified before this committee, granting executive reorganization authority to the President can better enable him to propose government designs that would be more efficient and effective in meeting existing and emerging challenges involving the intelligence community and information sharing with other entities. However, lessons learned from prior federal reorganization efforts suggest that reorganizing government can be an immensely complex activity that requires consensus on both the goals to be achieved and the process for achieving them. Prior reorganization authority has reflected a changing balance between legislative and executive roles. Periodically, between 1932 and 1984, the Congress passed legislation providing the President one form or another of expedited reorganization authority.

Congressional involvement is needed not just in the initial design of a reorganization, but also in what can turn out to be a lengthy period of implementation. The Congress has an important role to play—in both its legislative and oversight capacities—in establishing, monitoring, and maintaining progress toward the goals envisioned by government transformation and reorganization efforts. However, as the 9/11 Commission has noted, past oversight efforts in the intelligence area have been wholly inadequate. To ensure efficient and effective implementation and oversight, the Congress will also need to consider realigning its own structure. With changes in the executive branch, the Congress should adapt its own organization. For example, the Congress undertook a reexamination of its committee structure in connection with the implementation of DHS.
The DHS legislation instructed both houses of Congress to review their committee structures in light of the reorganization of homeland security responsibilities within the executive branch. Similarly, the 9/11 Commission recommends realigning congressional oversight to support its proposals to reorganize intelligence programs. The 9/11 Commission also stresses the need for stronger capabilities and expertise in intelligence and national security to support homeland security. For example, the Commission recommends rebuilding the Central Intelligence Agency’s analytical capabilities, enhancing the agency’s human intelligence capabilities, and developing a stronger language program.

We believe, Mr. Chairman, that at the center of any serious change management initiative are the people involved—people define the organization’s culture, drive its performance, and embody its knowledge base. They are the source of all knowledge, process improvement, and technological enhancement efforts. As such, a strategic human capital (or people) strategy is the critical element in maximizing government’s performance and ensuring the accountability of our intelligence community and homeland security efforts. Experience shows that failure to adequately address—and often even consider—a wide variety of people and cultural issues is at the heart of unsuccessful organizational transformations. Recognizing the “people” element in these initiatives and implementing strategies to help individuals maximize their full potential in the new environment is the key to a successful transformation of the intelligence community and related homeland security organizations. Thus, organizational transformations that incorporate strategic human capital management approaches will help to sustain agency efforts and improve the efficiency, effectiveness, and accountability of the federal government. To help, we have identified a set of practices that have been found to be central to any successful transformation effort. Committed, sustained, highly qualified, and inspired leadership, along with persistent attention by all key parties, will be essential to the successful implementation of organizational transformations if lasting changes are to be made and the challenges we are discussing today are to be effectively addressed.

It is clear that in a knowledge-based federal government, including the intelligence community, people—human capital—are the most valuable asset. How these people are organized, incented, enabled, empowered, and managed is key to the reform of the intelligence community and other organizations involved with homeland security. We have testified that federal human capital strategies are not yet appropriately constituted to meet current and emerging challenges or to drive the needed transformation across the federal government. The basic problem has been the long-standing lack of a consistent approach to marshaling, managing, and maintaining the human capital needed to maximize government performance and ensure its accountability to the people. Thus, federal agencies involved with the intelligence community and other homeland security organizations will need the most effective human capital systems to address these challenges and succeed in their transformation efforts during a period of sustained budget constraints. This includes aligning their strategic planning and key institutional performance with unit and individual performance management and reward systems.
Fortunately, the Congress has passed legislation providing many of the authorities and tools agencies need. In fact, more progress in addressing human capital challenges was made in the last 3 years than in the previous 20, and significant changes in how the federal workforce is managed are under way. For example, the Congress passed legislation providing governmentwide human capital flexibilities, such as direct-hire authority, the ability to use category rating in the hiring of applicants instead of the “rule of three,” and the creation of chief human capital officer (CHCO) positions and the CHCO Council. In addition, individual agencies—such as the National Aeronautics and Space Administration (NASA), DoD, and DHS—received flexibilities intended to help them manage their human capital strategically to achieve results. While many agencies have received additional human capital flexibilities, further flexibilities may be both needed and appropriate for the intelligence, homeland security, national defense, and selected other agencies.

While the above authorities are helpful, in order to enable agencies to rapidly meet their critical human capital needs, the Congress should consider legislation granting selected agency heads the authority to fill a limited number of positions for a stated period of time (e.g., up to 3 years) on a noncompetitive basis. The Congress has passed legislation granting this authority to the Comptroller General of the United States, and it has helped GAO address a range of critical needs in a timely, effective, and prudent manner over many years.

Recent human capital actions have significant precedent-setting implications for the rest of government. They represent progress and opportunities, but they also present legitimate concerns. We are fast approaching the point where “standard governmentwide” human capital policies and processes are neither standard nor governmentwide. As the Congress considers the need for additional human capital authorities for the intelligence community, it should keep in mind that human capital reform should avoid further fragmentation within the civil service, ensure reasonable consistency within the overall civilian workforce, and help maintain a reasonably level playing field among federal agencies in competing for talent. Importantly, this is not to delay needed reforms for any agency, but to accelerate reform across the federal government and incorporate appropriate principles and safeguards.

As the Congress considers reforms to the intelligence community’s human capital policies and practices, it should require that agencies have in place the institutional infrastructure needed to make effective use of any new tools and authorities. At a minimum, this institutional infrastructure includes a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and, importantly, a set of appropriate principles and safeguards, including reasonable transparency and appropriate accountability mechanisms, to ensure the fair, effective, credible, and nondiscriminatory implementation and application of a new system. As Chairman Kean and Vice Chairman Hamilton caution, organizational changes are just a part of the reforms needed.
The Commission rightly says that effective public policies need concrete objectives, that agencies need to be able to measure success, and that the American people are entitled to see some standards for performance so they can judge, with the help of their elected representatives, whether the objectives are being met. To comprehensively transform government to improve intelligence and homeland security efforts, we must also carefully assess and define mission needs, current capabilities, resource practicalities, and priorities. And we must implement our plans to achieve those mission needs.

The federal government is well short of where it needs to be in setting national homeland security goals, including those for intelligence and other mission areas, that focus on results—outcomes—rather than the inputs and outputs that were for so long a feature of much of the federal government’s strategic planning. We are concerned that the tenets of results management—shifting management attention from inputs, processes, and outputs to what is accomplished with them (outcomes or results)—are still elusive in homeland security goal setting and operational planning. We advocate a clear and comprehensive focus on homeland security results management, including the mission of intelligence and information sharing. Results management should have the elements to determine (1) if homeland security results are being achieved within planned time frames, (2) if investments and resources are being managed properly, (3) if results are being integrated into ongoing decision making and priority setting, and (4) what action is needed to guide future investment policies and influence behavior to achieve results. These actions go far beyond a limited focus on organizational structure. As the Gilmore Commission stated, a continuing problem for homeland security has been the lack of clear strategic guidance from the federal level about the definition and objectives of preparedness and how states and localities will be evaluated in meeting those objectives. The 9/11 Commission’s broad recommendations, if adopted, will require a thoughtful, detailed, results-oriented management approach in defining specific goals, activities, and resource requirements.

The track record for homeland security results management to date is spotty. The National Strategy for Homeland Security, issued by the administration in July 2002, was intended to mobilize and organize the nation to secure the homeland from terrorist attacks. Intelligence and warning was one of its critical mission areas. Despite the changes over the past 2 years, the National Strategy has not been updated. In general, the initiatives identified in the strategy stress activities rather than results and do not provide a baseline set of performance goals and measures upon which to assess and improve preparedness. For example, for intelligence and warning, the National Strategy identified major initiatives that are activities, such as implementing the Homeland Security Advisory System, utilizing dual-use analysis to prevent attacks, and employing “red team” techniques. Establishing clear goals and performance measures is critical to ensuring a preparedness effort that is both successful and fiscally responsible and sustainable. We are currently doing work on the extent to which federal agencies are implementing the National Strategy’s goals. Senator Lieberman has recently introduced legislation requiring executive branch efforts to produce a national homeland security strategy.
We support the concept of a legislatively required strategy that can be sustained across administrations and provides a framework for congressional oversight. Before the administration’s National Strategy for Homeland Security was issued, we had stated that the strategy should include steps designed to (a) reduce our vulnerability to threats; (b) use intelligence assets and other broad-based information sources to identify threats and share information as appropriate; (c) stop incidents before they occur; (d) manage the consequences of an incident; and (e) in the case of terrorist attacks, respond by all means available, including economic, diplomatic, and military actions that, when appropriate, are coordinated with other nations. Earlier this year we provided a set of desirable characteristics for any effective national strategy that could better focus national homeland security decision making and increase the emphasis on outcomes.

Strategic planning is critical to provide mission clarity, establish long-term performance strategies and goals, direct resource decisions, and guide transformation efforts. In this context, we are reviewing DHS’s strategic planning efforts. Our work includes a review of the manner in which the department’s planning efforts support the National Strategy for Homeland Security and the extent to which its strategic plan reflects the requirements of the Government Performance and Results Act of 1993. DHS’s planning efforts are evolving. The current published DHS strategic plan contains vague strategic goals and objectives for all its mission areas, including intelligence, and little specific information to guide congressional decision making. For example, the strategic plan includes an overall goal to identify and understand threats, assess vulnerabilities, determine potential impacts, and disseminate timely information to DHS’s homeland security partners and the American public. That goal has very general objectives, such as gathering and fusing all terrorism-related intelligence and analyzing and coordinating access to information related to potential terrorist or other threats. Discussion of annual goals is missing, and the supporting descriptions of means and strategies are vague, making it difficult to determine whether they are sufficient to achieve the objectives and overall goals. These and related issues will need to be addressed as the DHS planning effort moves forward.

In another effort to set expectations, the President, through Homeland Security Presidential Directive 8, has tasked the Department of Homeland Security with establishing measurable readiness priorities and targets that appropriately balance the potential threat and magnitude of terrorist attacks, major disasters, and other emergencies with the resources required to prevent, respond to, and recover from them. The task also includes readiness metrics and elements supporting the national preparedness goal, including standards for preparedness assessments and strategies, and a system for assessing the nation’s overall preparedness to respond to major events, especially those involving acts of terrorism. Those taskings have yet to be completed; when they are, they will have to address the following questions: What are the appropriate national preparedness goals and measures? What are appropriate subgoals for specific areas such as critical infrastructure sectors?
Do these goals and subgoals take into account other national goals, such as economic security, or the priority objectives of the private sector and other levels of government? Who should be accountable for achieving the national goals and subgoals? How would a national results management and measurement system be crafted, implemented, and sustained for the national preparedness goals? How would such a system affect needs assessment and be integrated with funding and budgeting processes across the many organizations involved in homeland security?

However, even if we have a robust and viable national strategy for homeland security, a DHS strategic plan, and national preparedness goals, the issue of implementation remains. Without effective accountability and oversight, implementation cannot be assured, nor can corrective action be taken if we are not getting the results we want. The focus for homeland security must be on constantly staying ready and prepared for unknown threats and on paying attention to improving performance.

In addition to continuing our ongoing work in major homeland security mission areas such as border and transportation security and emergency preparedness, GAO can help the Congress more effectively oversee the intelligence community, and any changes should, in our view, provide for an appropriate role for GAO. With some exceptions, GAO has broad-based authority to conduct reviews relating to various intelligence agencies. However, because of historical resistance from the intelligence agencies and the general lack of support from the intelligence committees in the Congress, GAO has done limited work in this community over the past 25 years. Within the past 2 years, however, we have done a considerable amount of work in connection with the FBI and its related transformational efforts. In addition, GAO has recently had some interaction with the Defense Intelligence Agency in connection with its transformation efforts. Furthermore, GAO has conducted extensive work on a wide range of government transformation and homeland security issues over the past several years. As always, we stand ready to offer GAO’s assistance in support of any of the Congress’ oversight needs.

In conclusion, on the basis of GAO’s work in both the public and private sectors over many years, and my own change management experience, it is clear to me that many of the challenges the intelligence community faces are similar or identical to the transformation challenges applicable to many other federal agencies, including GAO. Specifically, while the intelligence agencies are in a different line of business than other federal agencies, they face the same challenges when it comes to strategic planning and budgeting, organizational alignment, human capital strategy, and the management of information technology, finances, knowledge, and change. For the intelligence community, effectively addressing these basic business transformation challenges will require action along five key dimensions: structure, people, process, technology, and partnerships. It will also require a rethinking and cultural transformation in connection with intelligence activities in both the executive branch and the Congress. With regard to the structure dimension, there are many organizational units within the executive branch and the Congress with responsibilities in the intelligence and homeland security areas.
Basic organizational and management principles dictate that, absent a clear and compelling need for competition or checks and balances, there is a need to minimize the number of entities and levels involved in key decision making, oversight, and other related activities. In addition, irrespective of how many units and levels are involved, someone has to be in charge of all key planning, budgeting, and operational activities. One person should be responsible and accountable for all key intelligence activities within the executive branch, and that person should report directly to the President. To be effective, this position must also have substantive strategic planning, budget, operational integration, and accountability responsibilities and opportunities for the intelligence community. In addition, this person should be appointed by the President and confirmed by the Senate to help facilitate success and ensure effective oversight. With regard to the oversight structure of the Congress, the 9/11 Commission noted that there are numerous players involved in intelligence activities and yet not enough effective oversight is being done. As a result, a restructuring of intelligence and homeland security-related activities in the Congress is also needed. In this regard, it may make sense to separate responsibility for intelligence activities from personal privacy and individual liberty issues in order to ensure that needed attention is given to both while providing for a check and balance between these competing interests.

With regard to the people dimension, any entity is only as good as its people, and as I stated earlier, the intelligence community is no exception. In fact, since the intelligence community is in the knowledge business, people are of vital importance. The people challenge starts at the top, and key leaders must be both effective and respected. In addition, they need to stay in their positions long enough to make a real and lasting difference. In this regard, while the FBI director has a 10-year term appointment, most agency heads serve at the pleasure of their appointing official and may serve only a few years in their respective positions. This is a problem when an agency is in need of a cultural transformation, such as that required in the intelligence community, which typically takes at least 5 to 7 years to effectuate. In addition to having the right people and the right “tone at the top,” agencies need to develop and execute workforce strategies and plans to help ensure that they have the right people with the right skills in the required numbers to accomplish their missions. Many of these missions have changed in the post-Cold War and post-September 11 world. This is especially critical for certain skills that are in short supply, such as information technology expertise and proficiency in certain languages, such as Arabic. In addition, as the 9/11 Commission and others have noted, it is clear that additional steps are necessary to strengthen our human intelligence capabilities.

With regard to the process and technology dimensions, steps need to be taken to streamline and expedite the processes used to analyze and disseminate the tremendous amount of intelligence and other information available to the intelligence community. This will require extensive use of technology to sort and distribute information both within agencies and between agencies and other key players in various sectors, both domestically and internationally, as appropriate.
The 9/11 Commission and others have noted various deficiencies in this area, such as the FBI’s information technology development and implementation challenges. At the same time, some successes have occurred during the past 2 years that address process and technology concerns. For example, the Terrorist Screening Center, created under Homeland Security Presidential Directive 6, is intended to help consolidate the federal government’s approach to terrorism screening. This center has taken a number of steps to address various organizational, technological, integration, and other challenges, and it may serve as a model for other needed intra- and interorganizational efforts.

With regard to partnerships, it has always been difficult to create an environment of shared responsibility, shared resources, and shared accountability for achieving difficult missions. Effective partnerships require a shared vision, shared goals, and shared trust in meeting agreed-upon responsibilities. Partnerships also mean that power is shared. Too often we have seen both public and private sector organizations where the term “partnership” is often voiced, but the reality is more a jockeying for dominance or control over the “partner.” The end result is that resources are not shared, the shared mission is never complete or adequate, and opportunities for true strategic alliance are squandered. In the intelligence arena, we know the potential end result is failure for the nation.

With regard to the cultural dimension, this is both the softest and the hardest to deal with. By the softest, I mean that it involves the attitudes and actions of people and entities. By the hardest, I mean that changing long-standing cultures can be a huge challenge, especially if the efforts involve organizational changes in order to streamline, integrate, and improve related capabilities and abilities. This includes both execution and oversight-related activities. As the 9/11 Commission and others have noted, such a restructuring is needed in both the executive branch and the Congress. This will involve taking on the vested interests of many powerful players, and as a result, it will not be easy, but it may be essential, especially if we expect to go from a “need to know” to a “need to share” approach. As I have often said, addressing such issues takes patience, persistence, perspective, and pain before you prevail. Such is the case with many agency transformational efforts, including those within our own GAO. However, given the challenges and dangers that we face in the post-9/11 world, we cannot afford to wait much longer. The time for action is now.

“There will never be an end point in America’s readiness. Enemies will change tactics, citizens’ attitudes about what adjustments in their lives they will be willing to accept will evolve, and leaders will be confronted with legitimate competing priorities that will demand attention…. In the end, America’s response to the threat of terrorism will be measured by how we manage risk. There will never be a 100% guarantee of security for our people, the economy, and our society. We must resist the urge to seek total security—it is not achievable and drains our attention from those things that can be accomplished.”

Managing risk is not simply about putting new organizations in place. It requires us to think about what must be protected, define an acceptable level of risk, and target limited resources while keeping in mind that the related costs must be affordable and sustainable.
Perhaps more important, managing risk requires us to operate constantly under conditions of uncertainty, where foresight, anticipation, responsiveness, and radical adaptation are vital capabilities. We can and we must enhance and integrate our intelligence efforts as suggested by the 9/11 Commission to significantly improve information sharing and analysis. Several models to achieve this result exist and, despite the unique missions of the intelligence community, they can readily be adapted to guide this transformation. We at GAO stand ready to engage constructively with the intelligence community and to share our significant government transformation and management knowledge and experience in order to help members of the community help themselves in the needed transformation efforts. We also stand ready to help the Congress enhance its oversight activities over the intelligence community, which, in our view, are an essential element of an effective transformation approach. In this regard, we have the people with the skills, experience, knowledge, and clearances to make a big difference for the Congress and the country.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of your committee may have at this time.

For information on this testimony, please contact Randall Yim at (202) 512-6787 or yimr@gao.gov.
The sorrow, loss, anger, and resolve so evident immediately following the September 11, 2001, attacks have combined in an effort to help ensure that our country will never again be caught unprepared. As the 9/11 Commission notes, we are safer today but we are not safe, and much work remains. Although in today’s world we can never be 100 percent secure and can never do everything everywhere, we concur with the Commission’s conclusion that the American people should expect their government to do its very best.

GAO’s mission is to help the Congress improve the performance and ensure the accountability of the federal government for the benefit of the American people. GAO has been actively involved in improving government’s performance in the critically important homeland security area both before and after the September 11 attacks. The House Committee on Government Reform has asked GAO to address two issues: the lack of effective information sharing and analysis and the need for executive branch reorganization in response to the 9/11 Commission recommendations. Further, the Committee has asked GAO to address how to remedy problems in information sharing and analysis by transforming the intelligence community from a system of “need to know” to one of “need to share.”

The 9/11 Commission has recommended several transformational changes, such as the establishment of a National Counterterrorism Center (NCTC) for joint operational planning and joint intelligence and the replacement of the current Director of Central Intelligence with a National Intelligence Director (NID) to oversee national intelligence centers across the federal government. The NID would manage the national intelligence program and oversee the agencies that contribute to it. On August 2, 2004, the President asked Congress to create a NID position to serve as the principal intelligence advisor, appointed by the President with the advice and consent of the Senate and serving at the pleasure of the President. Unlike the 9/11 Commission, the President did not propose that the NID be within the Executive Office of the President. He also announced that he will establish an NCTC whose director would report to the NID and that this center would build upon the analytic work of the existing Terrorist Threat Integration Center. He suggested that a separate center might be necessary for issues of weapons of mass destruction. Finally, he endorsed the 9/11 Commission’s call for reorganization of the congressional oversight structure.

There are, however, several substantive differences between the President’s proposal and the Commission’s recommendations. While praising the work of the 9/11 Commission and endorsing several of its major recommendations in concept, the President differed with the Commission on certain issues. These differences reflect that reasoned and reasonable individuals may differ and that several methods may exist to effectuate the transformational changes recommended. However, certain common principles and factors outlined in this statement should help guide the debate ahead. Although the creation of a NID and an NCTC would be major changes for the intelligence community, other structural and management changes have occurred and are continuing to occur in government that provide lessons for the intelligence community’s transformation.
While the intelligence community has historically been addressed separately from the remainder of the federal government, and while it undoubtedly performs some unique missions that present unique issues, its major transformational challenges are in large measure the same as those that face most government agencies. As a result, GAO’s findings, recommendations, and experience in reshaping the federal government to meet twenty-first century challenges will be directly relevant to the intelligence community and the recommendations proposed by the 9/11 Commission. The goal of improving information sharing and analysis, with a focus on the needs of the consumers of such improved information for specific types of threats, can provide one of the powerful guiding principles necessary for successful transformation.

This testimony covers four major points. First, it describes the rationale for improving information sharing and analysis and suggests some ways to achieve positive results. Second, it provides some overview perspectives on reorganizational approaches to improving performance and notes necessary cautions. Third, it illustrates that strategic human capital management must be the centerpiece of any serious change management initiative or any effort to transform the cultures of government agencies, including the intelligence community. Finally, it emphasizes the importance of results-oriented strategic planning and implementation in the intelligence arena, focusing management attention on outcomes, not outputs, and the need for effective accountability and oversight to maintain focus on improving performance. It concludes by applying these concepts and principles to the challenges of reform in the intelligence community.
The U.S. government maintains more than 250 diplomatic posts overseas (embassies, consulates, and other diplomatic offices) with approximately 60,000 personnel representing more than 50 government agencies and subagencies. The Departments of Defense and State together account for more than two-thirds of American personnel overseas under chief of mission authority—36 percent and 35 percent, respectively. The costs of maintaining staff overseas vary by agency but, as OMB has reported, they are generally high. The Deputy Director of OMB recently testified that the average annual cost of having one full-time, direct-hire American family of four in a U.S. embassy is $339,100.

Following the 1998 embassy bombings, two high-level independent groups called for a reassessment of overseas staffing levels. The Accountability Review Boards, which sent two teams to the region to investigate the bombings, concluded that the United States should consider adjusting the size of its overseas presence to reduce security vulnerabilities. Following the Accountability Review Boards’ report, the Overseas Presence Advisory Panel (OPAP) concluded that some embassies were disproportionately sized and needed staff adjustments to adapt to new foreign policy priorities and reduce security vulnerabilities. The panel recommended creating a permanent interagency committee to develop a methodology for determining the appropriate size and locations of the U.S. overseas presence. OPAP also suggested a series of actions to adjust overseas presence, including relocating some functions to the United States and to regional centers where feasible. However, the State-led interagency committee that was established to respond to OPAP’s recommendations did not produce a standard rightsizing methodology. As we previously reported, the committee did not spend sufficient time at overseas locations to fully assess workload issues or consider alternative ways of doing business. To move the issue forward, in August 2001, the President’s Management Agenda identified rightsizing as one of the administration’s priorities. In addition, the President’s fiscal year 2003 international affairs budget (1) highlighted the importance of making staffing decisions on the basis of mission priorities and costs and (2) directed OMB to analyze agencies’ overseas staffing and operating costs (see app. I for a summary of previous rightsizing initiatives).

Although there is general agreement on the need for rightsizing the U.S. overseas presence, there is no consensus on how to do it. As a first step, we developed a framework that includes a set of questions to guide decisions on overseas staffing (see app. II for the set of questions). We identified three critical elements that should be systematically evaluated as part of this framework: (1) the physical/technical security of facilities and employees, (2) mission priorities and requirements, and (3) the cost of operations. If the evaluation shows problems, such as security risks, decision makers should then consider the feasibility of rightsizing options, including relocating staff or downsizing. On the other hand, evaluations of agencies’ priorities may indicate a need for additional staff at embassies or greater external support from other locations. Figure 1 illustrates the framework’s elements and options. State and other agencies in Washington, D.C., including OMB, could use this framework as a guide for making overseas staffing decisions.
For example, ambassadors could use this framework to ensure that embassy staffing is in line with security concerns, mission priorities and requirements, and cost of operations. At the governmentwide level, State and other agencies could apply the framework to free up resources at oversized posts, reallocate limited staffing resources worldwide, and introduce greater accountability into the staffing process. The following sections describe in more detail the three elements of our framework, examples of key questions to consider for each element, and potential rightsizing options. We also include examples of how the questions in the framework were useful for examining rightsizing issues at the U.S. embassy in Paris. The substantial loss of life caused by the bombings of the U.S. embassies in Africa and the ongoing threats against U.S. diplomatic buildings have heightened concern about the safety of our overseas personnel. State has determined that about 80 percent of embassy and consulate buildings do not fully meet security standards. Although State has a multibillion-dollar plan under way to address security deficiencies around the world, security enhancements cannot bring most existing facilities in line with the desired setback—the distance from public thoroughfares—and related blast protection requirements. Recurring threats to embassies and consulates highlight the importance of rightsizing as a tool to minimize the number of embassy employees at risk. The Accountability Review Boards recommended that the Secretary of State review the security of embassies and consider security in making staffing decisions. We agree that the ability to protect personnel should be a key factor in determining embassy staffing levels. State has prepared a threat assessment and security profile for each embassy, which can be used when assessing staff levels. While chiefs of mission and State have primary responsibility for assessing overseas security needs and allocating security resources, all agencies should consider the risks associated with maintaining staff overseas. The Paris embassy, our case study, illustrates the importance of facility security in determining staffing levels. As at many posts, the facilities in Paris predate current security standards. The Department of State continues to mitigate security limitations by using a variety of physical and technical security countermeasures. That said, none of the embassy's office buildings meets current standards. The placement and composition of staff overseas must reflect the highest-priority goals of U.S. foreign policy. Moreover, the President's Management Agenda states that U.S. interests are best served by ensuring that the federal government has the right number of people at the right locations overseas. Currently, there is no clear basis on which to evaluate an embassy's mission and priorities relative to U.S. foreign policy goals. State's fiscal year 2000-2002 Mission Performance Plan (MPP) process does not require embassies to differentiate among U.S. strategic goals by relative importance. The Chairman of OPAP testified in May 2002 that no adequate system exists to match the size and composition of the U.S. presence in a given country to the embassy's priorities. Currently, it is difficult to assess whether 700 people are needed at the Paris embassy. For example, the fiscal year 2000-2002 MPP includes 15 of State's 16 strategic goals, and overall priorities are neither identified nor systematically linked to resources.
In recent months, State has revised the MPP process to require each embassy to set five top priorities and link staffing and budgetary requirements to fulfilling these priorities. A successful delineation of mission priorities will complement our rightsizing framework and support future rightsizing efforts to adjust the composition of embassy staff. Embassy workload requirements include influencing policy of other governments, assisting Americans abroad, articulating U.S. policy, handling official visitors, and providing input for various reports and requests from Washington. In 2000, on the basis of a review of six different U.S. embassies, the State-led interagency committee found the perception that Washington’s requirements for reports and other information requests were not prioritized and placed unrealistic demands on staff. We also found this same perception among some offices in Paris. Scrutiny of workload requirements could potentially identify work of low priority such as reporting that has outlived its usefulness. Currently, State monitors and sends incoming requests for reports and inquiries to embassies and consulates, but it rarely refuses requests and leaves the prioritization of workload to the respective embassies and consulates. Washington’s demands on an embassy need to be evaluated in light of how they affect other work requirements and the number of staff needed to meet these requirements. For example, the economics section in Paris reported that Washington-generated requests resulted in missed opportunities for assessing how U.S. private and government interests are affected by the many ongoing changes in the European banking system. The President’s Management Agenda states that there is no mechanism to assess the overall rationale for and effectiveness of where and how many U.S. employees are deployed overseas. Each agency in Washington has its own criteria for assigning staff to U.S. embassies. Some agencies have more flexibility than others in placing staff overseas, and Congress mandates the presence of others. Thorough staffing criteria are useful for determining and reassessing staffing levels and would allow agencies to better justify the number of overseas staff. We found that the criteria to locate staff in Paris vary significantly by agency. Some agencies use detailed staffing models, but most do not. Furthermore, they do not fully consider embassy priorities or the overall workload requirements on the embassy in determining where and how many staff are necessary. Some agencies are entirely focused on the host country, while others have regional responsibilities or function almost entirely outside the country in which they are located. Some agencies have constant interaction with the public, while others require interaction with their government counterparts. Some agencies collaborate with other agencies to support the embassy’s mission, while others act more independently and report directly to Washington. Analyzing where and how agencies conduct their business overseas may lead to possible rightsizing options. For example, the mission of the National Science Foundation involves interaction with persons throughout Europe and Eurasia and therefore raises the question of whether it needs Paris-based staff. The President’s Management Agenda noted that the full costs of sending staff overseas are unknown. The Deputy Director of OMB testified that there is a wide disparity among agencies’ reported costs for a new position overseas. 
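Comprehensive cost data would mean capturing the same categories of operating cost for every agency at a post and rolling them up to an embassy-wide total. The sketch below illustrates one way such a consolidated template could be structured; the agencies, categories, and dollar figures are hypothetical, not data from our Paris case study:

```python
# Hypothetical per-agency operating-cost template for a single embassy.
# Figures are illustrative placeholders, stated in millions of dollars.

COST_CATEGORIES = ["salaries", "benefits", "housing", "office_support"]

template = {
    "State":    {"salaries": 30.0, "benefits": 9.0, "housing": 7.5, "office_support": 4.0},
    "Defense":  {"salaries": 12.0, "benefits": 3.5, "housing": 3.0, "office_support": 1.5},
    "Commerce": {"salaries":  4.0, "benefits": 1.2, "housing": 1.0, "office_support": 0.6},
}

def agency_total(costs):
    return sum(costs[category] for category in COST_CATEGORIES)

for agency, costs in template.items():
    print(f"{agency:<10} ${agency_total(costs):5.1f}M")
print(f"{'Total':<10} ${sum(agency_total(c) for c in template.values()):5.1f}M")
```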
Without comprehensive cost data, decision makers cannot determine the correlation between costs and the work being performed, nor can they assess the short- and long-term costs associated with feasible business alternatives. We agree with the President's Management Agenda that staffing decisions need to include a full range of factors affecting the value of U.S. presence in a particular country, including the costs of operating the embassy. However, we found no mechanism to provide the ambassador and other decision makers with comprehensive data on all agencies' costs of operations at an embassy. This lack of consolidated cost data for individual embassies makes it impossible to link costs to staffing levels, embassy priorities, and desired outcomes. This is a long-standing management weakness that, according to the President, needs to be corrected. Our work in Paris demonstrates that the embassy is operating without fundamental knowledge, or use, of comprehensive cost data. State officials concurred that it is difficult to fully record the cost of all agencies overseas because of inconsistent accounting and budgeting systems. Nevertheless, we were able to document an estimated total cost of more than $100 million for all agencies operating in France in fiscal year 2001. To do this, we developed a template in consultation with State and OMB to capture different categories of operating costs, such as salaries and benefits, and applied the template to each agency at the embassy. Once costs are known, it is important to relate them to the embassy's performance. This will allow decision makers to (1) assess the relative cost-effectiveness of various program and support functions and (2) make cost-based decisions when setting mission priorities and staffing levels and determining the feasibility of alternative business approaches. With comprehensive data, State and other agencies could make cost-based decisions at the embassy level as well as on a global basis. Analyses of security, mission, and cost may suggest the need for more or fewer staff at an embassy or an adjustment to the overall staff mix. Independent analysis of each element can lead to changes. However, all three elements of the framework need to be considered together to make reasonable decisions regarding staff size. For example, if the security element is considered in isolation and existing facilities are deemed highly vulnerable, managers may first consider adding security enhancements to existing buildings; working with host country law enforcement agencies to increase embassy protection; reconfiguring existing space to accommodate more people in secure space; and leasing, purchasing, or constructing new buildings. However, consideration of all elements of the framework may suggest additional means for reducing security vulnerabilities, such as reducing the total number of staff. Our framework encourages consideration of a full range of options along with the security, mission, and cost trade-offs. Our framework is consistent with the views of rightsizing experts who have recommended that embassies consider alternative means of fulfilling mission requirements. For example, OPAP concluded that staff reductions should be considered as a means of improving security, and the Chairman of OPAP, in May 2002 testimony, supported eliminating some functions or performing them from regional centers or the United States. Moreover, President Bush has told U.S. ambassadors that "functions that can be performed by personnel in the U.S.
or at regional offices overseas should not be performed at a post." Our analysis highlights five possible rightsizing options to carry out these goals, but this list is not exhaustive. These suggested options include
1. relocating functions to the United States,
2. relocating functions to regional centers,
3. relocating functions to other locations under chief of mission authority where relocation back to the United States or to regional centers is not practical,
4. purchasing services from the private sector, and
5. changing business practices.
Our case study at the Paris embassy illustrates the applicability of these options, which have the potential to reduce the number of vulnerable staff in the embassy buildings. These options may be applicable to as many as 210 positions in Paris. The work of about 120 staff could be relocated to the United States—State already plans to relocate the work of more than 100 of these employees. In addition, the work of about 40 other positions could be handled from other locations in Europe, while more than 50 other positions are commercial in nature and provide services that are available in the private sector. For example:
Some functions at the Paris embassy could be relocated to the United States. State is planning to relocate more than 100 budget and finance positions from the Financial Services Center in Paris to State's financial center in Charleston, South Carolina, by September 2003. In addition, we identified other agencies that perform similar financial functions whose work could probably be relocated as well. For example, four Voice of America staff provide payroll services to correspondent bureaus and freelance reporters around the world and would benefit from collocation with State's Financial Services Center.
The Paris embassy could potentially relocate some functions to the regional logistics center in Antwerp, Belgium, and the planned 23-acre secure regional facility in Frankfurt, Germany, which has the capacity for approximately 1,000 people. The Antwerp facility could handle part of the embassy's extensive warehouse operation, which is currently supported by about 25 people. In addition, some administrative operations at the embassy, such as procurement, could potentially be handled out of the Frankfurt facility. Furthermore, staff at agencies with regional missions could also be moved to Frankfurt. These staff include a National Science Foundation representative who spent approximately 40 percent of his time in 2001 outside of France; four staff who provide budget and finance support to embassies in Africa; and some Secret Service agents who cover eastern Europe, central Asia, and parts of Africa.
There are additional positions in Paris that may not need to be in the primary embassy buildings, where secure space is at a premium. The primary function of the National Aeronautics and Space Administration representative is to act as a liaison to European space partners. Accomplishing this work may not require retaining office space at the embassy. In fact, the American Battle Monuments Commission has already established a precedent for this, housing about 25 staff in separate office space in a suburb of Paris. In addition, a Department of Justice official works in an office at the French Ministry of Justice. However, dispersing staff raises additional security issues that need to be considered.
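The arithmetic behind the 210-position estimate is simply the sum of the three groups just described; a quick tally using the approximate counts cited above:

```python
# Approximate Paris position counts cited in this report, by option.
candidates = {
    "relocate to the United States": 120,       # State already plans >100
    "handle from other European locations": 40,
    "purchase from the private sector": 50,     # commercial-type positions
}
for option, positions in candidates.items():
    print(f"{option}: about {positions} positions")
print(f"total: about {sum(candidates.values())} positions")  # about 210
```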
Given Paris's modern transportation and communication links and large private-sector service industry, the embassy may be able to purchase services from the private sector, which would reduce the number of full-time staff at risk at the embassy if the services can be performed from another location. We identified as many as 50 positions at the embassy that officials in Washington and Paris agreed are commercial in nature, including painters, electricians, plumbers, and supply clerks. Reengineering business functions could help reduce the size of the Paris embassy. Consolidating inventories at the warehouse could decrease staff workload. For instance, household appliances and furniture are maintained separately by agency, with different warehouse staff responsible for different inventories. Purchasing furniture locally for embassies such as Paris could also reduce staffing and other support requirements. Advances in technology, increased use of the Internet, and more flights from the United States may reduce the need for certain full-time permanent staff overseas. Moreover, we have identified opportunities to streamline or reengineer embassy functions to improve State's operations and reduce administrative staffing requirements, particularly in western Europe, through measures that would reduce residential housing and furniture costs. We reported in March 2001 that State has a number of outmoded and inefficient business processes. Our cost analyses of the U.S. embassy's housing office in Brussels and the housing support function at the U.S. embassy in London illustrated how reengineering could potentially result in significant savings. To implement the President's Management Agenda, OMB and State have indicated that they plan to assess staffing requirements, costs, and options at embassies in Europe and Eurasia. As part of this effort, they are attempting to identify staff who could be relocated to the planned regional facility in Frankfurt. Applying our framework in this effort would provide a systematic means of assessing staff levels and considering embassy costs and relocation and other rightsizing options. Furthermore, OMB and State have other initiatives under way that will make it easier to use the framework in the future. For example, to make it easier to consider the costs of the U.S. overseas presence, OMB is gathering data on overseas costs for each agency and the costs of establishing new positions, and is assessing the process by which agencies request funding to assign additional staff overseas. To help assess mission priorities and workload, OMB and State are reviewing how embassies have implemented the revised MPPs, which are designed to more clearly set priorities, and how these plans could be used to determine allocation of embassy resources. We plan to monitor OMB's progress in implementing the rightsizing initiative and work with it to incorporate comprehensive cost data into the overseas staffing process. Our rightsizing framework was designed to allow decision makers to systematically link embassy staffing levels and requirements to three critical elements of embassy operations—physical security, mission priorities and requirements, and cost. Using our framework's common set of criteria for making staffing assessments and adjustments would be an important step toward establishing greater accountability and transparency in the overseas staffing process.
The key questions of the framework will help decision makers identify the most important factors affecting an embassy’s staffing levels and consider rightsizing options to either add or reduce staff or adjust the staff mix. Rightsizing experts told us that the framework appears applicable to all embassies. Although we have tested it only at the U.S. embassy in Paris and are in the process of refining it, we too believe that the framework can provide guidance for executive branch rightsizing exercises at other embassies. To facilitate the use of a common set of criteria for making staff assessments and adjustments at overseas posts and encourage decision makers to consider security, mission priorities and requirements, and costs, we recommend that the Director of the Office of Management and Budget ensure that our framework is used as a basis for assessing staffing levels in the administration’s rightsizing initiative, starting with its assessments of staffing levels and rightsizing options at U.S. embassies in Europe and Eurasia. OMB and State provided written comments on a draft of this report (see apps. III and IV). OMB said that it appreciated our efforts to develop a rightsizing framework. OMB agreed with the framework’s key elements and options and plans to build upon the framework in examining staffing at all posts within the European and Eurasia Bureau. However, OMB expressed concern regarding whether the GAO methodology can be uniformly applied at all posts worldwide. Nonetheless, OMB noted that it looks forward to working with us and the State Department in using the framework as a starting point to develop a broader methodology that can be applied worldwide. State said that it welcomed our work in developing a framework for rightsizing. State noted the difficulties of previous efforts to develop a methodology, including attempts by the Overseas Presence Advisory Panel and a State-led interagency rightsizing committee. It stated that it has taken steps to regionalize responsibilities in the United States and overseas where appropriate. In addition, State provided technical comments that we have incorporated into this report, as appropriate. To develop the elements of the rightsizing framework and corresponding checklist of suggested questions, we analyzed previous reports on overseas staffing issues, including those of the Accountability Review Boards, OPAP, and the State-led interagency rightsizing committee. We interviewed officials from OMB to discuss the administration’s current rightsizing initiatives in relation to the President’s Management Agenda. We discussed embassy staffing with rightsizing experts, including the Chairman of OPAP and the current and former Undersecretary of State for Management. We also interviewed officials from the Departments of State, Defense, the Treasury, Commerce, Justice, and Agriculture as well as officials from other agencies with personnel in France. To further develop and test the framework, we conducted a case study at the U.S. embassy in Paris. To assess embassy security, we reviewed security reports, interviewed security experts, and made direct observations. To assess missions’ priorities and requirements, we interviewed and collected data from the U.S. Ambassador to France, the Deputy Chief of Mission, and other high-ranking embassy officials as well as officials from more than 35 sections at the Paris embassy. 
We also interviewed agency officials in Washington, D.C., and in Paris to determine the criteria used by agencies to set staffing levels at the Paris embassy. To assess costs, we interviewed budget and financial management officials from State and collected data on the different categories of operating costs, such as salaries and benefits, from each agency with staff assigned to the Paris embassy. To determine the feasibility of rightsizing actions, we collected and analyzed data associated with (1) relocating certain functions to the United States, regional centers in Europe, or other locations in France and (2) outsourcing or streamlining some functions. We visited State's regional logistics and procurement offices in Antwerp, Belgium, and Frankfurt, Germany, which have been considered as options for expanded regional operations in Europe. To determine if opportunities exist to outsource functions, we collected and analyzed data on the business and staffing practices of Paris-based businesses, other U.S. embassies in western Europe, and other bilateral diplomatic missions in Paris. We conducted our work between September 2001 and May 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested Members of Congress. We are also sending copies of this report to the Director of OMB and the Secretary of State. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-4128. Another GAO contact and staff acknowledgments are listed in appendix V.

Physical/technical security of facilities and employees
- What is the threat and security profile of the embassy?
- Has the ability to protect personnel been a factor in determining staffing levels at the embassy?
- To what extent are existing office buildings secure? Is existing space being optimally utilized?
- Have all practical options for improving the security of facilities been considered?
- Do issues involving facility security put the staff at an unacceptable level of risk or limit mission accomplishment?
- Do security vulnerabilities suggest the need to reduce or relocate staff?

Mission priorities and requirements
- What are the staffing levels and mission of each agency? How do agencies determine embassy staffing levels?
- Is there an adequate justification for the number of employees at each agency compared with the agency's mission?
- Is there adequate justification for the number of direct-hire personnel devoted to support and administrative operations?
- What are the priorities of the embassy? Does each agency's mission reinforce embassy priorities?
- To what extent are mission priorities not being sufficiently addressed due to staffing limitations or other impediments?
- To what extent are workload requirements validated and prioritized, and is the embassy able to balance them with core functions?
- Do the activities of any agencies overlap?
- Given embassy priorities and the staffing profile, are increases in the number of existing staff or additional agency representation needed?
- To what extent is it necessary for each agency to maintain its current presence in country, given the scope of its responsibilities and its mission?
  - Could an agency's mission be pursued in other ways?
  - Does an agency have regional responsibilities, or is its mission entirely focused on the host country?

Cost of operations
- What is the embassy's total annual operating cost?
- What are the operating costs for each agency at the embassy?
- To what extent are agencies considering the full cost of operations in making staffing decisions?
- To what extent are costs commensurate with overall embassy strategic importance, with agency programs, and with specific products and services?

Rightsizing options
- What are the security, mission, and cost implications of relocating certain functions to the United States, regional centers, or other locations, such as commercial space or host country counterpart agencies?
- To what extent could agency program and/or routine administrative functions (procurement, logistics, and financial management) be handled from a regional center or other locations?
- Do new technologies and transportation links offer greater opportunities for operational support from other locations?
- Do the host country and regional environments suggest there are options for doing business differently; that is, are there adequate transportation and communications links and a vibrant private sector?
- To what extent is it practical to purchase embassy services from the private sector?
- Does the ratio of support staff to program staff at the embassy suggest opportunities for streamlining? Can functions be reengineered to provide greater efficiencies and reduce requirements for personnel?
- Are there best practices of other bilateral embassies or private corporations that could be adapted by the U.S. embassy?
- To what extent are there U.S. or host country legal, policy, or procedural obstacles that may affect the feasibility of rightsizing options?

The following are GAO's comments on the Department of State's letter dated July 9, 2002.
1. We did not set priorities for the elements in the framework that appear in this report. As we state on page 9, decision makers need to consider all three elements of the framework together to make reasonable decisions regarding staff size.
2. In the mission priorities and requirements section, the framework includes the question, "To what extent is it necessary for each agency to maintain its current presence in country?" The amount of time that officials spend in country is a key factor needed to answer the question, and in this case the location of the National Science Foundation's representative in Paris warranted further analysis as a possible candidate for rightsizing. The mandate of the National Science Foundation representative is to communicate with bilateral and multilateral counterpart agencies in more than 35 countries in Europe and Eurasia. The representative stated that he could do his job from any location in Europe, as long as he has high-speed Internet connectivity. Given security limitations at facilities in Paris and the availability, in the near future, of secure space in Frankfurt, Germany, decision makers should consider these types of positions for relocation.
In addition to the person named above, David G. Bernet, Janey Cohen, Chris Hall, Katie Hartsburg, Lynn Moore, and Melissa Pickworth made key contributions to this report.
There have been recurring calls to evaluate and realign, or "rightsize," the number and location of staff at U.S. embassies and consulates and to consider staff reductions to reduce security vulnerabilities. The Office of Management and Budget is implementing a rightsizing initiative by analyzing the U.S. overseas presence and reviewing the staffing allocation process. This report presents a systematic approach for assessing overseas workforce size and identifying rightsizing options, both at the embassy level and for related decisions worldwide. GAO's framework links staffing levels to the following three critical elements of overseas diplomatic operations: (1) physical/technical security of facilities and employees, (2) mission priorities and requirements, and (3) cost of operations. Unlike an analysis that considers the elements in isolation, GAO's rightsizing framework encourages consideration of a full range of options, along with the security, mission, and cost trade-offs. Policy makers could use this information to decide whether to add or reduce staff, or change the staff mix, at an embassy.
The Organization for the Prohibition of Chemical Weapons consists of three entities: the Conference of the States Parties, the Executive Council, and the Technical Secretariat. The Conference of the States Parties currently comprises 147 representatives, one from each member state, and oversees the implementation of the convention. The Executive Council, consisting of 41 representatives from regionally distributed member states, meets in sessions throughout the year to supervise the Secretariat's activities. The Secretariat, headed by the Director-General, manages the organization's daily operations, including implementing the inspection measures of the convention and preparing the organization's annual budgets and reports. About 60 percent of the Secretariat's authorized staff level of 507 employees engages in the inspection-related activities mandated under Articles IV, V, and VI of the convention. Specifically, to verify compliance with Article IV, the Secretariat inspects declared chemical weapons stocks and destruction facilities. To verify compliance with Article V, it inspects and monitors the destruction and conversion of chemical weapons production facilities. Under Article VI of the convention, the Secretariat inspects commercial production facilities. As of July 2002, the organization had conducted 1,210 inspections at the 5,066 declared chemical weapons sites and facilities that fall under the convention's purview. The Secretariat supports member states in their efforts to implement the convention. It also encourages international cooperation and assistance among the member states as mandated by Articles X and XI of the convention. Under these provisions, the Secretariat is authorized to coordinate the provision of assistance to member states that are the victims of chemical attacks. The Secretariat also encourages economic and technological developments in the field of chemistry by encouraging trade and exchange of information among the member states. The organization's budget for calendar year 2002 is about $54 million. Funding for OPCW operations comes primarily from the 147 member states' annual contributions, which are based on the United Nations scale of assessments. The other large source of funding is reimbursement payments for inspections conducted under Articles IV and V of the convention. As required by the convention, member states with chemical weapons–related facilities must reimburse the organization for its inspection costs related to the destruction of chemical weapons (Article IV) and the destruction of chemical weapons production facilities (Article V). The State Department reports annually to Congress on U.S. contributions to international organizations, including the OPCW. In early 2002, the United States and other member states to the convention raised concerns that the organization was not fulfilling its mandate because of a number of management weaknesses. According to the United States, such weaknesses included mismanagement by the organization's then-Director-General, as well as his advocacy of inappropriate roles for the organization—such as attempting to interfere with United Nations weapons inspections in Iraq. To address these management concerns, the Conference of the States Parties voted to remove the former Director-General in April 2002. In July 2002, the Conference appointed a new Director-General. In its budgets, the Secretariat has not accurately projected income and expenses. The Secretariat has overestimated its income for two reasons.
First, the budgets include as income the assessed contributions of member states that are in arrears, some of which have not paid their contributions since before 1997. Second, the Secretariat has difficulty predicting and collecting income from inspections conducted at chemical weapons–related facilities. The budgets also include inaccurate expense projections. OPCW's inaccurate income and expense estimates contributed to a budget deficit in 2000, and a potential deficit for 2002, despite plans to achieve balanced budgets in those years. In developing its budget plans for the past 6 calendar years, the Secretariat has overestimated the amount of income it would receive from member states' assessed contributions and from reimbursable expenses paid by member states for inspections at chemical weapons–related facilities. When preparing its annual budgets, the Secretariat overestimates the income that it can realistically expect to receive from member states' annual assessments. The Chemical Weapons Convention requires all member states to pay their annual assessments or lose their voting privileges. The Secretariat's annual budgets, however, included as income the contributions due from 30 member states, even though these members had not paid their annual assessments for at least the 2 previous years. The cumulative total of arrearages over the past several years amounted to almost $1 million as of August 2002. (See app. II for more details.) This includes $781,883 from 16 member states that had not paid any of their assessed or other contributions since before the organization's inception in 1997. An OPCW official stated that budgeting for arrearages presents a politically sensitive problem for the organization because excluding member states' assessed contributions from the annual budgets would require approval from the Conference of the States Parties. In response to these budgeting problems, the organization's Advisory Body on Administrative and Financial Matters and its External Auditor recommended that the Secretariat improve its budgeting practices by developing more accurate and realistic budgets. For example, in 1998, the Advisory Body and the External Auditor stated that the Secretariat's future budgets should be more realistic and accurate and based on the experience gained in the organization's first year of operation. In 2000, the External Auditor recommended that income projections, which are used to establish expenditure targets, should be more realistic and based on reasonable and sound assumptions using past trends in the budget. The Secretariat has yet to act on these recommendations. As shown in table 1, every year since 1997, the budgets have overestimated the amount of money that the organization will invoice and receive each year for inspections conducted at chemical weapons–related facilities. As indicated by OPCW documents, the Secretariat often receives its reimbursements from those member states possessing chemical weapons–related facilities late because these states usually do not pay the OPCW during the year that they receive the inspection invoices. Frequently, the organization does not receive payments until several years after issuing the invoices. According to State Department officials, the United States and Russia have not made payments, in many cases, until several years after receiving OPCW invoices, because both governments experienced difficulties in identifying a funding source and obtaining appropriations.
These officials added that both governments are working to improve their reimbursement records during 2002. As of June 2002, those states possessing chemical weapons–related facilities, including the United States, owed OPCW more than $2 million in reimbursable inspection expenses from the previous 2 years. The United States accounts for $1.4 million of the $2 million owed. It is difficult, however, for the Secretariat to estimate the number of inspections that will be conducted and therefore the amount of inspection reimbursement payments that can be collected from those states possessing chemical weapons–related facilities. According to State Department and OPCW officials, the Secretariat relies on states' destruction plans to calculate the number of inspections the organization may conduct during the year. Chemical weapons possessor states cannot always accurately predict when their destruction facilities will become operational and what problems may arise once they do. Any change to the schedule of a destruction facility's operations can affect the timing of OPCW inspections and thus affect the organization's reimbursement estimates. In commenting on our draft report, the State Department stated that possessor states' destruction plans have collectively overstated destruction activity, and consequently monitoring activity, by 30 percent or more. While it may be difficult for the Secretariat to estimate income from inspection reimbursements, the Secretariat does not issue the reimbursement invoices in a timely manner, according to State Department and OPCW officials. Recent OPCW analysis indicates, however, that the organization is working to improve the timeliness of its invoices. In addition, sometimes the invoices are inaccurate, causing those states possessing chemical weapons–related facilities to withhold payment until corrections are made. The organization's External Auditor recommended in 2001 that the Secretariat take concrete steps to pursue and recover outstanding invoices and develop realistic estimates of its income from Articles IV and V (reimbursable) inspections. In its April 2002 report, the organization's Advisory Body also recommended that the Secretariat avoid optimistic income forecasts regarding Articles IV and V inspections, as well as expedite and improve its billing procedures. As a result of a staff reclassification and upgrade undertaken in 1999 and mandatory United Nations salary increases, the Secretariat's personnel costs increased, affecting the 2000, 2001, and 2002 budgets. However, the budgets underestimated this increase. The Secretariat's budget for 2002 underestimated staff cost increases by about 6 percent ($1.8 million), an underestimate that may contribute to a potential budget deficit for 2002. The audited financial statement for 1999 and the Advisory Body's January 2001 report stated that increases in personnel costs were inevitable as a result of the staff reclassification and upgrade. The OPCW's salary system further complicates the budget projections for staff costs. OPCW uses the United Nations compensation system, which budgets salaries and staff wages in U.S. dollars. The OPCW, however, pays its staff in euros. According to State Department and OPCW officials, the organization has had difficulty in covering the currency risks associated with fluctuations in the dollar-to-euro exchange rate. The organization can experience significant personnel cost increases, depending upon the exchange rate; staff costs represent about 75 percent of OPCW's 2002 budget.
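The mechanics of this exposure can be shown with a small calculation. The budget total and staff share below are the approximate figures cited in this report; the exchange rates are hypothetical, chosen only to illustrate how a strengthening euro turns into unbudgeted dollar costs:

```python
# Illustration of budgeting salaries in dollars while paying them in euros.
# Exchange rates are hypothetical; budget figures approximate this report's.

budget_usd = 54_000_000                 # ~$54 million budget for 2002
staff_budget_usd = budget_usd * 0.75    # staff costs ~75 percent of budget

rate_at_budgeting = 1.10   # dollars per euro assumed when the budget was set
rate_at_payment = 1.18     # dollars per euro when salaries are actually paid

payroll_eur = staff_budget_usd / rate_at_budgeting  # euro payroll is fixed
actual_usd = payroll_eur * rate_at_payment          # dollars actually needed

print(f"Unbudgeted increase: ${actual_usd - staff_budget_usd:,.0f}")
# roughly $2.9 million here, a large swing against a $54 million budget
```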
Furthermore, OPCW and State Department officials stated that it is difficult to manage staff costs given the organization's current tenure policy, which does not clearly establish a start date for OPCW employees. During the creation of the organization, a 7-year tenure policy was established to reduce the number of career employees in the organization. Currently, staff members are hired on a 3-year contract that can be renewed yearly thereafter. However, the Conference of the States Parties has yet to agree on a date for the commencement of the tenure policy. In 2000, the organization experienced a budget deficit of more than $2.8 million when expenditures exceeded the income for the year. In 2001, the Advisory Body reported that the Secretariat was aware of the income shortfall of 2000 and should have managed the budget more carefully to avoid a deficit. It also recommended that, to avoid a recurrence of overspending, the Secretariat should maintain budgetary discipline by matching expenditures to anticipated income in developing the 2001 budget. However, for 2002, the organization may again experience a budget deficit. According to an OPCW briefing document, the organization will experience a potential $5.2 million deficit because of unrealistic income projections in the budget and underbudgeted personnel expenditures. Because of its budget problems, the Secretariat has reduced inspections and international cooperation and assistance efforts and has implemented a hiring freeze. Unless the organization can obtain additional funding, it will have to further reduce its inspections in 2002. The problem will intensify as the number of inspectable facilities increases during the next few years. The Secretariat has curtailed its inspection activities in response to its budget problems. As a result, the Secretariat conducted only 200 of the 293 inspections planned for 2001. The Secretariat plans to reduce the number of inspections for 2002 to compensate for the potential deficit of $5.2 million. As of June 2002, OPCW inspectors had conducted only 90 of the 264 inspections planned for the year. Figure 1 depicts the number of inspections planned and conducted from 1997 through June 2002. Since 1997, most OPCW inspection activities have taken place at chemical weapons–related facilities. The Secretariat receives reimbursements from member states for inspections conducted under Articles IV and V of the convention. However, the Secretariat is not reimbursed for inspections carried out at commercial chemical facilities under Article VI. According to OPCW documents, when funding is limited, the Secretariat reduces the number of inspections at commercial chemical facilities that it conducts during the year. Because of its budget problems, OPCW conducted only 75, or 57 percent, of the 132 chemical industry inspections planned for 2001. As of June 2002, the organization had conducted only 47, or 36 percent, of the 132 industry inspections planned for 2002. According to an OPCW document, if additional funding becomes available, a maximum of 11 chemical industry inspections per month can be conducted between the time additional monies are received and the end of 2002. At the same time, the Secretariat cut funding for international cooperation and assistance efforts in 2001 by about one-third, from $3 million to $2 million, and has made further reductions in funding for 2002. The Secretariat also imposed a hiring freeze on OPCW personnel for 2000 through 2002.
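Expressed as completion rates, the planned and conducted counts cited above summarize the shortfall; a short computation using the report's figures:

```python
# Planned vs. conducted OPCW inspections, as cited in this report.
inspections = {
    "all inspections, 2001": (293, 200),
    "all inspections, 2002 (through June)": (264, 90),
    "industry inspections, 2001": (132, 75),
    "industry inspections, 2002 (through June)": (132, 47),
}
for label, (planned, conducted) in inspections.items():
    print(f"{label}: {conducted}/{planned} = {conducted / planned:.0%}")
```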
According to the OPCW's latest budget proposal, the Secretariat plans to leave 33 positions vacant for 2003. Of these 33 positions, 22 are related to inspection and verification activities. According to OPCW officials, unless it receives additional funding, the OPCW will not be able to completely fulfill its primary inspection functions this year. As of June 2002, six member states had provided about $397,000 in voluntary contributions to help offset the OPCW budget deficit for 2002. According to a State Department official, the United States, France, Germany, and the United Kingdom are considering contributing additional funding to support the organization. The Secretariat's inspection resources will be further affected by expected increases in the numbers of chemical weapons destruction facilities and commercial chemical facilities requiring OPCW inspections. Specifically, by 2006, the number of continuously operating chemical weapons destruction facilities is expected to increase from 6 to 12. An OPCW planning document also indicates that additional member states may declare more industry facilities. According to the Deputy Director-General, preliminary OPCW estimates indicate that the funding level needed to support inspection activities may increase by 50 percent. The organization has taken some preliminary steps to address its budgeting problems, but it lacks a comprehensive strategy to overcome the inherent weaknesses in its budgeting process. Also, limited oversight resources have affected the organization's efforts to improve its budgeting process. The State Department has taken some steps to assist the OPCW, but budgeting problems remain. The Secretariat is taking some preliminary steps to improve its budgeting practices. The new Director-General has stated his commitment to ensure that the organization receives the financial resources needed to implement its mandate and that these resources are used exclusively for the objectives and missions outlined in the convention. According to a State Department official, when developing its internal spending plans to implement the budget, the Secretariat has begun to exclude the assessments of member states in arrears. The OPCW is also reducing its estimates of income derived from inspection activities, based on the chemical weapons possessor states' destruction plans, by 30 percent, to better reflect the historical level of activity. State Department officials also indicated that the Secretariat is working to improve the invoicing and payments process for Articles IV and V reimbursements by providing more accurate bills on a more timely basis. Invoices sent out during the last two months of the calendar year will be applied to the following year's income projections. State Department officials added that OPCW member states are considering changing the current financial regulations to provide the Secretariat flexibility in using the organization's Working Capital Fund to cover inspection-related expenses. In commenting on our draft report, the State Department also stated that the Secretariat has begun using actual staff costs to develop more accurate budget forecasts of salary costs. Although the Secretariat's efforts to collect income from member states are a positive first step in addressing its budget difficulties, it has not directed sufficient attention to improving projections of future expenses. According to State and OPCW officials, the Secretariat does not budget for currency fluctuations in calculating its staff expenses.
These officials also stated that current personnel regulations contain a vague employee tenure policy, making it difficult to predict employee turnover and reduce the number of employees. Accordingly, the Secretariat's recent efforts do not reflect a comprehensive approach to addressing its continuing budget problems. OPCW's Office of Internal Oversight may play an important role in helping reform the Secretariat's budget process. In March 2002, the organization's Advisory Body questioned the role of the oversight office, stating that the office may not be focusing on key internal auditing, monitoring, evaluation, and investigation activities that could detect budgeting problems. In providing its advice and consent to the ratification of the Chemical Weapons Convention, the U.S. Senate required the President to certify that the OPCW had established an independent internal oversight office that would conduct audits, inspections, and investigations relating to the programs and operations of the OPCW. In December 1997, the President certified that the office was in compliance with the Senate's requirement. However, the OPCW's 2000 annual report states that only one auditor within the oversight office was responsible for internal audit activities. The 2002 Advisory Body report states that the oversight office was devoting only one-third of its staff resources to conducting audits, while the remaining two-thirds was focused on other functions, such as the implementation of the organization's confidentiality regime and the establishment of a quality assurance system. In that same report, the Advisory Body reemphasized that the principal and overriding functions of the oversight office should be internal audit, monitoring, evaluation, and investigation. Given the current financial and budgetary crisis, the Advisory Body recommended that the Secretariat redefine the office's role to ensure a clear and sustained focus on proper management of the budget. The State Department funded a budget consultant to assist the Secretariat in reviewing its budget processes. However, it is difficult to assess the consultant's impact on improving the budget processes of the organization. According to the State Department, although it reimbursed the Secretariat for the consultant's salary (including per diem) of $170,000, the consultant was not required to provide the Department with a statement of work or a written analysis of the Secretariat's budgetary practices and efforts to improve its processes, because he was considered an employee of the Technical Secretariat. According to State Department officials, the United States is also attempting to pay its Articles IV and V inspection reimbursements in a more timely manner and is considering paying in advance the chemical weapons–inspection costs that cover inspector salaries. To assist the organization in meeting its 2002 budget, the State Department is providing $2 million in supplemental funding to restore, to the extent feasible, budgeted levels of inspection activity and to strengthen management and planning functions, among other purposes. Funds will be deposited in a trust fund and will remain available until expended by the OPCW on activities agreed to by the United States. OPCW's Deputy Director-General and representatives from member states commented that the United States needs to continue in its leadership role by providing financial, managerial, and political support to the organization. According to these officials, the U.S.
government's recent efforts focused primarily on the removal of the former Director-General. The officials added that the United States should now focus on addressing the organization's budgetary and financial problems. The OPCW has consistently overestimated its income and underestimated its expenses, and thus has planned more inspections than it is financially able to conduct. Unless the Secretariat corrects its weak estimating practices, it may continue to plan more inspections than it can undertake. The problem may grow worse in future years as the number of new chemical weapons destruction facilities increases and additional states ratify the convention. The organization's newly appointed Director-General has an opportunity to correct these budgeting weaknesses and improve the organization's finances. To address the current budget problems of the Organization for the Prohibition of Chemical Weapons, we recommend that the Secretary of State work with the representatives of other member states and the new Director-General to develop a comprehensive plan to improve the organization's budgetary practices. The plan should outline specific strategies to (1) improve the projection and collection of income, (2) accurately project expenses, and (3) strengthen the role of the Office of Internal Oversight in helping the organization improve its budgeting process. Such a plan would be consistent with the budget recommendations of the Secretariat's oversight bodies. To ensure that Congress is informed about the status of efforts to improve the OPCW's budgeting practices, we recommend that the Secretary of State annually report to Congress on the extent to which the OPCW is correcting its budgeting weaknesses and implementing the recommendations made by the organization's oversight bodies. We received written comments on a draft of this report from the State Department that are reprinted in appendix III. We also received technical comments from the State Department and have incorporated them where appropriate. The State Department generally concurred with our findings that budgetary and financial problems have plagued the OPCW, and that unless corrected, these problems could have even more dramatic effects in coming years. The Department, however, raised several issues with the report. First, the Department asserted that our analysis of OPCW budgetary and financial difficulties presented an incomplete picture of the OPCW's budgeting practices. Second, the State Department disputed our assertion that we had to limit the scope of our review because of the access restrictions we encountered during our May 2002 visit to the OPCW in The Hague. Third, it stated that our report did not fully reflect the changes that the OPCW has recently begun making to address its budget weaknesses. Finally, the Department disagreed with our recommendation that the Secretary of State be required to report annually to Congress on how the OPCW is correcting its budget weaknesses, asserting that such a requirement would impose an administrative burden. In response to the State Department's comments on our draft report, we added information on the reasons why the OPCW experienced budget problems.
Regarding our access to OPCW records and staff, although the State Department provided us with some access to OPCW budget and finance documents through the Department’s offices in Washington, D.C., we were denied the opportunity to review related budget documentation and meet with numerous OPCW officials during our visit to The Hague in May 2002. Although we provided the State Department with an extensive list of OPCW officials with whom we wanted to meet prior to our visit, we were allowed to meet only with the Deputy Director-General and selected representatives from the budget office and the inspection equipment laboratory. We were not allowed to meet with representatives from key OPCW offices, including the Special Projects Division, the Office of Internal Oversight, the Office of the Legal Advisor, the Administration Division, the Verification Division, the Inspection Division, the International Cooperation and Assistance Division, and the Advisory Body on Administrative and Financial Matters. Furthermore, the State Department failed to notify us of any potential access difficulties with the OPCW prior to our trip to The Hague, and did not actively seek to provide us with access to these officials on our arrival. Consequently, we had to limit the scope of our review to budget-related issues. In response to the State Department’s comments about recent budgetary initiatives, we have updated the report to reflect the most current initiatives being undertaken by the OPCW to address its budgeting problems. Regarding our recommendation for an annual reporting requirement, we do not believe that such a requirement would impose an administrative burden on the Department, since it already provides various reports to Congress on international organizations. This reporting requirement is necessary to improve congressional oversight of the OPCW. We are providing copies of this report to other interested congressional committees and to the Secretary of State. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8979 if you or your staff have any questions concerning this report. Another GAO contact and staff acknowledgments are listed in appendix IV. We could not conduct a comprehensive management review of the organization as requested, because the Organization for the Prohibition of Chemical Weapons (OPCW) and State Department officials limited our access during our visit to The Hague in May 2002. As a result of our lack of access to OPCW officials and limited access to OPCW documents, we could not determine how the reduction in chemical weapons and industry inspections has affected the implementation of the Chemical Weapons Convention. In addition, we could not assess the organization’s personnel management, administrative, and internal audit functions. Specifically, we were not permitted to meet with or obtain information from OPCW officials from the following offices: the Special Projects Division, the Office of Internal Oversight, the Office of the Legal Advisor, the Administration Division, the Verification Division, the Inspection Division, the International Cooperation and Assistance Division, and the Advisory Body on Administrative and Financial Matters. However, we met with OPCW’s Deputy Director-General. We also received a budget briefing from the Director of the Administrative Division and the budget consultant being funded by the State Department. 
In addition, we visited the inspection laboratory and equipment store at Rijswijk, the Netherlands. To determine the accuracy of the Secretariat's budgets, we compared OPCW's program and budget documents for 1997–2003 with the data in the audited financial statements for 1997–2001. To compare budget and program data, figures were converted from Netherlands guilders and euros to 2001 dollars, using appropriate exchange and inflation rates. We also reviewed other OPCW documents, including the organization's financial regulations and annual reports. We analyzed reports prepared by the organization's External Auditor, the Advisory Body on Administrative and Financial Matters, and the Office of Internal Oversight. In addition, we obtained information from officials in the State Department's Bureau of Arms Control and Office of International Organization Affairs, as well as from member states' representatives to OPCW. To determine the impact of budget shortfalls on the organization's inspection and international cooperation activities, we analyzed the data contained in the organization's program and budget documents and in annual implementation reports for calendar years 1997–2001. To confirm our understanding of the data obtained, we met with an official from the State Department's Bureau of Arms Control. In addition, we reviewed other OPCW documents and statements provided by the State Department. To assess OPCW and State Department efforts to improve the organization's budget-planning practices, we met with State Department officials in Washington, D.C., and The Hague. We also obtained information from OPCW member states' representatives. We reviewed and analyzed OPCW and State Department documents, including OPCW's draft Medium-Term Plan for 2004–2006; speeches given by the Director-General to the Executive Council and Conference of the States Parties; and reports of the Advisory Body on Administrative and Financial Matters, the External Auditor, and the Office of Internal Oversight. We could not independently verify the accuracy of the budget and other financial data obtained from OPCW and the State Department. Although we met with, and obtained documents from, officials at the Departments of Commerce and Defense, the information they provided was not relevant to the reduced scope of our work. We performed our work from January 2002 through October 2002 in accordance with generally accepted government auditing standards. The Preparatory Commission for the Organization for the Prohibition of Chemical Weapons preceded the OPCW and carried out the initial implementation of the Chemical Weapons Convention. Under the Preparatory Commission, member states were assessed contributions to fund the commission's expenses.
The following are GAO's comments on the Department of State's letter, dated October 16, 2002.
1. We agree that monitoring activities at chemical weapons destruction facilities account for most of OPCW's workload, and that to project this workload, the organization has depended on plans submitted by chemical weapons possessor states. Our report states that since 1997, most OPCW inspection activities have taken place at chemical weapons facilities. Our report also states that the Secretariat relies on possessor states' destruction plans to calculate the number of inspections the organization may conduct during the year.
Chemical weapons–possessor states cannot accurately predict when their destruction facilities will become operational and what problems may arise when they do. However, in response to the State Department’s comments, we have included additional information in the report to clarify this point. 2. We identified the key reasons why OPCW underestimated staff costs for calendar years 2000–2002, and included this information in the report. For example, our report states that as the result of a staff reclassification and upgrade undertaken in 1999 and mandatory United Nations salary increases, the Secretariat’s personnel costs increased, affecting the 2000, 2001, and 2002 budgets. 3. We agree that the OPCW encounters the same difficulties as other international organizations with regard to the late payment of annual dues, and that the United States and Russia have experienced difficulties in paying their Articles IV and V inspection bills. We included this additional information in the report. 4. We agree that the OPCW has lacked adequate liquidity to deal with its cash shortages, and that this has resulted in a curtailment of inspection activity. We have made no change to the report, however, because this is its major theme. We reported that weak budgeting practices and budget deficits have affected the organization’s ability to perform its primary inspection and international cooperation activities, as outlined in the Chemical Weapons Convention. 5. As explained in our report, the OPCW spent against budgeted income based on inflated estimates of inspection activity. This budget shortfall resulted in reduced inspections and international cooperation activities. We do not believe that a change in our report is needed. 6. Our report clearly states that since 1997, most OPCW inspection activities have taken place at chemical weapons facilities. Because of its budget problems, the OPCW conducted only 57 percent of the chemical industry inspections planned for 2001. As of June 2002, it had conducted only 36 percent of the inspections planned for 2002. We do not believe that a change in our report is needed. 7. We disagree that the State Department made every reasonable effort to accommodate our requests for information and access to OPCW staff. We were not allowed to hold meetings with representatives from several key OPCW offices. The State Department failed to notify us of any impending scheduling difficulties prior to our trip to The Hague in May 2002. On our arrival, the Department made no effort to facilitate meetings with the following offices: the Special Projects Division, the Office of Internal Oversight, the Office of the Legal Advisor, the Administration Division, the Verification Division, the Inspection Division, the International Cooperation and Assistance Division, and the Advisory Body on Administrative and Financial Matters. 8. This comment confirms that we were able to meet with only a few select OPCW staff. It is unclear how the State Department concluded that we were unable to identify specific questions to which answers were not provided. Prior to our departure for The Hague in May 2002, we provided State Department officials in Washington and at the U.S. Delegation to the OPCW with five pages of detailed questions that we planned to raise with OPCW officials. Many of these questions remain unanswered. We also provided the State Department with a detailed set of questions we planned to raise with representatives from other member states. 9.
We have updated our report to provide the most recent information on OPCW initiatives currently under way. However, the State Department’s mosaic of measures does not represent an overall strategy or plan for improving the organization’s budgeting weaknesses. At best, it represents only the first steps in addressing systemic weaknesses in the OPCW’s budgeting process. 10. We believe that our recommendation for an annual reporting requirement to Congress is appropriate. Such reporting will help establish a baseline for judging OPCW progress in achieving needed reforms. In addition, this requirement will not impose an undue administrative burden on the Department, since it already provides various reports to Congress on international organizations, including the OPCW. In addition to the individual named above, Beth Hoffman León, Richard K. Geiger, and Reid Lelong Lowe made key contributions to this report. Bruce Kutnick, Christine Bonham, and Geoffrey Frank provided additional assistance.
The Organization for the Prohibition of Chemical Weapons is responsible for implementing the Chemical Weapons Convention, which bans the use of chemical weapons and requires their elimination. The United States and other member states have raised concerns that a number of management weaknesses may prevent the organization from fulfilling its mandate. As requested, GAO assessed the accuracy of the organization's budget and the impact of budget shortfalls on program activities. GAO also reviewed efforts to improve the organization's budget planning. Since its establishment in 1997, the ability of the Organization for the Prohibition of Chemical Weapons (OPCW) to carry out key inspection functions has been hindered by inaccurate budget projections and, more recently, budget deficits. The organization has consistently overestimated its income and underestimated its expenses. Its budgets have recorded as income nearly $1 million in unpaid assessments owed by 30 member states. The budgets have also overestimated reimbursement payments for inspections conducted in member states with chemical weapons-related facilities. As of June 2002, these states owed the organization more than $2 million. Furthermore, the budgets for 2000 through 2002 underestimated personnel expenses. The organization's inaccurate income and spending estimates contributed to a $2.8 million deficit in 2000 and a potential deficit of $5.2 million in 2002. Weak budgeting practices and budget deficits have affected the organization's ability to perform inspection activities as mandated by the Chemical Weapons Convention. The organization had to reduce the number of inspections it conducted in 2001 and plans to reduce the number it conducts in 2002. Although the organization and the State Department have taken some steps to address the budget problems, the organization has not developed a comprehensive plan to overcome its inherent weaknesses. Unless the organization improves its planning, budget shortfalls will continue to affect its ability to conduct inspections.
Since its founding in 1718, the city of New Orleans and its surrounding areas have been subject to numerous floods from the Mississippi River and hurricanes. The greater New Orleans area, composed of Orleans, Jefferson, St. Charles, St. Bernard, and St. Tammany Parishes, sits in the tidal lowlands of Lake Pontchartrain and is bordered generally on its southern side by the Mississippi River. Lake Pontchartrain, a tidal basin of some 640 square miles, is connected with the Gulf of Mexico through Lake Borgne and the Mississippi Sound. The greatest natural threat posed to the New Orleans area is from hurricane-induced storm surges, waves, and rainfalls. Because of this threat, a series of control structures, concrete flood walls, and levees was proposed for the area along Lake Pontchartrain in the 1960s. Congress first authorized the construction of the Lake Pontchartrain and Vicinity, Louisiana Hurricane Protection Project in the Flood Control Act of 1965 to provide hurricane protection to areas around the lake in Orleans, Jefferson, St. Bernard, and St. Charles Parishes. Although federally authorized, the project was a joint federal, state, and local effort. The Corps was responsible for project design and construction of the approximately 125 miles of levees, with the federal government paying 70 percent of the costs, and state and local interests paying 30 percent. Each of the four parishes protected by the project is associated with a local levee district that is generally composed of state-appointed officials and is considered a state entity. Specifically, Orleans Parish is associated with the Orleans Levee District, Jefferson Parish is associated with the East Jefferson Levee District, St. Bernard Parish is associated with the Lake Borgne Levee District, and St. Charles Parish is associated with the Pontchartrain Levee District. These levee districts are the local sponsors of the project, and their responsibilities include ensuring the integrity of the levee system in their districts throughout the year. Congress authorized the Lake Pontchartrain project in 1965, substantially in accordance with a Chief of Engineers report, to protect the areas around the lake from flooding caused by storm surge or rainfall associated with a standard project hurricane. For the coastal region of Louisiana, a standard project hurricane was expected to have a frequency of occurrence of once in about 200 years, and represented the most severe combination of meteorological conditions considered reasonably characteristic for the region. According to the Chief of Engineers report, a standard project hurricane was selected as the design hurricane because of the urban nature of the area. When Congress authorized the Lake Pontchartrain project, the 1 through 5 scale—known as the Saffir-Simpson Scale—that is currently used by the National Weather Service to categorize hurricanes from lowest to highest intensity did not yet exist. According to the Corps, the standard project hurricane used for the Lake Pontchartrain project would roughly equal a fast-moving category 3 hurricane on the Saffir-Simpson Scale. In fact, the standard project hurricane for coastal Louisiana approximates the storm surge of a category 3 hurricane, the wind speed of a category 2 hurricane, and the barometric pressure at the center of a category 4 hurricane. 
Table 1 compares the coastal Louisiana standard project hurricane parameters to which the Lake Pontchartrain project was designed with the parameters for category 2, 3, and 4 hurricanes on the Saffir-Simpson Scale. At landfall, which was approximately 60 miles southeast of New Orleans, Hurricane Katrina had a central pressure of 27.17 inches of mercury (Hg) and a wind speed of 140 mph. Wind speeds in New Orleans, which was west of the eye of Hurricane Katrina, reached just over 100 mph. According to the National Oceanic and Atmospheric Administration’s National Climatic Data Center, data on other Hurricane Katrina parameters are not readily available for several reasons, including the destruction of certain buildings and monitoring equipment that would have been used to measure storm surge. Consistent with federal law, agreements between the Corps and local sponsors of the Lake Pontchartrain project specify that local sponsors are responsible for operation, maintenance, repair, replacement, and rehabilitation of the levees when the construction of the project, or a project unit, is complete. However, the Corps has authority to (1) repair the project if deficiencies are the result of the original construction and (2) rehabilitate the project if damage resulted from a flood and the project is active in the Corps’ Rehabilitation Inspection Program. Corps district and division employees are to oversee OMRR&R activities performed by the local sponsors on an annual basis. Once construction of Lake Pontchartrain project units was completed, the Corps was to transfer these project units to the local sponsors for OMRR&R. These sponsors include the Orleans, East Jefferson, Lake Borgne, and Pontchartrain levee districts. Although the Corps has not yet provided us with dates on when the project units for the Lake Pontchartrain project were completed, after Hurricane Katrina the Corps’ New Orleans District and the Department of Defense’s Task Force Guardian determined, based on three criteria, that almost the entire Lake Pontchartrain hurricane project had been turned over to local sponsors for ongoing OMRR&R responsibilities. The criteria used to make this determination were (1) whether the project unit was completed in accordance with the designed level of protection specified in the project decision document, (2) whether the project unit was being operated and maintained by the local sponsor, and (3) whether the project unit had passed the annual Inspection of Completed Works in accordance with Corps regulations. Based on this evaluation, the task force determined that only three project units—a bridge over the 17th Street canal, a project unit in Jefferson Parish, and a project unit in St. Charles Parish—had not yet been completed and turned over to the local sponsors. Figure 1 shows the three project units that have not been completed and turned over to the local sponsors. While the assurances signed by local sponsors do not define project completion, internal Corps regulations provide that completed projects or completed project units will normally be turned over when all construction, cleanup work, and testing of mechanical, electrical, and other equipment are complete and the project is in proper condition for the assumption of operation and maintenance by the local sponsors. Transfer is to be accomplished through a formal notice from the Corps to the local sponsor that includes a transfer date determined by the Corps’ district engineers.
According to Corps officials, the formal notice generally is in the form of a letter to the local sponsor. According to internal Corps regulations, upon transfer of a completed project to the local sponsors, the Corps may no longer expend federal funds on construction or project improvements. If the Corps determines that unsatisfactory conditions have developed as a result of the original levee construction, the Corps may undertake corrective action. For example, a Corps district official responsible for operations and maintenance oversight told us that if settlement of a completed levee occurs, this is not considered a design or construction flaw. Instead, this is considered a condition that should be addressed by the local sponsors as part of their normal operations and maintenance responsibilities. Local sponsors’ responsibilities for OMRR&R of the completed portions of the Lake Pontchartrain project were established through local assurances signed by the levee districts and the Corps. For the Lake Pontchartrain hurricane project as constructed, these assurances were signed, and subsequently accepted by the federal government, for the Orleans Levee District on June 21, 1985; the Pontchartrain Levee District on August 7, 1987; the East Jefferson Levee District on December 21, 1987; and the Lake Borgne Basin Levee District on December 7, 1977. The formal assurances commit the local sponsors to, among other things, operate and maintain all features of the project in accordance with Corps regulations. Also, in accordance with internal Corps regulations, the Corps is required to provide local sponsors with an operations and maintenance manual at the time of, or at the earliest practicable date after, the transfer of OMRR&R responsibilities from the Corps to local sponsors for a completed project or project unit. The manual is intended to assist the responsible local authorities in carrying out their operation and maintenance obligations. According to Corps officials, the OMRR&R responsibilities for levees are straightforward, and the manual that the Corps provides local sponsors is a one-page document that outlines the requirements as described by federal regulations. Specifically, federal regulations require local sponsors to ensure that the structure is operating as intended, to continuously patrol the structure during flood periods to ensure that no conditions exist that might endanger the structure, and to take immediate steps to control any condition that might endanger it. For maintenance, the regulations require local sponsors to ensure at all times that the structure is serviceable in times of flood. The regulations also require periodic inspections and maintenance measures, including promoting the growth of sod; routinely mowing the grass; removing drift material or wild growth (such as brush and trees) from the levee; and repairing any damage to the levee caused by erosion. Repair, replacement, and rehabilitation are also considered part of the local sponsors’ maintenance responsibilities, as outlined in internal Corps regulations. Repair refers to routine activities that maintain the project in well-kept condition; replacement refers to replacing worn-out elements; and rehabilitation refers to activities necessary to bring a deteriorated project back to its original condition. According to internal Corps regulations, local sponsors’ maintenance is considered deficient when these requirements have not been fulfilled.
Corps employees are to oversee local sponsors’ OMRR&R activities to ensure compliance and project integrity. Corps employees are required to work directly with local sponsors to conduct annual compliance inspections; review local sponsors’ semiannual compliance reports; and respond to engineering concerns, maintenance questions, and reports of problems. A Corps district official responsible for operations and maintenance oversight told us that generally the Lake Pontchartrain project’s local sponsors have performed their operations and maintenance responsibilities as required and have been responsive to the Corps’ concerns. Because the New Orleans district is part of the Mississippi Valley Division of the Corps, the division also has responsibility for managing and overseeing the periodic inspections conducted by district engineers; reviewing and approving district engineers’ inspection reports; maintaining a database of information on inspections and remedial measures taken; and receiving annual OMRR&R summary reports from the districts under its command, aggregating these reports, and sending them to Corps headquarters. Federally authorized flood control projects, such as the Lake Pontchartrain project, are eligible for 100 percent federal rehabilitation if damaged by a flood as long as these projects are active in the Corps’ Rehabilitation Inspection Program (rehabilitation program). To maintain active status in this program, the Lake Pontchartrain project’s levees are required to pass an annual OMRR&R inspection conducted jointly by the Corps, the local sponsor, the state Department of Transportation and Development, and other stakeholders, as appropriate. According to the Corps’ inspection reports from 2001 through 2004, all completed project units of the Lake Pontchartrain project were inspected each year and received an acceptable rating. Both local sponsors and the Corps are required to conduct oversight activities to ensure that levees are properly maintained. If, in the course of these oversight activities, the Corps finds that the local sponsors are not properly maintaining the levees, internal Corps regulations outline a series of steps that the Corps can take until the local sponsor comes into compliance. Federal regulations require local levee districts to appoint a permanent committee, headed by a superintendent, that will be responsible for all levee operation and maintenance activities and inspections of federally constructed flood control projects. The superintendent of the levee district is responsible for performing periodic inspections of the levee to ensure that routine maintenance responsibilities have been effectively completed and that no hazards to the levee exist. Typically, these inspections take place prior to the flood or hurricane season, immediately following a high-water period, and at other intermediate periods throughout the year. During an inspection, the superintendent is required to examine and be certain, among other things, that drainage systems are in good working condition and not becoming clogged; no unusual settlement or material loss of grade or levee cross section has occurred; cattle guards and gates are in good condition; the protective walls surrounding the levee have not been washed out or removed; the levee crown is shaped to drain readily; no unauthorized vehicular traffic or cattle grazing has occurred; no water seepage or saturated areas are occurring; and levee access roads are being properly maintained.
If, during these inspections, the superintendent discovers any levee portion to be in substandard condition, it is the levee district’s responsibility to take immediate actions to correct the inadequacy. The superintendent is required to submit a report twice a year to the Corps District Engineer covering inspection, maintenance, and operation activities of the levee district. At this time, we have not examined the extent to which these steps were taken by the local sponsors, and the Corps has not provided us with any documentation of such activities. The Corps is responsible for overseeing the OMRR&R activities of the Lake Pontchartrain project’s local sponsors through an annual compliance inspection program—known as the Inspection of Completed Works program—and reviewing the local sponsors’ semiannual reports on OMRR&R activities submitted to the district office. According to internal Corps regulations, the primary purposes of the Inspection of Completed Works program are to prevent loss of life and catastrophic damages, preserve the value of the federal investment, and encourage local sponsors to bear responsibility for their own protection. According to Corps officials, for the Lake Pontchartrain project, the New Orleans District typically completes this annual compliance inspection prior to the hurricane season, in mid-May to early June of each year. Our review of Corps inspection reports for 2001 through 2004 indicates that while inspections of the Lake Pontchartrain hurricane protection levees in the Orleans and St. Bernard Parishes were generally conducted in May of each year, the inspections of the levees in Jefferson and St. Charles Parishes were generally conducted in the September to November time frame. According to the Corps, these inspections are to cover items such as structural foundations. Based on the results of these inspections, the district and division are to characterize the inspected units on a scale from 1 to 3, where 1 means that the project units have been maintained in accordance with the agreement between the Corps and the local sponsors and are expected to perform as designed, and 3 means that the project units have maintenance deficiencies such that the project would probably fail during floods of project design or lesser magnitudes. Within 120 days of an inspection, the district is expected to prepare an inspection report and provide it to its commanding unit. For example, the New Orleans District should prepare an inspection report for the Lake Pontchartrain project and forward it to the Mississippi Valley Division for review and approval. Reports that indicate maintenance deficiencies are also to be submitted annually to headquarters. All of the completed units of the Lake Pontchartrain hurricane levees passed with an acceptable rating for the period 2001 through 2004. If a project receives a rating of 3 as a result of an inspection, internal Corps regulations outline a progression of steps that the Corps can take to ensure that local sponsors fulfill their OMRR&R responsibilities and bring the levees back up to the designed level of protection. The steps are as follows: Notify the sponsor orally of the deficiencies. Notify the sponsor in writing. Write a letter to the governor and the appropriate state agency—which, in the case of the Lake Pontchartrain project, is the Department of Transportation and Development in Louisiana—to enlist state participation to resolve the problem.
Notify the Federal Emergency Management Agency (FEMA) of the condition of the project. If acceptable actions are not taken by the nonfederal sponsor, take actions to remove the project from eligibility for federal emergency rehabilitation. Initiate legal action against the local sponsor to enforce OMRR&R obligations as outlined in local assurances. Transmit a report to the Congress recommending authorization of a new sponsor or reauthorization of the project along with measures to eliminate hazards. Although not documented in the annual inspection reports, according to Corps officials, almost all past Lake Pontchartrain project deficiencies have been resolved upon oral notification of the local levee district. The official responsible for the Inspection of Completed Works program in New Orleans could recall only one or two instances when the Corps wrote a letter to a local sponsor requesting that the sponsor commit resources to repair a deficiency, which resulted in full compliance by the local sponsor. Internal Corps regulations specifically prohibit the use of federal funds to correct problems caused by a lack of adequate local maintenance. The Corps has authority to provide a variety of emergency response actions when levees fail or are damaged. Section 5 of the Flood Control Act of 1941, as amended, commonly referred to as Public Law 84-99, authorizes the Corps to conduct emergency operations and rehabilitation activities when levees fail or are damaged. In addition, under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), as amended, the Corps and other federal agencies may be tasked by FEMA to provide disaster response, recovery, and mitigation assistance to state and local governments. Furthermore, a Department of Defense Manual for Civil Emergencies assigns responsibilities, prescribes procedures, and provides guidance by which the Department of Defense responds to all hazards in accordance with the Stafford Act. Although we have not evaluated the Corps’ efforts, Corps officials told us that after the levees were breached the Corps used its response and rehabilitation authorities to provide flood-fighting assistance and to begin the repair and restoration of the levees. State and local roles and responsibilities when levees fail are similar to the Corps’ responsibilities and are also described in federal regulations. Public Law 84-99 authorizes the Corps to conduct emergency operations and rehabilitation activities when levees fail or are damaged during storms or other events. Federal regulations specify that assistance is limited to providing emergency assistance to save lives and protect property, such as public facilities/services and residential, commercial, or industrial developments. This emergency assistance may be provided during and following a flood or coastal storm. However, under federal regulations, nonfederal interests must fully utilize their own resources, including manpower, supplies, equipment, and funds, before Corps assistance may be provided. The National Guard, as part of the state’s resources when it is under state control, must be fully utilized as part of the nonfederal response. According to federal regulations, the Corps is not to use funds to reimburse local authorities for the costs of these emergency activities. To implement flood response operation authorities under Public Law 84-99, internal Corps regulations specify that Corps district commanders must issue a Declaration of Emergency.
The Declaration of Emergency may initially be verbal but must be made in writing and reported in the district’s situation report within 24 hours. Authority to issue a Declaration of Emergency has been delegated to deputy district engineers and includes all supervisors in the chain of command, from the district commander to the chief of emergency management. Emergency operations include flood response and postflood response activities. Flood response includes activities such as flood fighting and rescue operations. These activities include providing technical assistance, such as reviews and recommendations in support of state and local efforts and help in determining feasible solutions to uncommon situations; providing direct assistance, such as directing flood-fighting operations; and contingency contracting for emergency operations. Corps assistance during flood-fighting operations is to be temporary to meet the immediate threat and to supplement state and local efforts. This assistance is not intended to provide permanent solutions to flood problems and should be terminated when the emergency is over—for example, when flood waters have receded sufficiently. Postflood response includes emergency debris removal; temporary restoration of critical transportation routes, public services, and utilities; and after-action review and reporting. Rehabilitation activities include the repair and restoration of eligible flood control projects and federally constructed hurricane or shore protection projects. Rehabilitation assistance is limited to federal and nonfederal flood control works that are in active status—those found to be properly maintained during inspections—in the Corps’ Rehabilitation Inspection Program at the time of the hurricane, storm, or flood event. Rehabilitation assistance is limited to repair or restoration of a flood control work to its predisaster condition and level of protection (e.g., the actual elevation of the levee, allowing for normal settlement). Damage to federally constructed levees is repaired with 100 percent of the cost borne by the federal government, and damage to nonfederally constructed levees is repaired with 80 percent of the cost borne by the federal government and 20 percent by the local sponsor. Because the Lake Pontchartrain project is federally constructed and was active in the Corps’ Rehabilitation Inspection Program, the Corps is authorized to rehabilitate any levees that failed or were damaged as a result of Hurricane Katrina, using this authority. Additionally, in the aftermath of Hurricane Katrina, the Assistant Secretary of the Army for Civil Works agreed to rehabilitate all of the damaged Lake Pontchartrain and other hurricane and flood control structures in the New Orleans area without any local cost share, under emergency authority provided in statute. Further, the federal government will fund the acquisition of lands, easements, rights-of-way, and disposal or borrow areas not owned or under control of the nonfederal sponsor, as well as the performance of relocations that are needed for the rehabilitation and that are normally local responsibilities. The Corps estimates that funding these activities for the Lake Pontchartrain project will cost the federal government an additional $10 million, and over $248 million in total for all damaged levee systems in the New Orleans area. The Stafford Act, as amended, authorizes federal agencies, including the Corps, to take emergency response actions when the President has issued a major disaster declaration.
Under the act, a presidential declaration may be made after receiving a request from the governor of the affected state. FEMA, within the Department of Homeland Security, is responsible for administering the major provisions of the Stafford Act. Actions taken under this authority include disaster response, recovery, and mitigation assistance to supplement state and local efforts. To meet its obligations for emergency response, the Department of Homeland Security developed a National Response Plan, which describes the roles and responsibilities of various federal agencies. Within the National Response Plan, the Department of Defense has responsibility for Emergency Support Function #3—Public Works and Engineering. The plan designates the Corps as the operating agent for this function, to include planning, preparedness, and response, with assistance to be provided by other branches of the Department of Defense, as needed. The National Response Plan lists the following activities for the Corps: coordination and support of infrastructure risk and vulnerability assessments; participation in preincident activities, such as prepositioning assessment teams; participation in postincident assessments of public works and infrastructure to help determine critical needs and potential workloads; implementation of structural and nonstructural mitigation measures to minimize adverse effects or fully protect resources prior to an incident; execution of emergency contracting support for life-saving and life-sustaining services, to include providing potable water, ice, emergency power, and other emergency commodities and services; providing assistance in monitoring and stabilizing damaged structures, demolishing structures designated as immediate hazards to public health and safety, and providing structural specialist expertise to support inspection of mass care facilities and urban search and rescue operations; providing emergency repair of damaged infrastructure and critical public facilities, and supporting the restoration of critical navigation, flood control, and other water infrastructure systems; managing, monitoring, and providing technical advice in the clearance, removal, and disposal of debris from public property and the re-establishment of ground and water routes into impacted areas; and implementing and managing FEMA’s Public Assistance Program and other recovery programs involving federal, state, and tribal officials, including efforts to permanently repair, replace, or relocate damaged or destroyed public facilities and infrastructure. A Department of Defense Manual for Civil Emergencies assigns responsibilities, prescribes procedures, and provides guidance by which the Department of Defense responds to all hazards in accordance with the Stafford Act. The policy states that commanders may conduct disaster relief operations when a serious emergency or disaster is so imminent that waiting for instructions from higher authority would preclude effective response. According to the policy, commanders may do what is required and justified to save human life, prevent immediate human suffering, or lessen major property damage or destruction. Action taken in accordance with the policy is limited to 10 days. A Corps commander providing assistance to civil authorities under this guidance is not required to obtain an agreement for reimbursement from the requesting agency before providing assistance.
The Corps is authorized by Public Law 84-99 to prepare for emergency response when levees fail by undertaking disaster preparedness, advance measures, and hazard mitigation activities. Although we have not evaluated the Corps’ efforts, Corps officials told us that they took action in advance of Hurricane Katrina to prepare for the potential flooding that was predicted. As part of this effort, according to Corps officials, the Corps’ New Orleans district used a draft hurricane preparedness plan for the New Orleans area. Corps division and district commanders are responsible for providing immediate and effective response and assistance prior to, during, and after emergencies and disasters. Although we have not reviewed the extent to which the Corps undertook these initiatives during the Katrina disaster, the Corps is responsible for the following: 1. Creating an emergency management organization. Division and district commanders are expected to provide adequate staffing for a readiness/emergency management organization to accomplish the preparedness mission. In addition, divisions and districts should have teams readily available to provide assistance under the Corps’ authorities for flood emergencies and other natural disasters; execute responsibilities and missions under the Stafford Act and the National Response Plan; staff a Crisis Management Team, consisting of an Emergency Manager and senior representatives from technical and functional areas, to provide guidance and direction during emergency situations; and staff a Crisis Action Team, consisting of the personnel necessary to operate an emergency operations center. 2. Establishing and maintaining plans and procedures. Corps headquarters, divisions, and districts are expected to prepare and maintain plans for responding to emergencies and disasters, establishing an alternate emergency operations center, and reconstituting the district. These operation plans should cover emergency/disaster assistance procedures under all applicable authorities and potential mission assignments. Each division and district should have, at a minimum, an operation plan that provides procedures for generic disasters within the division and district. The plan should include general topics, such as activating, staffing, and operating the emergency operations center; reporting requirements; notification and alert rosters; and organizing for response to disasters. The plan should also have one or more appendixes that specifically address the disasters most likely to impact the division and district. Operation plans are reviewed and updated annually to reflect administrative changes. The division/district’s generic or principal disaster operation plan is supposed to be reviewed, revised, and republished biennially. 3. Training personnel for response. Divisions and districts are expected to ensure that personnel who are assigned emergency assistance responsibilities have been properly trained. 4. Conducting exercises. Exercises are to be conducted at least once every two years, consistent with available funding. This requirement may be waived if an actual emergency response was conducted during the two-year period that was of sufficient magnitude to have adequately trained emergency team members and other personnel. 5. Establishing adequate command and control facilities.
Divisions, districts, and other Corps groups should provide a dedicated facility for an emergency operations center that will be able to provide command and control for emergency/disaster response and recovery activities. 6. Maintaining supplies, tools, and equipment. Divisions and districts are expected to maintain equipment and supplies that can be readily available for use by the emergency operations center, disaster field offices, disaster field teams, planning response teams, and similar entities. Equipment should be stockpiled for use during emergency operations and exercises. 7. Managing inspections of flood control projects. The Corps is responsible for ensuring that the levees are properly maintained to perform as designed during flood events. The Corps may take advance measures prior to a flooding event to protect against loss of life and significant damages to urban areas and public facilities. In the case of imminent danger of levee failure or overtopping, the Corps can also take corrective actions to ensure the stability, integrity, and safety of the levee. Advance measures include the following: 1. Technical assistance: providing technical review, advice, and recommendations to state and local agencies before an anticipated flood event. For example, the Corps may provide personnel to inspect existing flood control works to identify potential problems and solutions, evaluate conditions to determine the requirements for additional flood control protection, and recommend the most expedient construction methods; provide hydraulic, hydrologic, and geotechnical analysis; and provide information readily available at Corps districts to local entities for use in the preparation of local evacuation and contingency flood plans. 2. Direct assistance: providing supplies and equipment and contracting for the construction of temporary and permanent flood control projects. Examples of emergency contracting work include the construction of temporary levees; the repair, strengthening, or temporary raising of levees or other flood control works; shore protection projects; and removal of stream obstructions, including channel dredging of federal projects to restore the design flow. Advance measures taken by the Corps are intended to supplement ongoing or planned state and local efforts and are designed to deal with a specific threat. To implement advance measures, the governor should make a written request to the Corps. The local sponsor for the advance measure assistance must agree to execute a cooperative agreement and, when the operation is over, to remove at no cost to the Corps all temporary work constructed by the Corps or to upgrade the work to standards acceptable to the Corps. In addition, the local sponsor is responsible for providing traditional items of local cooperation, such as lands, easements, rights-of-way, and disposal areas necessary for the work. Advance measures assistance is temporary and must be terminated no later than when the flood threat ends. Hazard mitigation activities are intended to help prevent or reduce the possibility of a disaster or reduce its damaging effects. The Corps is required to participate on a FEMA-led hazard mitigation team to identify postdisaster mitigation opportunities and establish a framework for recovery.
According to the Corps’ hazard mitigation policy, division commanders are to appoint primary and alternate representatives to serve on the hazard mitigation team; establish procedures for quick and effective response to the requirements of the team; ensure that essential information and data necessary to assess mitigation opportunities are available or capable of being obtained quickly; ensure that division hazard mitigation team representatives are trained in flood hazard mitigation concepts and techniques; and provide reports to FEMA and Corps headquarters. Recommendations of the hazard mitigation team are intended to reduce or avoid federal expenditures resulting from flood situations. The Corps’ New Orleans District has a draft hurricane preparedness plan that defines the district’s role and responsibilities in the event of an emergency due to a hurricane. The plan outlines the essential functions of the district before, during, and after a hurricane. These functions include pre-event planning, organization, response, and recovery in order to minimize the potential hazards to life and property. As part of this plan, the district defines emergency organizational staffing to support emergency operations. Selected personnel are assigned to specific teams or offices that, in the event of a disaster, are to provide the necessary liaison with federal, state, or local emergency management agencies; make decisions relative to the Corps’ capabilities and assignments; perform preliminary damage assessments; or accomplish specific missions. According to the plan, a New Orleans District Emergency Operations Center should be staffed to respond to an emergency, and the center is to become the focal point for collecting data, analyzing situations, allocating resources, furnishing reports to higher headquarters, and providing overall management and control of all district activities. With the activation of the emergency operations center, a crisis management team becomes responsible for coordinating and directing district activities in the crisis situation. A crisis action team is responsible for executing the activities as directed by the crisis management team. According to the plan, if a slow-moving category 3 or higher hurricane is approaching the area, the team should be activated and deployed at the direction of the commander. The plan does not contain any specific guidance on how the district would respond to a levee failure. In closing, Madam Chairman, the legislative and regulatory framework guiding the operations and maintenance of the levees divides this responsibility among a number of partners, depending upon specific circumstances. Similarly, the responsibilities for emergency preparedness and response are dependent on a variety of laws and regulations. As a result, the regulatory framework for these activities is complex and oftentimes unclear. Whether these responsibilities were appropriately fulfilled or played a role in the flooding of New Orleans in the wake of Hurricane Katrina in August 2005 is still to be determined. For further information on this testimony, please contact Anu Mittal at (202) 512-3841 or mittala@gao.gov. Individuals making contributions to this testimony included Ed Zadjura, Assistant Director; Allison Bawden; Kevin Bray; Kisha Clark; John Delicath; Doreen Feldman; Jessica Marfurt; Barbara Patterson; and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The greatest natural threat posed to the New Orleans area is from hurricane-induced storm surges, waves, and rainfalls. To protect the area from this threat, the U.S. Army Corps of Engineers (Corps) was authorized by Congress in 1965 to design and construct a system of levees as part of the Lake Pontchartrain and Vicinity, Louisiana Hurricane Protection Project. Although federally authorized, the project was a joint federal, state, and local effort. For the levees in the project, the Corps was responsible for design and construction, with the federal government paying 70 percent of the costs and state and local interests paying 30 percent. As requested, GAO is providing information on the (1) level of protection authorized by Congress for the Lake Pontchartrain project; (2) authorities, roles, and responsibilities of the Corps and local sponsors with respect to the operation, maintenance, repair, replacement, and rehabilitation of the levees; (3) procedures in place to ensure that responsible parties maintain the levees in accordance with the authorized protection level; (4) authorities, roles, and responsibilities of the Corps and local parties when levees fail or are damaged; and (5) plans, capabilities, and activities that have been developed by the Corps to ensure an adequate emergency response when levees fail. GAO is not making any recommendations at this time. Congress authorized the Lake Pontchartrain project to protect the New Orleans area from flooding caused by storm surge or rainfall associated with a hurricane that had the chance of occurring once in 200 years. This was termed the "standard project hurricane" and represented the most severe combination of meteorological conditions considered reasonable for the region. As hurricanes are currently characterized, the Corps' standard project hurricane approximately equals a fast-moving category 3 hurricane, according to the Corps. Agreements between the Corps and four New Orleans levee districts--the local sponsors for the Lake Pontchartrain project--specify that the local sponsors are responsible for operation, maintenance, repair, replacement, and rehabilitation of the levees after construction of the project, or a project unit, is complete. Pre-Katrina, according to the Corps, most of the levees included in the Lake Pontchartrain project had been completed and turned over to the local sponsors for operations and maintenance. The Corps has authority to repair or rehabilitate completed flood control projects if (1) deficiencies are related to the original construction or (2) damage is caused by a flood and the project is active in the Corps' Rehabilitation Inspection Program. According to internal Corps regulations, federal funds cannot be used for regular operations and maintenance activities. Both local sponsors and the Corps are required to conduct regular inspections to ensure that levees are properly maintained. If the Corps finds that local sponsors are not properly maintaining the levees, internal Corps regulations outline a series of steps, such as notifying the governor or taking legal action, that the Corps can take to bring the local sponsor into compliance.
Corps inspection reports for 2001-2004 indicate that the completed portions of the Lake Pontchartrain project were maintained at an acceptable level. When levees fail or are damaged, the Corps has authority to provide a variety of emergency response actions. Specifically, the Corps is authorized to undertake emergency operations and rehabilitation activities and, if tasked by the Federal Emergency Management Agency, to provide disaster response, recovery, and mitigation assistance to state and local governments, as needed. In addition, a Department of Defense manual assigns responsibilities, prescribes procedures, and provides guidance for responding to hazards. State and local roles and responsibilities when levees fail are similar to the Corps' responsibilities and are described in federal regulations. The Corps is authorized to prepare for emergency response when levees fail by undertaking disaster preparedness, advance measures, and hazard mitigation activities. The Corps' New Orleans district has developed an all hazards emergency response plan for the New Orleans area.
Multilateral export control regimes are a key policy instrument in the overall U.S. strategy to combat the proliferation of weapons of mass destruction and conventional weapons. Current U.S. policy calls for enhanced multilateral cooperation of all key policy instruments—international treaties, multilateral export control regimes, export controls, and security assistance to other countries—in the war against terrorism and the proliferation of weapons of mass destruction. The multilateral export control regimes are voluntary, nonbinding arrangements among like-minded supplier countries that aim to prevent the spread of WMD and missile technology and equipment by restricting trade in sensitive technologies to peaceful purposes. While countries make no legally binding commitments in joining them, participating countries undertake a political commitment to abide by the goals and principles of the regime. The regimes operate on the basis of consensus of all members, and decisions on how to implement and interpret regime decisions are left to the national discretion of each member. The Australia Group, the MTCR, and the Nuclear Suppliers Group focus on trade related to WMD and their delivery systems and are referred to as WMD regimes; the Wassenaar Arrangement focuses on trade in conventional weapons and related dual-use items. Specifically, the Nuclear Suppliers Group and the Australia Group seek to ensure that trade in controlled items does not contribute to nuclear or to chemical or biological weapons proliferation (see table 1). The MTCR seeks to limit the spread of missile-related equipment and technology. The Wassenaar Arrangement aims to contribute to international security and stability by promoting greater responsibility and transparency in arms and sensitive dual-use goods and technology transfers. None of the regimes identify specific countries as targets. Collectively, however, the regimes strive to stop, slow, or increase the cost and risk of detection of efforts by countries of concern to acquire sensitive technologies and capabilities. As highlighted in table 1, three of the regimes were created in response to major proliferation events. The Nuclear Suppliers Group was established in 1975 after India—a nonnuclear weapons state—tested a nuclear explosive device in 1974 and was strengthened after the 1991 Gulf War and revelations of Iraq’s efforts to develop weapons of mass destruction. The Australia Group was established in 1985 as a response to the use of chemical weapons in the Iran-Iraq War, and the MTCR was established in 1987 in response to missile developments in the 1970s and 1980s. The Wassenaar Arrangement, in contrast, was created in 1996 after the dissolution of its Cold War predecessor to cover conventional technologies not addressed by the other regimes. The regimes also share overlapping memberships of between 33 and 40 states that are generally suppliers of sensitive technologies. All regimes except the Wassenaar Arrangement have added new members in recent years. Specifically, 28 states are members of all 4 regimes. Although China is a major supplier, it is not a member of any of these regimes but has declared its commitment to abide by the original 1987 guidelines and parameters of the MTCR. In addition, China has joined a multilateral nuclear export control group called the Zangger Committee. See appendix II for a list of the members of each regime.
All the regimes have discussed ways to address terrorism since September 11, 2001, and are still considering what more to do. For example, the Australia Group added counterterrorism as an official purpose of the regime and added a number of items to its control list in an effort to control the types of items that terrorists, rather than states, would seek in order to develop chemical or biological weapons. These items included toxins, biological equipment, and the transfer of knowledge. The Wassenaar Arrangement amended its guidelines to add language exhorting its members to continue to prevent the acquisition of conventional arms and technologies by terrorists. The Nuclear Suppliers Group is considering proposals to provide more guidance to governments for reviewing export licenses for terrorism-related concerns. In September 2002, MTCR members announced that they would further study how possible changes to the MTCR guidelines and control list may contribute to limiting the risk of controlled items and their technology falling into the hands of terrorists. Nonproliferation experts credit the Australia Group, the MTCR, the Nuclear Suppliers Group, and the Wassenaar Arrangement with several accomplishments. These include helping set international standards for limiting exports of sensitive items and helping stem proliferation in particular countries of concern. Because the multilateral export control regimes are only one of several policy tools that national governments use to combat the proliferation of weapons of mass destruction and advanced conventional weapons, it is difficult to attribute accomplishments exclusively to the regimes. Each regime has helped set international standards for how countries should control exports of sensitive technology. In 1978, the Nuclear Suppliers Group published the first guidelines governing exports of nuclear materials and equipment. These guidelines established several requirements for the members to apply, including the application of International Atomic Energy Agency safeguards at facilities using controlled nuclear-related items. Subsequently, in 1992, the Nuclear Suppliers Group broadened its guidelines by requiring that members insist on full-scope safeguards as a condition of supply for their nuclear exports. Full-scope safeguards require a country to have an agreement with the International Atomic Energy Agency to apply inspection and monitoring procedures at all nuclear facilities in a country, not only those receiving a particular nuclear item from a supplier. The Nuclear Suppliers Group, in the aftermath of the Persian Gulf War and revelations of Iraq’s nuclear weapons development program, also created a dual-use control regime, which established new controls for items with nuclear and nonnuclear uses that do not trigger a requirement for international safeguards when exported. In 1985, the Australia Group convened its first meeting to begin coordinating national policies aimed at restricting the proliferation of chemical weapons and related dual-use items. In addition, in June 2002, the Australia Group adopted a provision in its new guidelines for licensing sensitive chemical and biological items that made it the only regime to require its members to adopt “catch-all” controls. “Catch-all” controls authorize a government to require an export license for items that are not on control lists but that could contribute to a WMD proliferation program if exported.
Furthermore, the Australia Group added controls on technology associated with dual-use biological equipment, as well as controls on the intangible transfer of information and knowledge that could be used for chemical and biological weapons purposes. In 1987, the MTCR established guidelines and a control list of items as the first international standard for responsible missile-related exports, according to Department of State officials. In addition, from 1999 to 2001, the MTCR developed an International Code of Conduct intended to create a voluntary political commitment, open to all countries, against ballistic missile proliferation. The code—scheduled to be launched by the Netherlands on behalf of the European Union—is to consist of a set of broad principles, general commitments, and modest confidence-building measures and is intended to supplement the MTCR. In 1996, the Wassenaar Arrangement was established to succeed the Coordinating Committee for Multilateral Export Controls despite the opposition of some countries, according to nonproliferation specialists. One notable accomplishment of the Wassenaar Arrangement is that its members successfully developed an agreement on guidelines for shoulder-fired missiles, such as the Stinger, according to State Department officials. Although the former head of the Wassenaar Secretariat stated that the achievements of the Wassenaar Arrangement are limited and that "there have been no spectacular results," he stated that the situation would be worse without the Arrangement. The export control regimes have helped stop or slow WMD programs in countries of concern, or raised the cost of pursuing them, according to nonproliferation experts. For example, the MTCR helped reduce the number of countries with ballistic missile programs, according to Department of State officials. Specifically, the MTCR contributed to ending sensitive ballistic missile programs in a number of countries, including Argentina, Brazil, Egypt, South Africa, and Taiwan. The MTCR also may have helped slow missile development in India, Iran, Israel, North Korea, and Pakistan, whose missile programs might have been more advanced in the absence of the regime, according to nonproliferation experts. Similarly, the Nuclear Suppliers Group helped convince Argentina and Brazil to accept full-scope safeguards on their nuclear programs and end nuclear activities without safeguards in exchange for expanded access to international cooperation for peaceful nuclear purposes. The regimes generally have helped raise the costs to proliferators of acquiring sensitive technologies, according to nonproliferation experts. They have induced most major suppliers to control their exports responsibly and have significantly reduced the technology and equipment available to programs of concern, according to a Department of State official. Moreover, regime members have made it more difficult, more costly, and more time-consuming for proliferators to obtain the expertise and material needed to advance their programs. The regimes' efforts have caused delays, forced proliferators to use elaborate procurement networks, and forced them to rely on older, less effective technology, according to the official. For example, the Australia Group may have raised the cost of attaining an offensive chemical weapons capability by eliminating some sources of supply, according to nonproliferation experts and regime public statements.
They noted that, as a result, some countries of concern have stopped pursuing the acquisition of chemical weapons. We identified several significant weaknesses in the activities of the regimes that could limit their ability to curb proliferation. Specifically, we found that regime members do not (1) share complete and timely export licensing information or (2) harmonize their export controls promptly to accord with regime decisions. We found deficiencies in the sharing of export licensing information among regime members. These deficiencies could hamper the ability of regime members to factor key information about potential proliferators into their export licensing decisions. For example, we found that regime members may not always share complete information in reporting export denials to the regimes. In addition, Wassenaar Arrangement members are generally slow in reporting information on export denials. Other regimes have not set deadlines for their members to report such information and cannot determine how long it takes members to report. Furthermore, three regimes do not collect export information that would enable members to consult with each other before approving licenses for exports that other members have denied. Members lack such information because most regimes do not expect members to report approvals of export licenses. Finally, only two regimes use electronic data-sharing systems to post and retrieve data. As a result, members of the regimes lacking this capability face significant delays in accessing information. All four regimes expect members to report denials of export licenses for controlled dual-use items. By sharing information about the licenses it has denied, a regime member helps other members avoid inadvertently undercutting its export licensing decisions and provides them with more complete information for reviewing questionable export license applications. Appendix III describes the export denial reporting procedures for each regime. Despite the expectation to report export denials, the United States did not notify the Australia Group between 1996 and 2002 that it had denied 27 licenses to export Australia Group-controlled items to such countries as China, India, and Syria. Fifteen of these licenses involved chemicals that could be used as precursors for toxic chemical agents, and the remaining licenses involved other chemical or biological equipment and technology. In contrast, the United States reported multiple denials to each of the other regimes in the same period (see fig. 1). The Department of State said that the United States was not required to report these denials to the Australia Group because the U.S. government denied them for reasons other than chemical and biological weapons nonproliferation purposes. However, officials of the Australia Group Secretariat disagreed with this assertion. They stated that Australia Group members should notify the Australia Group Chair whenever they deny licenses to export Australia Group-controlled items, including those controlled under another regime. Reporting such denials, they stated, would help the Australia Group maintain its effectiveness, ensure that other members' denials are not undercut, monitor and analyze export trends, and promote compliance with regime commitments.
Furthermore, in its technical comments on this report, the Department of State agreed that sharing information about export licenses is a valuable element of information-sharing efforts, but it could not explain why it did not share these 27 denials under the regime's broader information exchange activities. We found that member states may not be providing complete information regarding their export denials. We were unable to establish definitively why other nations have not reported denials because we do not have access to their export licensing data. However, our analysis of the denial reporting data available to us reveals that a significant percentage of each regime's membership has never reported any denials—ranging from 45 percent of the membership in one regime to 65 percent in another. U.S. and foreign officials could not explain why some regime members have never reported any denials. They speculated that some members do not do so because they (1) do not receive many export license applications for controlled items or (2) have not denied any applications. Although a 2000 analysis of one regime's denial reporting recommended an evaluation to determine why members were submitting few denial notifications, we saw no evidence that the regime had conducted such an evaluation. Also, several countries, including Australia, France, and Japan, informally discourage exporters from applying for licenses that those governments believe they likely would deny, according to U.S., foreign government, and regime officials. Because such "informal denials" are not reported to the regimes, they do not alert other regime members that a potential country of concern may be seeking an item. When denial notifications are aggregated across all regimes, three countries account for 66 percent of all notifications. The United States, relative to other regime members, has reported a large percentage of export denials to each of the regimes. Figure 2 shows the percentage of denial notifications by country, aggregated for all the regimes. All four regimes generally expect members to report denials of export licenses for controlled dual-use items in a timely fashion. Prompt export denial reporting can help ensure that a country of concern cannot "shop around" after being denied a license by a regime member. According to the chair of one regime, even a month's delay in sharing such information would provide a country of concern with more than enough time to shop around for another source of a sensitive item. The Wassenaar Arrangement is the only regime to have set deadlines for its members' denial reporting (see app. III), but reporting by members is slow. Members are expected to report denials of the more sensitive dual-use items on its control lists no later than 60 days after the date of the denial; denial notices for less sensitive items—over 75 percent of dual-use items on Wassenaar control lists—are expected to be reported in an aggregated format every 6 months. We found that the Wassenaar Arrangement's members submit these denial notifications on schedule only about 36 percent of the time. However, the Wassenaar Arrangement Secretariat stated that a valid picture of denial or other notifications can be gained only when all the notifications are entered into the database, an effort that is still in progress.
The Secretariat noted that any analysis of the notifications before this milestone is achieved would be flawed and open to revision once the data are entered correctly into the database in early 2003. U.S. government officials said that one reason U.S. denial reporting to the regimes may not be timely is that the U.S. government does not report export denials until after an exporter completes or forgoes an appeal of the denial. In response to our inquiries, officials from the Department of Commerce's Bureau of Industry and Security recommended to the Department of State in August 2002 that the United States report all denials to the appropriate regime at the time the exporter is first officially notified of the intent to deny the license application. In comments on a draft of this report, the Department of State said that it proposed to the Department of Commerce that Commerce either seek the exporter's agreement to forgo appeals or that the U.S. government circulate a "denial on inquiry" notification to the regime until the decision on the export application is final. Other regimes have not set deadlines for reporting and, furthermore, cannot determine how much time elapses between the date a government denies a license and the date it reports the denial to the regime, thus undermining the value of the reporting system. We could not determine the time it takes for Australia Group or Nuclear Suppliers Group members to report export denials because their members do not report dates of export denials uniformly. For example, a Nuclear Suppliers Group member may report its denial "notification date" as either (1) the actual date that it denied the export or (2) the date it transmitted the denial to the regime. Similarly, we could not determine precise MTCR denial reporting times because the MTCR export denial data maintained by the Department of State record only the month of the denial. U.S. and foreign government officials agreed that denial reporting for the regimes needs to be more timely to improve regime effectiveness. Access to information on a member's decisions to approve exports to nonmembers would help other regime members identify possible proliferation patterns and determine whether specific exports had undercut any of their license denials. However, only one regime, the Wassenaar Arrangement, expects its members to share information on approved export licenses. Because the Wassenaar Arrangement aims to prevent destabilizing accumulations of weapons and sensitive dual-use technologies in regions around the world, it gathers information about approved dual-use exports for items on its more sensitive control lists and about transfers of conventional weapons. However, according to U.S. officials, the Wassenaar Arrangement gathers this information only once every 6 months and aggregates it to such a degree that it cannot be used constructively to identify (1) undercuts of license denials, (2) items approved and transferred, and (3) recipients of the items. Consistent with this theme, we reported in April 2002 that approval reporting of certain semiconductor manufacturing equipment lacks enough detail to reveal the equipment's capabilities or intended end use and is of little practical use for determining the semiconductor manufacturing capability of the country to which the equipment is exported. The Australia Group, the MTCR, and the Nuclear Suppliers Group each have a formal "no undercut" policy.
This policy sets an expectation that whenever a member reports an export denial to a regime, no other member will export a similar item without first consulting the member that denied it. However, these regimes do not share information on the licenses that they approve, making it difficult to assess whether the "no undercut" expectation is being met. To address this weakness, the United States proposed in May 2002 that the Nuclear Suppliers Group require its members to begin reporting approval information. Department of State officials said members discussed the feasibility of this proposal in September 2002 but could not say if or when it would be implemented because of members' concerns about reporting proprietary information. One Department of State official said that the regimes do not need to share this information to identify undercuts because the members are "self-policing" and their adherence to the "no undercut" policy is based on trust. Two regimes, the Nuclear Suppliers Group and the Wassenaar Arrangement, have established electronic information systems for nearly instant, worldwide communications that can help improve the timeliness and quality of information sharing, especially export denial reporting. The Nuclear Suppliers Group Information Sharing System (NISS) was originally set up around 1993, according to a Los Alamos National Laboratory official. The Wassenaar Arrangement Information System (WAIS), operational for most members since January 2002, allows participating countries to post export denial notices almost as soon as the participating government issues the denial. The Australia Group has investigated setting up its own system and, in 2001, inquired about the NISS. However, it has not made a commitment to move to an electronic information and document management system. Department of Commerce officials stated that the U.S. government has some concerns about the security of information on an electronic system for this regime and the MTCR, since much of the data to be shared would be classified. As shown in table 2, the average time for regimes to distribute export denials, once received from their members, ranges from as little as 2 days for the Nuclear Suppliers Group to as much as 30 days for the MTCR. The members of both the Nuclear Suppliers Group and the Wassenaar Arrangement have the capability to post their denial notices as soon as member governments officially issue the denials. We observed significant differences between the regimes that use electronic information systems and those that do not, both in the timeliness of report distribution to the membership and in the speed of data retrieval. State Department officials retrieved documents and export denial notifications that we requested from the NISS and the WAIS electronic systems in minutes. In contrast, State officials provided us with the same type of information for the MTCR and the Australia Group 6 months after we requested it. State officials said that this took so long because they had to manually search drawers of paper files and because new staff could not readily find documents filed by staff who were on leave. In addition, Department of State and Energy officials showed us how they could search the NISS in various ways to identify patterns of proliferators and evidence of countries of concern shopping for controlled items among several regime members. The electronic information systems also provide more uniform data.
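To make these points concrete—why structured electronic submission yields uniform, analyzable records (a point elaborated below) and how figures such as the never-reported and on-time shares cited earlier could be computed—the following minimal Python sketch may help. The record fields, validation rule, and 60-day deadline parameter are illustrative assumptions, not the actual WAIS or NISS schema.

from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified denial-notification record. Field names are
# illustrative only; they do not reflect the actual WAIS or NISS data model.
@dataclass
class DenialNotice:
    member: str          # reporting government
    item_code: str       # control-list entry for the denied item
    destination: str     # intended recipient country
    denial_date: date    # date the license was denied
    report_date: date    # date the denial was reported to the regime

    def validate(self) -> None:
        # An electronic form can refuse submission until every field is
        # completed, which is why electronic reporting yields comparable data.
        if not all([self.member, self.item_code, self.destination]):
            raise ValueError("all fields must be completed before submission")
        if self.report_date < self.denial_date:
            raise ValueError("report date cannot precede denial date")

def reporting_stats(notices: list[DenialNotice], members: set[str],
                    deadline_days: int = 60) -> tuple[float, float]:
    """Return the share of members that never reported a denial and the
    share of notices reported on time (60 days mirrors the Wassenaar
    deadline for sensitive items)."""
    reporters = {n.member for n in notices}
    never_reported = 1 - len(reporters & members) / len(members)
    on_time = (sum((n.report_date - n.denial_date).days <= deadline_days
                   for n in notices) / len(notices)) if notices else 0.0
    return never_reported, on_time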
Before the WAIS, the use of paper systems meant that denial reports arrived at the Wassenaar Arrangement Secretariat in a variety of formats, with individual data fields often presented in noncomparable ways among members, according to government and Secretariat officials. Member countries are more likely to provide uniform and comparable data that can be more easily analyzed because the electronic forms have reporting fields that must be filled in correctly before submission. Harmonization, a goal shared by each regime, refers to efforts by regime members to review and agree upon common control lists of sensitive items and technologies and common approaches to controlling them. (See app. IV for a description of the control lists developed by each regime and examples of the items on each list.) However, several factors undermine this goal. First, regime members may control an item differently because some members take significantly longer than others to adopt agreed-upon regime changes into their national laws or regulations. In addition, only one regime tracks whether its members have adopted regime control list changes; none of the regimes tracks when these changes are implemented. Second, in some cases, significant differences in how members implement the same export controls may reduce the effectiveness of common nonproliferation efforts. Finally, export controls cannot be applied consistently until countries joining regimes have effective export control systems in place. According to the U.S. government, at least three countries—Argentina, Belarus, and Russia—did not have effective control systems in place when they became members of certain regimes. Each regime member is expected to adopt and implement control list changes consistently, subject to its national discretion. If agreed-upon changes to control lists are not adopted at the same time, proliferators could exploit these time lags to obtain sensitive technologies by focusing on regime members that are slowest to incorporate the changes. Only the Australia Group attempts to identify whether members have adopted the most recently agreed-upon controls in their domestic regulations and laws, although it does not track the dates on which members do so. Based on our analysis, we found some significant differences among members in the time taken to adopt agreed-upon control list changes into their national laws or regulations. In the case of the Wassenaar Arrangement, the European Union adopted December 2000 plenary changes within 3 months, whereas the United States did not adopt all of these changes into export regulations until 15 months later (March 2002). In comparison, the European Union adopted Nuclear Suppliers Group plenary changes within a year of the plenary, and Japan adopted regulations for all regime changes within 6 months. Department of Commerce officials explained that the U.S. regulatory process is more comprehensive and thorough than that of some other regime members, thus requiring a longer time for the United States to adopt regime changes. Other regime members adopt the texts of regime control changes verbatim, while the United States also explains in its regulations the purpose behind the regulatory change and how it will affect the exporter, according to the officials. Once regime members have adopted similar changes to export control lists or practices, these changes can be undermined by variations in how member states implement them.
The Assistant Secretary of Commerce for Export Administration emphasized the importance of minimizing these differences when he said in October 2001 that member countries implement agreed-upon control lists differently, with a substantial degree of national discretion. For example, the United States has said that its export controls on high-performance computers, which use a measure of computer performance to indicate when an export license is required, are consistent with those of the Wassenaar Arrangement. Both the U.S. and Wassenaar Arrangement control thresholds are set at 28,000 millions of theoretical operations per second (MTOPS); computers above this level would require a license for export. However, the United States also maintains a "license exception" to this threshold. In January 2002, the President announced that this license exception threshold would increase from 85,000 MTOPS to 190,000 MTOPS; under the exception, only computers above the higher threshold would require a license for export to countries such as China, India, and Russia. As a result of this practice and of U.S. resistance to members' efforts to remove or revise the current performance measure for computers, several Wassenaar members have accused the United States of unilateral action at odds with regime harmonization goals. Department of State officials expressed concern that continued U.S. resistance without adequate justification would cause some countries to unilaterally remove items from their national control lists. According to the Department of Commerce, the United States and the other Wassenaar Arrangement members agreed to raise computer control levels from 28,000 to 190,000 MTOPS at a September 2002 Wassenaar meeting, 8 months after the United States had changed its license exception control level. Differences in how members implement agreed-upon export controls may become an issue for the Australia Group as well. The Australia Group's June 2002 plenary agreed to require its members to adopt catch-all controls and to make this requirement an attachment to its new guidelines. The United States has encouraged countries to adopt catch-all controls as a way of strengthening nonproliferation efforts. However, while most members of the WMD regimes have adopted catch-all controls, significant differences in how members implement them raise questions about their effectiveness in stopping proliferation. For example, under some countries' catch-all controls, the government must show that an exporter had absolute knowledge that an export would support a WMD proliferation activity in order to require a license or to prosecute a violation of law. Under other countries' catch-all controls, such as those of the United States, the government needs to show only that an exporter knew or suspected that an export would support a WMD proliferation activity. A 2001 Department of Commerce report affirmed that different countries' standards complicate law enforcement cooperation, and Commerce noted that even the United States faces challenges in enforcing catch-all controls on dual-use goods because it is difficult to detect, investigate, and prosecute cases under the U.S. catch-all provision standard.
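The licensing logic described above—a control-list threshold, a destination-specific license exception, and a knowledge-based catch-all test—can be summarized in a short sketch. The following Python fragment is a simplified illustration of those decision rules; the thresholds and country examples come from the text, but the structure of an actual national licensing system (country tiers, end-use checks, and so on) is far more complex.

# Thresholds and destination examples are taken from the text; everything
# else is an illustrative simplification, not actual licensing policy.
CONTROL_THRESHOLD_MTOPS = 28_000      # Wassenaar/U.S. control-list level
LICENSE_EXCEPTION_MTOPS = 190_000     # U.S. license exception level, Jan. 2002
EXCEPTION_DESTINATIONS = {"China", "India", "Russia"}

def computer_license_required(mtops: int, destination: str) -> bool:
    """A license is needed above the control threshold, unless the U.S.
    license exception raises the effective threshold for the destination."""
    if mtops <= CONTROL_THRESHOLD_MTOPS:
        return False
    if destination in EXCEPTION_DESTINATIONS:
        return mtops > LICENSE_EXCEPTION_MTOPS
    return True

def catch_all_license_required(on_control_list: bool,
                               knows_or_suspects_wmd_use: bool) -> bool:
    """U.S.-style 'knew or suspected' catch-all standard: even an unlisted
    item requires a license if the exporter suspects a WMD end use. A
    stricter 'absolute knowledge' standard would replace the second flag
    with proof of actual knowledge."""
    return on_control_list or knows_or_suspects_wmd_use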
The regimes consider the implementation of an effective national export control system a criterion for a country's membership eligibility but in three cases have admitted members that did not meet this criterion. (See app. V for some factors to consider when evaluating a prospective member of each regime.) Without an effective export control system, members cannot ensure that they are implementing agreed-upon controls consistently. While regime bodies, such as the chair or secretariat, do not evaluate the export control systems of prospective members, individual members, including the United States, have done so for each prospective member. Russia, Argentina, and Belarus did not have effective export control systems in place at the time of their admission to regimes, according to U.S. government officials and documents. Russia does not yet have an effective export control system in place, according to U.S. government officials, even though it is a member of three regimes. The Soviet Union, Russia's predecessor, was a founding member of the Nuclear Suppliers Group. Russia also joined the Wassenaar Arrangement when it was established in 1996. In June 2002, the Assistant Secretary of State for Nonproliferation stated that Russia's implementation and enforcement of its export controls remain a cause of concern. An unclassified January 2002 report by the Director of Central Intelligence stated that passing export control legislation will have little impact on key weaknesses of the Russian export control system, such as weak enforcement and insufficient penalties for violations. According to some U.S. and foreign government officials, it is better to have certain countries such as Russia in the regimes in order to influence their export controls and behavior or for other foreign policy reasons. Argentina did not have an effective export control system in place when it joined the Wassenaar Arrangement in 1996. Recognizing that Argentina did not have export controls over dual-use items and had not adopted the Wassenaar Arrangement control list as late as 1999, the United States urged Argentina to pass appropriate legislation. Argentina eventually passed legislation to adopt dual-use export controls, which went into effect in June 2000. Belarus had export controls in place but was not adequately enforcing them when it became a member of the Nuclear Suppliers Group in fiscal year 2000, according to the Department of State. State noted that, at the time Belarus joined that regime, it also had concerns that Belarus was not adequately enforcing certain conventional arms-related controls. Regime members sometimes accept or reject a particular country's membership for political reasons, according to U.S. and foreign government officials. The U.S. government faces a number of interrelated obstacles in trying to strengthen the multilateral export control regimes. First, and most significant, efforts to strengthen the regimes have been hampered by a requirement to reach consensus among all members on every decision and by the inability to enforce compliance with commitments in arrangements that are voluntary and nonbinding. Second, the rapid pace of technological change and the growing trade in sensitive items among WMD proliferators complicate efforts to harmonize export controls and keep control lists current. Third, the U.S. government has no specified or agreed-upon criteria for assessing the regimes' effectiveness. U.S.
and foreign government officials and nonproliferation experts all stressed that the regimes are consensus-based organizations and depend on the like-mindedness or cohesion of their members to be effective. However, the regimes have found it especially difficult to reach consensus on such issues as making changes to procedures and control lists and identifying countries to be targets of the regimes. In addition, many U.S. and foreign government officials said that members' compliance with regime commitments cannot be enforced because the multilateral export control regimes are voluntary, nonbinding groups. A single member's objection can stalemate a regime decision. For example, Russia has impeded consensus on several issues in the three regimes to which it belongs—the MTCR, the Nuclear Suppliers Group, and the Wassenaar Arrangement—according to several nonproliferation experts. These issues included broadening the information in denial notifications and obtaining greater transparency into deliveries of small arms and light weapons. One government stated that it is easier to reach consensus in the Australia Group because Russia is not a member. On the other hand, State and Commerce Department officials said that the need for consensus-based decision-making can work to the U.S. advantage because it prevents a regime from adopting proposals that the United States might oppose. The regimes also have found it difficult to reach consensus on designating countries that could be targets of the regimes and, therefore, would not receive exports listed on the regimes' control lists. Some members support the idea of designating target countries and have proposed countries to be named, while other members disagree. For example, repeated efforts by Wassenaar Arrangement members to identify specific countries of concern or even regions of unrest have failed because of a lack of consensus. Instead, each regime member determines which countries are of concern to it when implementing its national export controls. Nonetheless, according to the Department of State, there is broad agreement that states whose behavior is a cause for serious concern—Iran, Iraq, Libya, and North Korea—will be dealt with firmly by Wassenaar members. As an alternative to designating regime targets, the Nuclear Suppliers Group has established conditions for the supply of nuclear and nuclear-related dual-use items. For example, members of the regime have agreed to supply nuclear equipment and material only to countries that have in place a full-scope safeguards agreement with the International Atomic Energy Agency covering all facilities in the country and only upon assurances that adequate physical protection will be maintained for the supplied items. Thus, countries that do not meet these conditions in effect become targets of the regime. The Under Secretary of State for Arms Control and International Security stated in May 2002 that U.S. nonproliferation policy goals are to stop the development of WMD and ensure compliance with existing arms control and nonproliferation treaties and commitments. Noncompliance can undermine the efficacy and legitimacy of these regimes, according to the Under Secretary. However, the regimes do not have their own means to monitor and enforce members' adherence to regime commitments. Instead, they rely on diplomatic pressure to influence compliance or on occasional intelligence information from member states to identify activities that might be inconsistent with regime commitments.
According to the Department of State, in the clearest and most serious example of a violation of regime nonproliferation commitments, Russia shipped nuclear fuel to the Tarapur power reactors in India in January 2001. As a Nuclear Suppliers Group member, Russia is committed to refraining from nuclear cooperation with any country that lacks comprehensive International Atomic Energy Agency safeguards on all its nuclear facilities. India, which has a nuclear weapons program, does not have such safeguards on all its facilities, although it does have safeguards on the Tarapur reactors. Although Russia justified the fuel supply to Tarapur based on a safety exemption to this commitment, 32 of 34 Nuclear Suppliers Group members declared at a special meeting in December 2000 that this shipment would be inconsistent with Russia's commitments to the Nuclear Suppliers Group. The fuel transfer nonetheless occurred. Several countries and the European Union sent demarches (diplomatic notes) to Russia protesting the sale. The Department of State issued a February 2001 public statement that "condemned Russia's disregard of its Nuclear Supplier Group commitments and urged Russia to live up to its nonproliferation obligations." Based on publicly available information, we found examples of other questionable Russian transfers, involving nuclear exports to Iran and missile technology exports to Iran, India, China, and Libya. While these cases were more ambiguous than the Tarapur case, they also raise concerns over Russia's compliance with its commitments. In addition, the Department of State provided at least 34 demarches to 11 other members of the regimes from 1998 to 2002, questioning whether their proposed exports were consistent with regime commitments. Several U.S. and foreign government officials said that members' compliance with regime commitments cannot be enforced for several reasons. First, according to the Department of State, it is difficult to apply the concept of enforcement to informal political commitments, such as the export control regimes. Second, members' commitments to the regimes are sometimes vague or left to the interpretation of each member state. Third, officials of several governments stated that it is difficult to identify when a foreign government is not complying with its commitments because knowing whether an illicit technology transfer occurred with or without prior government knowledge is sometimes impossible. Fourth, it is difficult to encourage countries to comply with their regime commitments because there is disagreement over which states are countries of concern, according to some foreign government officials. The rapid pace of technological change in a globalized world economy complicates efforts to keep control lists current because the lists must be updated more frequently to remain relevant. The current world economy is characterized by rapid technological innovation, globalization of business, and the internationalization of the industrial base, according to a 2001 study. The globalization of defense and commercial production activities has made advanced military capabilities and related commercial goods and technologies more widely available to many countries and subnational groups. This has narrowed the technology gap between the United States and other nations. Rapidly evolving technologies have particularly affected such areas as high-performance computers, semiconductor manufacturing, and information technologies. Several industry representatives and U.S.
and foreign government officials said that legislative and regulatory changes to modify control lists or remove items that can no longer be effectively controlled cannot keep pace with rapid technological change. As a result, the Wassenaar Arrangement, which seeks to control items in these technologies, has experienced prolonged discussion and disagreement over how or even whether to maintain such items as high-performance computers on its control lists. In addition, MTCR members have disagreed on revising the parameters of items to control, such as cruise missiles and unmanned aerial vehicles, allowing some members to seek controversial cruise missile sales to nonmembers. In addition, the trade of controlled items among nonmember countries with indigenous WMD programs undermines regime efforts to effectively restrict the export of sensitive goods and technology. Officials of the regimes' member governments expressed concern over "secondary proliferation," the growing capability of proliferators to develop WMD technologies and trade them with other countries of concern. Traditional recipients of WMD and missile technology such as India, Pakistan, North Korea, and Iran could emerge as new suppliers of technology and expertise to countries of concern, according to an unclassified 2002 report by the Director of Central Intelligence. These countries are not members of the multilateral export control regimes and do not adhere to regime standards. For example, North Korea has exported significant ballistic missile-related equipment, components, materials, and technical expertise to countries in South Asia, North Africa, and the Middle East, including Iran. In August 2002, the Under Secretary of State for Arms Control and International Security called North Korea "the world's foremost peddler of ballistic missile-related equipment, components, materials, and technical expertise." To counter this trend, officials of some regime member states expressed a desire to have all supplier countries join the regimes to encourage them to conform to regime standards and limit the proliferation of sensitive technologies. Other officials recognized, however, that such countries would not satisfy membership criteria and that admitting them would risk eroding the cohesiveness of the regimes' like-minded memberships. Neither the U.S. government, the regime member governments we contacted, nor the regimes themselves have established explicit criteria for assessing the regimes' effectiveness. Nonetheless, the U.S. government has an established policy of strengthening the effectiveness of the multilateral export control regimes. Various U.S. government officials, including the President and the under secretaries and assistant secretaries of State and Commerce, have stated this policy in public speeches or in written testimony before Congress. Furthermore, although neither these governments nor the regimes have evaluated the regimes' effectiveness, both have asserted that the regimes are effective. The importance of developing criteria to assess regime effectiveness is underscored by the Export Administration Act of 2001. Pending before the Congress at the time of this report, this act would require monitoring of and annual reporting on the regimes' effectiveness. Some U.S. and foreign government officials noted several possible limitations to an effort to assess the effectiveness of the regimes.
First, the multilateral export control regimes could not be assessed separately from the entire nonproliferation system, including national export enforcement systems and treaties. Second, demonstrating the effectiveness of the regimes would depend on being able to prove that the international community would be worse off without the regimes than with them. Third, several government officials and industry representatives noted that the mission, obligations, and political commitment of the Wassenaar Arrangement are not as clear as those of the other regimes; thus, assessing the effectiveness of this regime would be especially problematic. Notwithstanding these possible limitations, some foreign and U.S. government officials have proposed criteria for assessing the regimes' effectiveness. The proposed criteria include the following: clarity of each regime's mission, obligations, and political commitment; quality, quantity, and timeliness of regime information exchanged; strength of no-undercut provisions; willingness and ability of the regime to adapt its practices and common control lists to deal with new proliferation challenges; number of participants and level of their participation; level of compliance with regime standards; existence of guidelines for licensing and enforcement; and criticism from nonmembers—specifically proliferators—as evidence of a regime's effectiveness. Strengthening the multilateral export control regimes would help them better meet the U.S. national security objective of preventing the proliferation of weapons of mass destruction and conventional weapons to countries of concern and terrorists. A key function of each regime is sharing information related to proliferation. Yet the regimes often lack even the basic information that would allow them to assess whether their actions are working as intended. The regimes cannot effectively limit or monitor efforts by proliferators to acquire sensitive technology without more complete and timely reporting of licensing information and without more information on when and how members adopt and implement agreed-upon controls. Addressing these deficiencies would enhance the regimes' ability to accomplish their nonproliferation goals. However, the consensus-based and voluntary nature of these regimes poses organizational and political obstacles to implementing needed reforms. In addition, the lack of explicit criteria to assess regime effectiveness will make it difficult to determine the success of any effort to strengthen the regimes. While the regimes have adapted to changing threats or conditions in the past, their continued ability to do so may determine whether they remain viable in curbing proliferation in the future. However, the United States lacks a coherent strategy to address the regimes' common weaknesses and overcome the organizational and political obstacles to strengthening their effectiveness. To help the multilateral export control regimes achieve their stated goals and objectives, we recommend that the Secretary of State establish a strategy to work with other regime members to enhance the effectiveness of the multilateral export control regimes.
This strategy should identify steps regime members should take to (1) improve information sharing by establishing clearly defined standards for reporting export denials on a more complete and timely basis; sharing greater and more detailed information on approved exports of sensitive items to nonmember countries; and adopting automated information-sharing systems in the MTCR and the Australia Group to facilitate more timely information exchanges; (2) adopt and implement agreed-upon regime changes to export controls more consistently by setting guidelines for when each regime member should adopt control list changes into national laws and regulations and making this information available to all members; tracking when members adopt regime changes into national laws and regulations and making information on the timing and content of these changes available to the membership; establishing minimal standards for an effective national export control system; and periodically assessing each member's national export control system against these standards and reporting the results of these assessments to the regime; and (3) identify potential changes in policies and procedures by assessing alternative processes for reaching decisions; evaluating means for encouraging greater adherence to regime commitments; and conducting an annual self-assessment of regime effectiveness. To ensure that the United States is reporting all relevant information to the multilateral export control regimes, as expected, we recommend that the Secretary of State report U.S. denials of all export licenses for items controlled by a multilateral export control regime at the time the exporter is informed of the U.S. government's intent to deny an export license. To enable the U.S. government to better implement its policy of strengthening the effectiveness of the multilateral export control regimes, we also recommend that the Secretary of State establish criteria to assess the effectiveness of the multilateral export control regimes. We provided a draft of this report to the Secretaries of Commerce, Defense, Energy, and State for their review and comment. We received written comments from the Departments of Commerce, Energy, and State that are reprinted in appendixes VI, VII, and VIII. The Department of Defense declined to provide us with written comments. The Department of State also provided us with technical comments, which we incorporated as appropriate. The Department of Commerce agreed with our findings, conclusions, and recommendations. Commerce agreed that strengthening the multilateral export control regimes would serve U.S. national security objectives. In its written comments, the Department of Energy indicated that it had no comments on the report. The Department of State said that it will give due regard to our recommendation to work with other regime members to establish a strategy for enhancing the effectiveness of the multilateral export control regimes. State also agreed with our conclusion that the sharing of export licensing information is an important element of regime activity. However, State asserted that our report overall did not reveal any shortcomings of nonproliferation significance. In fact, our report highlighted the inability of the regimes to enforce Russia's compliance with its regime commitments, a matter of major nonproliferation significance. Our report also identified several specific weaknesses in the processes the regimes use to share information about each other's licensing decisions and to implement regime decisions.
Weaknesses in regime processes undermine the regimes' effectiveness in meeting nonproliferation purposes. We are sending copies of this report to appropriate congressional committees and to the Secretary of Commerce, Secretary of Defense, Secretary of Energy, and Secretary of State. Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8979 if you or your staff have any questions concerning this report. A GAO contact and staff acknowledgments are listed in appendix IX. To describe the accomplishments of the multilateral export control regimes, we reviewed analyses and documents prepared by the Departments of State, Commerce, and Defense; the intelligence community; and nonproliferation specialists in academia. We also reviewed the database of the Monterey Institute of International Studies. In addition, we reviewed plenary, working group, and information exchange documents of the Australia Group, the MTCR, the Nuclear Suppliers Group, and the Wassenaar Arrangement. We met with officials of the Departments of State, Commerce, Defense, and Energy and the intelligence community in Washington, D.C.; the Department of Energy's Los Alamos National Laboratory in Los Alamos, New Mexico; and the Center for Nonproliferation Studies of the Monterey Institute of International Studies in Monterey, California. We also met with officials of the governments of Australia, Austria, Canada, France, Japan, the Netherlands, New Zealand, and the United Kingdom. In addition, we received written responses to questions we provided to the governments of Canada, Japan, Germany, Russia, and Hong Kong. We also met with representatives of the points of contact for the MTCR in Paris, France, and the Nuclear Suppliers Group in Vienna, Austria, as well as the Secretariats of the Australia Group in Canberra, Australia, and of the Wassenaar Arrangement, including its Director General, in Vienna, Austria. In addition, we interviewed representatives of American companies from the Alliance for Network Security, the American Electronics Association, the Association for Manufacturing Technology, the American Chemistry Council, and the Nuclear Energy Institute. We also met with representatives of the International Atomic Energy Agency and the Zangger Committee in Vienna, Austria, and of the Organization for the Prohibition of Chemical Weapons in The Hague, the Netherlands, to identify the relationship between the regimes and those organizations. To assess weaknesses of the multilateral export control regimes, we analyzed the documents and studies noted above and met with officials and representatives of the previously mentioned governments and organizations. In addition, we reviewed listings of denial notifications for all the regimes and approval notifications for the Wassenaar Arrangement to try to identify the timeliness and completeness of reporting. In trying to identify the amount of time members take to report denials to each regime, we learned that the regimes do not maintain these data in a manner that allows such an analysis. The Department of State confirmed this limitation in July 2002. We analyzed and compared both the means and the frequency with which regime points of contact or secretariats distribute export denial notifications—and, in the case of the Wassenaar Arrangement, approval notifications—to the membership.
We also identified which countries have and have not reported export denials and the percentage of denials each reporting country accounted for. We also reviewed regulations of the governments of the United States, Japan, and the European Union to determine the time it took them to incorporate the most recent regime changes into regulations. To identify obstacles that the United States faces in strengthening the regimes, we analyzed the documents and studies noted above and met with officials and representatives of the noted governments and organizations. We could not fully assess how regime members comply with their commitments or how well efforts to encourage compliance work because of limited access to key Department of State data. Even though 22 U.S. Code Section 2593a requires a report to the Congress each January discussing countries' compliance with various arms control agreements, including the MTCR, the 2000 and 2001 reports had not yet been provided to Congress, and the Department of State declined to give us access to the report drafts. Consequently, we could not review the reports to determine how other countries are complying with this regime. In addition, we could not fully assess how diplomatic pressure has worked overall to stop questionable transfers of items to nonmember countries for two reasons. The Department of State could not tell us (1) how many demarches in total the United States has provided to other regime members and (2) whether the questionable transfers that the demarches protested were or were not stopped in each case. Although State provided us with about 100 demarches concerning questionable exports from 1998 to 2002, officials from the Departments of Defense and Commerce indicated that the United States delivered an estimated 100 demarches to MTCR members in 2001 alone. We channeled all requests for regime information and documentation through the Department of State and experienced significant delays in obtaining these documents from the Department. After presenting State with an initial document request in September 2001, we reduced the scope of that request in October 2001 to accommodate State's concerns about the size of the request. In response to the revised request, one State office provided requested documents by December 2001 and was prompt in fulfilling our subsequent requests for documents. Nonetheless, we continued to experience delays from all other State offices in receiving access to documents over the next 7 months. State officials attributed these delays to the Department's time-consuming process of reviewing every document multiple times before agreeing to provide us with access. We performed our work from August 2001 to September 2002 in accordance with generally accepted government auditing standards. Each regime and treaty-related organization maintains lists of sensitive items to be monitored and controlled, but the purpose and composition of each list differs. The Chemical Weapons Convention list of chemicals was intended to be as comprehensive as possible, primarily in relation to countries' declarations and destruction of their chemical weapons, and its provisions on transfers have a different goal from those of the Australia Group, according to officials of the Organization for the Prohibition of Chemical Weapons.
Also, 20 Australia Group chemicals are not on the Chemical Weapons Convention list, although families of chemicals are listed. Finally, the Chemical Weapons Convention list does not focus on chemical equipment and transfers, but the Australia Group list does. Each regime also weighs several factors when evaluating a prospective member. For the MTCR, these include whether a prospective new member has a legally based, effective export control system that puts into effect the MTCR Guidelines and administers and enforces such controls effectively; demonstrates a sustained and sustainable commitment to nonproliferation; and would strengthen international nonproliferation efforts. For the Nuclear Suppliers Group, the factors include enforcement of a legally based domestic export control system that gives effect to the commitment to act in accordance with the Nuclear Suppliers Group Guidelines; the ability to supply items (including items in transit) covered by the annexes to Parts 1 and 2 of the Guidelines; adherence to the Guidelines and action in accordance with them; adherence to and compliance with one or more of various nonproliferation treaties, including the Nuclear Nonproliferation Treaty or an equivalent international nuclear nonproliferation agreement; and support of international efforts toward nonproliferation of WMD and of their delivery vehicles. For the Wassenaar Arrangement, the factors include a state's adherence to fully effective export controls; whether a state is a producer or exporter of arms or industrial equipment, respectively; and a state's nonproliferation policies, control lists, and, where applicable, guidelines of the Nuclear Suppliers Group, the MTCR, and the Australia Group, as well as its adherence to the Nuclear Nonproliferation Treaty, the Biological and Toxicological Weapons Convention, the Chemical Weapons Convention, and (where applicable) START I, including the Lisbon Protocol. The following are GAO's comments on the Department of State letter dated October 16, 2002. 1. The Department provided examples of the commitments that governments make when they become members of the multilateral export control regimes. However, simply listing the types of export control commitments these members make says nothing about how these commitments are implemented in practice and whether they are effective. Therefore, it is unclear how State can contend that regime members are effectively implementing regime commitments. 2. We agree with State that proliferators must often look to nonregime suppliers to obtain materials and equipment, and we discussed this issue in our report. 3. We agree that it is important for regime members to share information on trends in proliferation, procurements, the use of front companies, and end users of concern. We also believe that it is important to collect and share comprehensive licensing information on sensitive export transfers and denials—the building blocks for assessing these broader trends. 4. The Department stated that it sees no utility in sharing increased information about approvals of exports to nonregime members. This statement is inconsistent with its current policy and practice. For example, on October 11, 2002, the Deputy Assistant Secretary of State for Nonproliferation stated that regime members should share more information on export approvals to facilitate monitoring of regime members' compliance with their "no undercut" commitments. Moreover, the U.S. government has led efforts to increase this type of information sharing in two regimes. The Wassenaar Arrangement already expects members to share information on export approvals, and the U.S.
government submitted a proposal to the Nuclear Suppliers Group in 2002 that would provide for reporting export approvals. 5. None of the regimes systematically tracks the time regime members take to implement agreed-upon changes in their control lists. In the absence of this tracking, State cannot demonstrate that time lags have not resulted in proliferators' obtaining controlled items or that the time lags could not contribute to proliferation. 6. We agree that catch-all controls have been a critical factor in inhibiting proliferators' attempts to acquire items not on regime control lists. However, as noted in our report, different countries' standards hamper effective implementation and complicate law enforcement cooperation. 7. Our report already acknowledges that decisions based on consensus are a double-edged sword. As we noted, while the need for consensus hampers the adoption of important decisions, it can also prevent regime members from adopting a position that the United States opposes. 8. During our review, we did not identify any systematic or formal assessments of regime effectiveness routinely conducted by the regimes or their members. Rather, regime statements sometimes assert the regimes' effectiveness, but, as we reported, the regimes have established no agreed-upon criteria against which these assertions can be assessed. In addition to the individual named above, Jeffrey D. Phillips, Eugene Beye, Lynn Cothern, Nanette Ryen, and Richard Seldin made key contributions to this report.
Multilateral export control regimes are consensus-based, voluntary arrangements of supplier countries that produce technologies useful in developing weapons of mass destruction or conventional weapons. The regimes aim to restrict trade in these technologies to keep them from proliferating states or terrorists. The United States seeks to improve the effectiveness of these regimes. GAO was asked to (1) assess weaknesses of the four regimes and (2) identify obstacles faced in trying to strengthen them. GAO found weaknesses that impede the ability of the multilateral export control regimes to achieve their nonproliferation goals. A key function of each regime is to share information related to proliferation. Yet the regimes often lack even basic information that would allow them to assess whether their actions are having their intended results. The regimes cannot effectively limit or monitor efforts by countries of concern to acquire sensitive technology without more complete and timely reporting of licensing information and without information on when and how members adopt and implement agreed-upon export controls. For example, GAO confirmed that at least one member, the United States, has not reported its denial of 27 export licenses for items controlled by the Australia Group. Several obstacles limit the options available to the United States in strengthening the effectiveness of multilateral export control regimes. The requirement to achieve consensus in each regime allows even a single member to block needed reforms. Because the regimes are voluntary in nature, they cannot enforce members' compliance with regime commitments. For example, Russia exported nuclear fuel to India in clear violation of its commitments, threatening the viability of one regime. The regimes have adapted to changing threats in the past. Their continued ability to do so will determine whether they remain viable in curbing proliferation in the future.
In 1998, following a Presidential call for VA and DOD to start developing a "comprehensive, life-long medical record for each service member," the two departments began a joint course of action toward achieving the capability to share patient health information for active duty military personnel and veterans. As their first initiative, undertaken in that year, the Government Computer-Based Patient Record (GCPR) project was envisioned as an electronic interface that would allow physicians and other authorized users at VA and DOD health facilities to access data from any of the other agencies' health information systems. The interface was expected to compile requested patient information in a virtual record that could be displayed on a user's computer screen. Our prior reviews of the GCPR project determined that the lack of a lead entity, clear mission, and detailed planning to achieve that mission made it difficult to monitor progress, identify project risks, and develop appropriate contingency plans. Accordingly, reporting on this project in April 2001 and again in June 2002, we made several recommendations to help strengthen the management and oversight of GCPR. Specifically, in 2001 we recommended that the participating agencies (1) designate a lead entity with final decision-making authority and establish a clear line of authority for the GCPR project, and (2) create comprehensive and coordinated plans that included an agreed-upon mission and clear goals, objectives, and performance measures, to ensure that the agencies could share comprehensive, meaningful, accurate, and secure patient health care data. In 2002 we recommended that the participating agencies revise the original goals and objectives of the project to align with their current strategy, commit the executive support necessary to adequately manage the project, and ensure that it followed sound project management principles. VA and DOD took specific measures in response to our recommendations for enhancing overall management and accountability of the project. By July 2002, VA and DOD had revised their strategy and had made progress toward electronically sharing patient health data. The two departments had renamed the project the Federal Health Information Exchange (FHIE) program and, consistent with our prior recommendation, had finalized a memorandum of agreement designating VA as the lead entity for implementing the program. This agreement also established FHIE as a joint activity that would allow the exchange of health care information in two phases. The first phase, completed in mid-July 2002, enabled the one-way transfer of data from DOD's existing health information system (the Composite Health Care System) to a separate database that VA clinicians could access. A second phase, finalized this past March, completed VA's and DOD's efforts to add to the base of patient health information available to VA clinicians via this one-way sharing capability. According to program officials, FHIE is now fully operational and is showing positive results by providing a wide range of health care information to enable clinicians to make more informed decisions regarding the care of veterans and to facilitate processing disability claims. The officials stated that the departments have now begun leveraging the FHIE infrastructure to achieve interim exchanges of health information on a limited basis, using existing health systems at joint VA/DOD facilities.
The departments reported total GCPR/FHIE costs of about $85 million through fiscal year 2003. The revised strategy also envisioned achieving a longer term, two-way exchange of health information between DOD and VA. Known as HealthePeople (Federal), this initiative is premised upon the departments' development of a common health information architecture comprising standardized data, communications, security, and high-performance health information systems. The joint effort is expected to result in the secured sharing of health data required by VA's and DOD's health care providers between systems that each department is currently developing—DOD's Composite Health Care System (CHCS) II and VA's HealtheVet VistA. DOD began developing CHCS II in 1997 and has completed its associated clinical data repository—a key component for the planned electronic interface. The department expects to complete deployment of all of its major system capabilities by September 2008. It reported expenditures of about $464 million for the system through fiscal year 2003. VA began work on HealtheVet VistA and its associated health data repository in 2001, and expects to complete all six initiatives comprising this system in 2012. VA reported spending about $120 million on HealtheVet VistA through fiscal year 2003. Under the HealthePeople (Federal) initiative, VA and DOD envision that, upon entering military service, a health record for the service member will be created and stored in DOD's CHCS II clinical data repository. The record will be updated as the service member receives medical care. When the individual separates from active duty and, if eligible, seeks medical care at a VA facility, VA will then create a medical record for the individual, which will be stored in its health data repository. Upon viewing the medical record, the VA clinician would be alerted and provided with access to the individual's clinical information residing in DOD's repository. In the same manner, when a veteran seeks medical care at a military treatment facility, the attending DOD clinician would be alerted and provided with access to the health information in VA's repository. According to the departments, this planned approach would make virtual medical records displaying all available patient health information from the two repositories accessible to both departments' clinicians. VA officials anticipated being able to exchange some degree of health information through an interface of their health data repository with DOD's clinical data repository by the end of 2005.
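The envisioned record flow can be made concrete with a minimal sketch of how a clinician-facing virtual record might be assembled from the two repositories. All names below (Repository, virtual_record, the sample patient identifier) are hypothetical illustrations, not the actual CHCS II or HealtheVet VistA interfaces, which the departments had not yet defined.

```python
from dataclasses import dataclass, field

@dataclass
class Repository:
    """Hypothetical stand-in for a department's repository (DOD's clinical
    data repository or VA's health data repository); not an actual CHCS II
    or HealtheVet VistA interface."""
    name: str
    records: dict = field(default_factory=dict)  # patient id -> list of entries

    def store(self, patient_id: str, entry: str) -> None:
        self.records.setdefault(patient_id, []).append(entry)

    def fetch(self, patient_id: str):
        return self.records.get(patient_id)

def virtual_record(patient_id: str, local: Repository, remote: Repository) -> dict:
    """Assemble a 'virtual medical record' as described in the text: the
    treating department's data plus, when the other department also holds
    data on the patient, an alert and access to that data."""
    record = {"patient": patient_id, local.name: local.fetch(patient_id) or []}
    remote_entries = remote.fetch(patient_id)
    if remote_entries:
        record["alert"] = f"{remote.name} also holds data for this patient"
        record[remote.name] = remote_entries
    return record

# A record accumulates in DOD's repository during active duty; after
# separation, a VA clinician sees data from both repositories.
dod, va = Repository("DOD"), Repository("VA")
dod.store("SM-001", "active duty immunization, 2002")
va.store("SM-001", "post-separation exam, 2004")
print(virtual_record("SM-001", local=va, remote=dod))
```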
As we have noted, achieving the longer term capability to exchange health data in a secure, two-way electronic format between new health information systems that VA and DOD are developing is a challenging and complex undertaking, in which success depends on having a clearly articulated architecture, or blueprint, defining how specific technologies will be used to deliver the capability. Developing, maintaining, and using an architecture is a best practice in engineering information systems and other technological solutions, articulating, for example, the systems and interface requirements, design specifications, and database descriptions for the manner in which the departments will electronically store, update, and transmit their data. Successfully carrying out the initiative also depends on the departments' instituting a highly disciplined approach to the project's management. Industry best practices and information technology project management principles stress the importance of accountability and sound planning for any project, particularly an interagency effort of the magnitude and complexity of this one. Such planning involves developing and using a project management plan that describes, among other factors, the project's scope, implementation strategy, lines of responsibility, resources, and estimated schedules for development and implementation. Currently, VA and DOD are proceeding with the development of their new health information systems and with the identification of standards that are essential to sharing common health data. DOD is deploying its first release of CHCS II functionality (a capability for integrating DOD clinical outpatient processes into a single patient record), with scheduled completion in June 2006. For its part, VA continues to work toward completing a prototype for the department's health data repository, scheduled for completion at the end of next month. In addition, as we reported in March, the departments have continued essential steps toward standardizing clinical data, having adopted data and message standards that are important for exchanging health information between disparate systems. Department officials also stated that they were proceeding with a pharmacy data prototype initiative, begun in March to satisfy a mandate of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, as an initial step toward achieving HealthePeople (Federal). The officials maintain that they expect to be positioned to begin exchanging patient health information between their new systems on a limited basis in the fall of 2005, identifying four categories of data that they expect to be able to exchange: outpatient pharmacy data, laboratory results, allergies, and patient demographics. However, VA's and DOD's approach to meeting this HealthePeople (Federal) goal is fraught with uncertainty and lacks a solid foundation for ensuring that this mission can be successfully accomplished. As we reported in March, the departments continue to lack an architecture detailing how they intend to use technology to achieve the two-way electronic data exchange capability. In discussing their intentions for developing such an architecture, VA's Deputy Chief Information Officer for Health stated last week that the departments do not expect to have an established architecture until a future unspecified date. He added that VA and DOD planned to take an incremental approach to determining the architecture and technological solution for the data exchange capability. He explained, for example, that they hope to gain from the pharmacy data prototype project an understanding of what technology is necessary and how it should be deployed to enable the two-way exchange of patient health records between their data repositories. VA and DOD reported approval of the contractor's technical requirements for the prototype last month and have a draft architecture for the prototype. They expect to complete the prototype in mid-September of this year. Although department officials consider the pharmacy data prototype to be an initial step toward achieving HealthePeople (Federal), how and to what extent the prototype will contribute to defining the electronic interface for a two-way data exchange between VA's and DOD's new health information systems are unclear.
Such prototypes, if accomplished successfully, can offer valuable contributions to the process of determining the technological solution for larger, more encompassing initiatives. However, ensuring the effective application of lessons learned from the prototype requires that VA and DOD have a well-defined strategy to show how this project will be integrated with the HealthePeople (Federal) initiative. Yet VA and DOD have not developed a strategy to articulate the integration approach, time frames, and resource requirements associated with implementing the prototype results to define the technological features of the two-way data exchange capability under HealthePeople (Federal). Until VA and DOD are able to determine the architecture and technological solution for achieving a secure electronic systems interface, they will lack assurance that the capability to begin electronically exchanging patient health information between their new systems in 2005 can be successfully accomplished. In addition to lacking an explicit architecture and technological solution to guide the development of the electronic data exchange capability, VA and DOD continue to be challenged in ensuring that this undertaking will be managed in a sound, disciplined manner. As was the situation in March, VA and DOD continue to lack a fully established project management structure for the HealthePeople (Federal) initiative. The relationships among the management entities involved with the initiative have not been clearly established, and no one entity has authority to make final project decisions binding on the other. As we noted during the March hearing, the departments' implementation of our recommendation that they establish a lead entity for the Government Computer-Based Patient Record project helped strengthen the overall accountability and management of that project and contributed to its successful accomplishment. Further, although the departments have designated a project manager and established a project plan defining the work tasks and management structure for the pharmacy prototype, they continue to lack a comprehensive and coordinated project plan for HealthePeople (Federal), to explain the technical and managerial processes that have been instituted to satisfy project requirements for this broader initiative. Such a plan would include, among other information, details on the authority and responsibility of each organizational unit; the work breakdown structure and schedule for all of the tasks to be performed in developing, testing, and deploying the electronic interface; as well as a security plan. The departments also have not instituted necessary project review milestones and measures to provide a basis for comprehensive management of the project at critical intervals, progressive decision making, or authorization of funding for each step in the development process. As a result, current plans for the development of the electronic data exchange capability between VA's and DOD's new health information systems do not offer a clear vision for the project or demonstrate sufficient attention to the effective day-to-day guidance of and accountability for the investments in and implementation of this capability. In discussing their management of HealthePeople (Federal), VA and DOD program officials stated this week that the departments had begun actions to develop a project plan and define the management structure for this initiative.
Given the significance of readily accessible health data for improving the quality of health care and disability claims processing for military members and veterans, we currently have a draft report at the departments for comment, in which we are recommending to the Secretaries of Veterans Affairs and Defense a number of actions for addressing the challenges to, and improving the likelihood of, successfully achieving the electronic two-way exchange of patient health information. In summary, VA's and DOD's pursuit of various initiatives to achieve the electronic sharing of patient health data represents an important step toward providing high-quality health care for active duty military personnel and veterans. Moreover, in undertaking HealthePeople (Federal), the departments have an opportunity to help lead the nation to a new frontier of health care delivery. However, the continued absence of an architecture and defined technological solution for an electronic interface for their new health information systems, coupled with the need for more comprehensive and coordinated management of the projects supporting the development of this capability, elevates the uncertainty about how VA and DOD intend to achieve this capability and in what time frame. Until these critical components have been put into place, the departments will continue to lack a convincing position regarding their approach to and progress toward achieving the HealthePeople (Federal) goals and, ultimately, risk jeopardizing the initiative's overall success. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For information about this testimony, please contact Linda D. Koontz, Director, Information Management Issues, at (202) 512-6240 or at koontzl@gao.gov, or Valerie C. Melvin, Assistant Director, at (202) 512-6304 or at melvinv@gao.gov. Other individuals making key contributions to this testimony include Barbara S. Oliver, J. Michael Resser, and Eric L. Trout.
Providing readily accessible health information on veterans and active duty military personnel is essential to ensuring that these individuals receive quality health care and assistance in adjudicating disability claims. Moreover, ready access to health information is consistent with the President's recently announced intention to provide electronic health records for most Americans within 10 years. In an attempt to improve the sharing of health information, the Departments of Veterans Affairs (VA) and Defense (DOD) have been working, since 1998, toward the ability to exchange electronic health records for use by veterans, military personnel, and their health care providers. In testimony before Congress last November and again this past March, GAO discussed the progress being made by the departments in this endeavor. While a measure of success has been achieved--the one-way transfer of health data from DOD to VA health care facilities--identifying the technical solution for a two-way exchange, as part of a longer term HealthePeople (Federal) initiative, has proven elusive. At Congress's request, GAO reported on its continuing review of the departments' progress toward this goal of an electronic two-way exchange of patient health records. VA and DOD are continuing with activities to support the sharing of health data; nonetheless, achieving the two-way electronic exchange of patient health information, as envisioned in the HealthePeople (Federal) strategy, remains far from being realized. Each department is proceeding with the development of its own health information system--VA's HealtheVet VistA and DOD's Composite Health Care System (CHCS) II; these are critical components for the eventual electronic data exchange capability. The departments are also proceeding with the essential task of defining data and message standards that are important for exchanging health information between their disparate systems. In addition, a pharmacy data prototype initiative begun this past March, which the departments stated is an initial step to defining the technology for the two-way data exchange, is ongoing. However, VA and DOD have not yet defined an architecture to guide the development of the electronic data exchange capability, and lack a strategy to explain how the pharmacy prototype will contribute toward determining the technical solution for achieving HealthePeople (Federal). As such, there continues to be no clear vision of how this capability will be achieved, and in what time period. Compounding the challenge faced by the departments is that they continue to lack a fully established project management structure for the HealthePeople (Federal) initiative. As a result, the relationships between the departments' managers are not clearly defined, a lead entity with final decision-making authority has not been designated, and a coordinated, comprehensive project plan that articulates the joint initiative's resource requirements, time frames, and respective roles and responsibilities of each department has not yet been established. In discussing the need for these components, VA and DOD program officials stated this week that the departments had begun actions to develop a project plan and define the management structure for HealthePeople (Federal). In the absence of such components, the progress that VA and DOD have achieved is at risk of compromise, as is assurance that the ultimate goal of a common, exchangeable two-way health record will be reached.
Given the importance of readily accessible health data for improving the quality of health care and disability claims processing for military members and veterans, we currently have a draft report at the departments for comment, in which we are making recommendations to the Secretaries of Veterans Affairs and Defense for addressing the challenges to, and improving the likelihood of, successfully achieving the electronic two-way exchange of patient health information.
Under federal securities laws, public companies are responsible for the preparation and content of financial statements that are complete, accurate, and presented in conformity with generally accepted accounting principles (GAAP). Financial statements, which disclose a company's financial position, stockholders' equity, results of operations, and cash flows, are an essential component of the disclosure system on which the U.S. capital and credit markets are based. The Securities Exchange Act of 1934 requires that a public company's financial statements be audited by an independent public accountant. That statutory independent audit requirement in effect granted a franchise to the nation's public accountants, as an audit opinion on a public company's financial statements must be secured before an issuer of securities can go to market, have the securities listed on the nation's stock exchanges, or comply with the reporting requirements of the securities laws. As of February 2003, there were about 17,988 public companies registered with the SEC and subject to the federal securities laws (15,847 domestic and 2,141 foreign public companies). Based on 2001 annual reports of public accounting firms submitted to the AICPA, about 700 public accounting firms that were members of the AICPA's former self-regulatory program for audit quality reported having approximately 15,000 public company clients registered with the SEC. The Big 4 public accounting firms had about 70 percent of these public company clients, and another 88 public accounting firms had about 20 percent; the other approximately 600 public accounting firms had the remaining 10 percent of the reported public company clients. The independent public accountant's audit is critical in the financial reporting process because the audit subjects financial statements, which are management's responsibility, to scrutiny on behalf of shareholders and creditors to whom management is accountable. The auditor is the independent link between management and those who rely on the financial statements. Ensuring auditor independence—both in fact and appearance—is a long-standing issue. There has long been an arguably inherent conflict in the fact that an auditor is paid by the public company for which the audit is performed. Various study groups over the past 20 years have considered the independence and objectivity of auditors as questions have arisen from (1) significant litigation involving auditors, (2) the auditor's performance of nonaudit services for audit clients, which, prior to the Sarbanes-Oxley Act, had risen to 50 percent of total revenues on average for the large accounting firms, (3) "opinion shopping" by clients, and (4) reports of public accountants advocating questionable client positions on accounting matters. The major accountability breakdowns at Enron and WorldCom, and other failures in recent years such as Qwest, Tyco, Adelphia, Global Crossing, Waste Management, MicroStrategy, Superior Federal Savings Bank, and Xerox, led to the reforms contained in the Sarbanes-Oxley Act to enhance auditor independence and audit quality and to restore investor confidence in the nation's capital markets.
To enhance auditor independence and audit quality, the act's reforms included
establishing the PCAOB, as an independent nongovernmental entity, to oversee the audits of public companies that are subject to the securities laws;
making the PCAOB responsible for (1) establishing auditing and related attestation, quality control, ethics, and independence standards applicable to audits of public companies, (2) conducting inspections, investigations, and disciplinary proceedings of public accounting firms registered with the PCAOB, and (3) imposing appropriate sanctions;
making the public company's audit committee responsible for the appointment, compensation, and oversight of the registered public accounting firm;
requiring management and auditors' reports on internal control over financial reporting;
prohibiting the registered public accounting firm from providing certain nonaudit services to a public company if the auditor is also providing audit services;
requiring the audit committee to preapprove all audit and nonaudit services not otherwise prohibited;
requiring mandatory rotation of lead and reviewing audit partners after they have provided audit services to a particular public company for 5 consecutive years; and
prohibiting the public accounting firm from providing audit services if the public company's chief financial officer, chief accounting officer, or any person serving in an equivalent position was employed by the firm and participated in the audit of the public company during the 1-year period preceding the date of starting the audit.
Mandatory audit firm rotation was also discussed in congressional hearings as a way to enhance auditor independence and audit quality, but given the mixed views of various stakeholders, the Congress decided the effects of such a practice needed further study. Our review of research studies, technical articles, and other publications and documents showed that the arguments for and against mandatory audit firm rotation generally concern auditor independence, audit quality, and increased audit costs. A breakdown in auditor independence or audit quality can result in an audit failure and adversely affect those parties who rely on the fair presentation of the financial statements in conformity with GAAP. Those who support mandatory audit firm rotation contend that pressures faced by the incumbent auditor to retain the audit client, coupled with the auditor's comfort level with management developed over time, can adversely affect the auditor's actions to appropriately deal with financial reporting issues that materially affect the company's financial statements. Those who oppose audit firm rotation contend that the new auditor's lack of knowledge of the company's operations, information systems that support the financial statements, and financial reporting practices, and the time needed to acquire that knowledge, increase the risk that an auditor will not detect financial reporting issues that could materially affect the company's financial statements in the initial years of the new auditor's tenure, resulting in financial statements that do not comply with GAAP. In addition, those who oppose mandatory audit firm rotation believe that it will increase costs incurred by both the public accounting firms and the public companies. They believe the increased risk of an audit failure and the added costs of audit firm rotation outweigh the value of a periodic "fresh look" by a new public accounting firm.
Conversely, those who support audit firm rotation believe the value of the "fresh look" in protecting shareholders, creditors, and other parties who rely on the financial statements outweighs the added costs associated with mandatory firm rotation. More recently, the Sarbanes-Oxley Act's requirements concerning auditor independence and audit quality have added to the mixed views about whether mandatory audit firm rotation should also be required to enhance auditor independence and audit quality. The results of our surveys show that while auditor tenure at Fortune 1000 public companies averages 22 years, about 79 percent of Tier 1 firms and Fortune 1000 public companies are concerned that changing public accounting firms increases the risk of an audit failure in the initial years of the audit as the new auditor acquires knowledge of a public company's operations, systems, and financial reporting practices. Further, many Fortune 1000 public companies will use only Big 4 public accounting firms and believe that the limited choices, which are likely to be further reduced by the auditor independence requirements of the Sarbanes-Oxley Act, coupled with the likely increased costs of financial statement audits and increased risk of an audit failure under mandatory audit firm rotation, strongly argue against the need for mandatory rotation. In addition, most Tier 1 firms and Fortune 1000 public companies believe that the pressures faced by the incumbent auditor to retain the client are not a significant factor adversely affecting whether the auditor appropriately deals with financial reporting issues that may materially affect a public company's financial statements. Most Tier 1 firms, nearly all Fortune 1000 public companies, and their audit committee chairs believe that the Sarbanes-Oxley Act's requirements concerning auditor independence and audit quality, when fully implemented, will sufficiently achieve the intended benefits of mandatory audit firm rotation, and they therefore believe it would be premature to impose mandatory audit firm rotation at this time. Finally, about 50 percent of Tier 1 firms and 62 percent of Fortune 1000 public companies stated that mandatory audit firm rotation would have no effect on the perception of auditor independence held by the capital markets and institutional investors. However, 65 percent of Fortune 1000 public companies reported that individual investors' perception of auditor independence would increase, while the Tier 1 firms had mixed views on the effect on individual investors' perceptions. At the same time, most Tier 1 firms reported that mandatory audit firm rotation may negatively affect audit assignment staffing, causing an increased risk of audit failures, and may create some confusion because currently a change in a public company's auditor of record sends a "red flag" signal as to why the change may have occurred. In contrast, most Fortune 1000 public companies did not believe scheduled changes in the auditor of record would result in a "red flag" signal. Currently, neither the SEC nor the PCAOB has set any regulatory limits on the length of time that a public accounting firm may serve as the auditor of record for a public company. Based on the responses to our surveys, we estimate that about 99 percent of Fortune 1000 public companies and their audit committees currently do not have a public accounting firm rotation policy, although we estimate that about 4 percent are considering such a policy.
Unlimited tenure and related pressure on the public accounting firm and the partner responsible for providing audit services to the company to retain the client and the related continuing revenues are factors cited by those who support mandatory audit firm rotation. They believe that periodically having a new auditor will bring a "fresh look" to the public company's financial reporting and help the auditor appropriately deal with financial reporting issues, since the auditor's tenure would be limited under mandatory audit firm rotation. Those who oppose mandatory audit firm rotation believe that changing auditors increases the risk of an audit failure during the initial years as the new auditor acquires knowledge of the public company's operations, systems, and financial reporting practices. The Conference Board's Commission on Public Trust and Private Enterprise, in its January 9, 2003, report, recommended that audit committees consider rotating audit firms when there is a combination of circumstances that could call into question the audit firm's independence from management. The Commission believed that the existence of some or all of the following circumstances particularly merits consideration of rotation: (1) significant nonaudit services are provided by the auditor of record to the company, even if they have been approved by the audit committee; (2) one or more former partners or managers of the audit firm are employed by the company; or (3) the audit firm has been employed by the company for a substantial period of time, such as over 10 years. To initially examine the issues surrounding the length of auditors' tenure, we asked public companies and public accounting firms to provide information on the length of auditor tenure. According to our survey, Fortune 1000 public companies' average auditor tenure is 22 years. Two contrasting factors greatly influence this 22-year average: recent increased changes in auditors lowered it, while the long tenures associated with approximately 10 percent of Fortune 1000 public companies raised it. About 20 percent of the Fortune 1000 public companies had their current auditor of record for less than 3 years, a rate of change in auditors over the last 2 years substantially greater than the nearly 3 percent annual change rate historically observed. This increased rate of auditor change was driven largely by the recent dissolution of Arthur Andersen LLP; more than 80 percent of Fortune 1000 public companies that changed auditors over the last 2 years did so to replace Andersen. Raising the overall average were the approximately 10 percent of public companies that had retained the same auditing firm for more than 50 years, with an average tenure of more than 75 years. Excluding those Fortune 1000 public companies that replaced Andersen in the last 2 years as well as those that had the same auditor of record for more than 50 years, the average for the remaining Fortune 1000 public companies is 19 years. See figure 1 for the Fortune 1000 public companies' estimated audit firm tenure. An intended effect of mandatory audit firm rotation is to decrease these lengthy auditor tenure periods, thus lessening concerns that the firm's desire to retain a client adversely affects auditor independence.
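The pull of these subgroups on the 22-year average can be illustrated with a simple weighted mean. In the sketch below, the subgroup shares follow the survey (about 20 percent with recent auditor changes, about 10 percent with tenures over 50 years, and the remainder averaging 19 years), but the 2-year and 76-year subgroup averages are assumptions chosen for illustration, not reported figures.

```python
# Hypothetical decomposition of the 22-year average auditor tenure.
# Shares reflect the survey; the 2- and 76-year subgroup averages are
# illustrative assumptions consistent with the ranges quoted in the text.
subgroups = [
    (0.20, 2),   # recent auditor changes (largely Andersen replacements), < 3 years
    (0.10, 76),  # same firm for more than 50 years, average tenure > 75 years
    (0.70, 19),  # all remaining Fortune 1000 companies, average 19 years
]
average = sum(share * tenure for share, tenure in subgroups)
print(f"weighted average tenure: {average:.1f} years")  # ~21.3, near the 22-year figure
```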
The Fortune 1000 public companies that hired a new audit firm to replace Andersen over the last 2 years reported that Andersen had served as their companies' auditor of record for an average of 26 years. About 97 percent of Fortune 1000 public companies expected that mandatory audit firm rotation would lower the number of consecutive years that a public accounting firm could serve as their auditor of record. The survey did not give the Fortune 1000 public companies a possible limit on the number of years that a public accounting firm could serve as their auditor of record under mandatory audit firm rotation; therefore, they reported their general belief, based on their past experiences, that mandatory rotation would have the effect of decreasing auditor tenure. Because the new auditor's knowledge of and experience with a public company after a change in auditors is a concern, we asked public accounting firms and public companies a number of questions about factors important to detecting material misstatements of financial statements. Tier 1 firms noted that a number of factors affect the auditor's ability to detect financial reporting issues that may indicate material misstatements in a public company's financial statements, including education, training, and experience; knowledge of GAAP and GAAS; experience with the company's industry; appropriate audit team staffing; an effective risk assessment process for determining client acceptance; and knowledge of the client's operations, systems, and financial reporting practices. Although each of the above factors affects the quality of an audit, opponents of mandatory audit firm rotation focus on the increased risk of audit failure that may result from the new auditor's lack of specific knowledge of the client's operations, systems, and financial reporting practices. Based on the responses to our survey, we estimated that about 95 percent of Tier 1 firms would rate such specific knowledge as being of very great or great importance to the auditor's ability to detect financial reporting issues that may indicate material misstatements in a public company's financial statements. GAAS require the auditor to obtain sufficient knowledge of the client's operations, systems, and financial reporting practices to assess audit risk and to gather sufficient competent evidential matter. About 79 percent of Tier 1 firms and Fortune 1000 public companies believed that the risk of an audit failure is higher in the early years of audit tenure, as the new firm is less likely to have fully developed and applied an in-depth understanding of the public company's operations and processes affecting financial reporting. More than 83 percent of Tier 1 firms and Fortune 1000 public companies that expressed a view stated that it generally takes 2 to 3 years or more for a new auditor to become sufficiently familiar with a company's operations and processes and to no longer need the additional resources that this learning period requires. Tier 1 firms had mixed views about whether mandatory audit firm rotation (i.e., the "fresh look") would increase, decrease, or have no effect on the new auditor's likelihood of detecting financial reporting issues that may materially affect the financial statements and that the previous auditor may not have detected.
However, 50 percent of Fortune 1000 public companies reported that mandatory audit firm rotation would have no effect on the auditor's likelihood of detecting such financial reporting issues, while the other Fortune 1000 public companies were generally split over whether mandatory audit firm rotation would increase or decrease that likelihood. As shown in figure 2, Tier 1 firms had mixed views of the value of additional audit procedures during the initial years of a new auditor's tenure, although 72 percent reported that additional audit procedures would be of at least some value in helping to reduce audit risk to an acceptable level. Most Fortune 1000 public companies believed such additional audit procedures would decrease audit risk, as shown in figure 3. The Tier 1 firms were also asked about the potential value of having enhanced access to key members of the previous audit team and its audit documentation to help reduce audit risk. The Tier 1 firms generally saw more potential value in having enhanced access to the previous audit team and its audit documentation than in performing additional audit procedures and verification of the public company's data during the initial years of the auditor's tenure. Nearly all of the Tier 1 firms believed that access to the previous audit team and its audit documentation could be accomplished under current GAAS. Proponents of mandatory audit firm rotation contend that pressures to retain the client can adversely affect the auditor's decision to appropriately deal with financial reporting issues when public company management is not supportive of the auditor's position on what is required by GAAP. They believe that mandatory audit firm rotation would serve as an incentive for the auditor to take the appropriate action, since the auditor would know that tenure as auditor of record and the related revenues are for a limited term. We asked public accounting firms and public companies whether, based on their experiences, the auditor's length of tenure is a factor in whether the auditor appropriately deals with material financial reporting issues and whether mandatory audit firm rotation would affect the pressures the firms face. About 69 percent of Tier 1 firms and 73 percent of Fortune 1000 public companies do not believe that the risk of an audit failure increases due to the auditor's long-term relationship with the public company's management under a long audit tenure and the auditor's desire to retain the client. About 55 percent of the other Tier 1 firms and 65 percent of the other Fortune 1000 public companies were uncertain whether the risk of an audit failure would increase or decrease due to the auditor's long-term tenure. About 71 percent of Tier 1 firms and 67 percent of Fortune 1000 public companies believe that pressure on the engagement partner to retain the client is currently small or not a factor in whether the auditor appropriately deals with financial reporting issues that may materially affect a public company's financial statements. However, 28 percent of Tier 1 firms and 33 percent of Fortune 1000 public companies believe such pressures are moderate or stronger. About 18 percent of Tier 1 firms and Fortune 1000 public companies believed that under mandatory audit firm rotation, the pressures on the engagement partner would still be a moderate or stronger factor in retaining the audit client and in appropriately dealing with financial reporting issues.
Therefore, based on these views, mandatory audit firm rotation would likely somewhat reduce the pressures on the engagement partner to retain the client. However, most Tier 1 firms and Fortune 1000 public companies generally considered these pressures to be small or not a factor in whether the auditor appropriately deals with material financial reporting issues. Tier 1 firms and Fortune 1000 public companies expressed the similar view that mandatory audit firm rotation would not significantly change the pressures on the engagement partner to retain the client as a factor in whether the engagement partner appropriately challenges overly aggressive or optimistic financial reporting by management. As shown in figure 4, overall about 54 percent of Tier 1 firms and 71 percent of Fortune 1000 public companies believe mandatory audit firm rotation would have no effect on the new auditor's potential for appropriately dealing with material financial reporting issues. The remaining Tier 1 firms are split over whether mandatory audit firm rotation would increase or decrease that potential. However, about 67 percent of the remaining Fortune 1000 public companies believe that mandatory audit firm rotation would increase the potential for the new auditor to deal appropriately with such financial reporting issues. In contrast, either with or without mandatory audit firm rotation, about 62 percent of Tier 1 firms and 63 percent of Fortune 1000 public companies believe the potential of a subsequent lawsuit, regulatory action, or both against the public accounting firm and its engagement partner is a moderate or stronger pressure for them to deal appropriately with financial reporting issues that may materially affect a public company's financial statements. Researchers have also raised questions about how the capital markets' and investors' current perceptions of auditor independence and audit quality would be affected by mandatory audit firm rotation. About 52 percent of Tier 1 firms and about 62 percent of Fortune 1000 public companies believed that the current perception of auditor independence held by capital markets and institutional investors would not be affected by requiring mandatory audit firm rotation, while 34 percent of Tier 1 firms and about 38 percent of Fortune 1000 public companies believed the perception of auditor independence would increase. However, about 65 percent of Fortune 1000 public companies believed that the perception of auditor independence held by individual investors would more likely increase under mandatory audit firm rotation, while the Tier 1 firms had mixed views on the effect on individual investors. See the Overall Views of Other Knowledgeable Individuals on Mandatory Audit Firm Rotation section of the report for the results of our discussions with other knowledgeable individuals, including institutional investors, on how mandatory audit firm rotation may affect their perception of auditor independence. Our research into the effects of mandatory audit firm rotation identified concerns about whether public accounting firms would move their most knowledgeable and experienced audit personnel from the current audit to other audits as the end of their tenure as auditor of record approached in order to attract or retain other clients.
In response to our survey questions about whether mandatory audit firm rotation would affect the assignment of audit staff, about 59 percent of Tier 1 firms indicated that they would likely move their most knowledgeable and experienced audit staff to other work to enhance the firm's ability to attract or retain other clients, and another 28 percent were undecided. Only about 13 percent of Tier 1 firms stated it was unlikely that an accounting firm would move staff to other work. Of the Tier 1 firms that stated they would likely move their most knowledgeable and experienced staff, 86 percent believe that moving these staff would increase the risk of an audit failure. About 92 percent of Fortune 1000 public companies also believed that moving these audit staff would increase the risk of an audit failure. Opponents of mandatory audit firm rotation expressed concern that limited audit tenure under mandatory rotation could cause public accounting firms not to invest in audit tools related to the effectiveness of auditing a specific client or industry. About 76 percent of Tier 1 firms stated that their average audit tenure would likely decrease under mandatory audit firm rotation, and about 97 percent of Fortune 1000 public companies expected the length of their auditors' tenure would decrease compared to their previous experience with changing auditors. In response to our survey questions about this possibility, about 64 percent of these Tier 1 firms said mandatory audit firm rotation would not likely decrease incentives to invest the resources needed to understand the client's operations and financial reporting practices in order to devise effective audit procedures and tools, while 36 percent said it would. Conversely, about 67 percent of Fortune 1000 public companies were concerned that mandatory audit firm rotation could negatively affect incentives for public accounting firms to invest in effective audit procedures and tools. Currently, when a change in the auditor of record occurs, it acts as a "red flag" signaling investors to question why the change occurred and whether it may have occurred for reasons related to the presentation of the public company's financial statements, such as differences in views between public company management and the auditor of record regarding financial reporting issues. Researchers have raised concerns that the "red flag" signal may be eliminated by mandatory audit firm rotation, as investors may not be able to distinguish a scheduled change from a nonscheduled change in a public company's auditor of record. Regarding the "red flag" signal, most Tier 1 firms believed that mandatory audit firm rotation would not change investors' current reaction to a change in the auditor of record, and therefore a "red flag" signal is likely to be perceived by investors for both scheduled and unscheduled changes in the public company's auditor of record. Several Tier 1 firms commented that users of financial statements would not be able to readily track scheduled rotations and therefore would be confused about whether a change in auditors was scheduled or unscheduled. In contrast, most Fortune 1000 public companies believed that scheduled auditor changes under mandatory audit firm rotation would likely not produce a "red flag" signal and that the "red flag" signal for unscheduled changes in the auditor of record would be retained. Fortune 1000 public companies did not provide any comments to further explain their beliefs.
However, public companies currently are required by SEC regulations to report changes in their auditor of record to the SEC. Therefore, public companies could use this reporting requirement to disclose whether a change in auditor of record under mandatory audit firm rotation was scheduled or unscheduled. Opponents of mandatory audit firm rotation believe that the more frequent changes in auditors likely to occur under mandatory audit firm rotation will result in the public accounting firms and, ultimately, public companies incurring increased costs for audits of financial statements. These costs include marketing costs (the costs incurred by public accounting firms in their efforts to acquire or retain financial statement audit clients), audit costs (the costs incurred by a public accounting firm to perform an audit of a public company's financial statements), the audit fee (the amount a public accounting firm charges the public company to perform the financial statement audit), selection costs (the internal costs incurred by a public company in selecting a new public accounting firm as its auditor of record), and support costs (the internal costs incurred by a public company in supporting the public accounting firm's efforts to understand the company's operations, systems, and financial reporting practices). About 96 percent of Tier 1 firms stated that their initial year audit costs are likely to be higher than in subsequent years because of the need to acquire knowledge of a public company's operations, systems, and financial reporting practices during a first-year audit. Nearly all of these Tier 1 firms estimated initial year audit costs would be more than 20 percent higher than subsequent years' costs. Similar responses were received from Fortune 1000 public companies. (See fig. 5.) About 85 percent of Tier 1 firms stated that currently they are more likely to absorb their higher initial year audit costs than to pass them on to the public companies in the form of higher audit fees because of the firms' interest in retaining the audit client. However, about 87 percent said such costs would likely be passed on to the public companies during the more limited audit firm tenure period under mandatory rotation. Similarly, about 77 percent of Fortune 1000 public companies stated that currently, when a change in the companies' auditor of record occurs, the additional initial year audit costs are likely to be absorbed by the public accounting firms. However, about 97 percent of the Fortune 1000 public companies expected the higher initial year audit costs would be passed on to them under mandatory audit firm rotation. Comments received from a number of the Tier 1 firms indicated that currently initial year audit costs are recovered from the public companies over the firms' tenure as auditor of record. However, under mandatory audit firm rotation, the firms expected not to be able to recover these costs within a more limited tenure as auditor of record; therefore, they would pass the costs on to the public companies through higher audit fees. Similarly, about 89 percent of Fortune 1000 public companies believed that mandatory audit firm rotation would lead to higher audit fees over time.
With the likely more frequent opportunities to compete for providing audit services to public companies under mandatory audit firm rotation, about 79 percent of Tier 1 firms expect to incur increased marketing costs associated with their efforts to acquire audit clients, and about 79 percent of the Tier 1 firms expect to pass these costs on to the public companies through higher audit fees. As shown in figure 6, most of the Tier 1 firms expecting higher marketing costs estimated that the costs would add more than 1 percent to their initial year audit fees, and about 37 percent of these Tier 1 firms believed their additional marketing costs would be more than 10 percent of their initial year audit fees. A number of Tier 1 firms commented that they would have to spend more time marketing auditing services, including writing new proposals to compete for audit services. About 85 percent of Fortune 1000 public companies expected that public accounting firms would likely incur additional marketing costs under mandatory audit firm rotation, and about 92 percent of these Fortune 1000 public companies believed the costs would be passed on to them. In addition to higher audit fees, nearly all Fortune 1000 public companies believed they would incur selection costs in hiring a new auditor of record under mandatory audit firm rotation. As shown in figure 7, most of those Fortune 1000 public companies expected the selection costs to be 6 percent or higher as a percentage of initial year audit fees. In addition, nearly all Fortune 1000 public companies expected to incur some additional initial year auditor support costs under mandatory audit firm rotation. As shown in figure 8, nearly all of those Fortune 1000 public companies believed their additional support costs would be 11 percent or higher as a percentage of initial year audit fees. Tier 1 firms' views on the likelihood of public companies incurring selection costs and additional auditor support costs were similar to the views of Fortune 1000 public companies. To provide some perspective on the possible impact of higher audit-related costs (audit fees, company selection costs, and support costs) on public company operating costs, we analyzed financial reports filed with the SEC for a selection of large and small public companies for the most recent fiscal year available—one of each from 23 broad industry sectors, such as agriculture, manufacturing, and information services. Where available, for each industry sector, we selected a public company with annual revenues of more than $5 billion and a public company with annual revenues of less than $1 billion. The audit fees reported by the larger public companies we selected ranged from 0.007 percent to 0.11 percent of total operating costs and averaged 0.04 percent. The audit fees reported by the smaller public companies we selected ranged from 0.017 percent to 3.0 percent and averaged 0.08 percent. Utilizing the predominant responses from Tier 1 firms, we estimate the additional first year audit costs following a change in auditor would likely range from 21 percent to 39 percent more than the annual costs of recurring audits of the same client. In addition, we estimate the additional firm marketing costs under mandatory audit firm rotation would likely range from 6 percent to 11 percent of the firm's initial year audit fees.
Based on the predominant responses from Fortune 1000 public companies, we also estimate the additional public company selection costs to range from 1 percent to 14 percent of the new auditor's initial year audit fees and the possible additional public company support costs to range from 11 percent to 39 percent of the new auditor's initial year audit fees. Utilizing these ranges, we estimate that following a change in auditor under mandatory audit firm rotation, the possible additional first year audit-related costs could range from 43 percent to 128 percent higher than the likely recurring audit costs had there been no change in auditor. We also calculated a weighted average percentage for each additional cost category using all responses from Tier 1 firms and Fortune 1000 public companies (as opposed to the predominant responses only). Using the resulting weighted averages for all responses, we calculated the potential additional first year audit-related costs to be 102 percent higher than the likely recurring audit costs had there been no change in auditor. This illustration is intended only to provide insights into how Tier 1 firms and Fortune 1000 public companies reported that mandatory audit firm rotation could affect initial year audit costs and is not intended to be representative; a worked recomputation of the 43 to 128 percent range appears below.
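The 43 to 128 percent range can be reproduced by compounding the four survey ranges. The sketch below assumes, as the illustration above implies, that the higher first-year audit cost is passed through to the initial-year fee and that marketing, selection, and support costs are each expressed as a percentage of that fee; the variable names are ours, not the survey's.

```python
# Recomputing the illustrative 43-128 percent range for additional
# first-year audit-related costs under mandatory audit firm rotation.
# Each pair is (low, high) from the predominant survey responses.
audit_cost_increase = (0.21, 0.39)  # first-year audit cost vs. recurring audit cost
marketing = (0.06, 0.11)            # firm marketing costs, share of initial-year fee
selection = (0.01, 0.14)            # company selection costs, share of initial-year fee
support = (0.11, 0.39)              # company support costs, share of initial-year fee

for i, bound in enumerate(("low", "high")):
    fee = 1.0 + audit_cost_increase[i]  # initial-year fee, with recurring fee = 1.0
    total = fee * (1.0 + marketing[i] + selection[i] + support[i])
    print(f"{bound}: {100 * (total - 1.0):.0f}% above recurring audit costs")
# Prints "low: 43% ..." and "high: 128% ...", matching the ranges in the text.
```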
The view of many Tier 1 firms that mandatory audit firm rotation may lead to fewer firms willing and able to compete for public company audits, and thus to higher audit fees, should also be considered along with the results of our study of the consolidation of the Big 8 firms into the current Big 4 firms. In that respect, we previously reported that the Big 4 firms audit over 78 percent of all U.S. public companies and 99 percent of public company annual sales. However, we found no empirical evidence of impaired competition. Further, we previously reported that smaller public accounting firms were unable to successfully compete for the audits of large national and multinational public companies because of factors such as lack of capacity and capital limitations. About 83 percent of Tier 1 firms and 66 percent of Fortune 1000 public companies stated that under mandatory audit firm rotation, the market share of public company audits would either become more concentrated in a small number of larger public accounting firms or the already highly concentrated market share would remain about the same. About 44 percent of Tier 1 firms believed that incentives to create or maintain large firms would increase, while 32 percent believed mandatory audit firm rotation would have no effect on such incentives. About 52 percent of Fortune 1000 public companies were at least somewhat concerned that the dissolution of Arthur Andersen LLP, which left the current Big 4 public accounting firms, would significantly limit the options their companies have in selecting a capable auditor of record. Under mandatory audit firm rotation, the number of Fortune 1000 public companies expressing such concern increased to 79 percent. About 48 percent of Tier 1 firms believed mandatory audit firm rotation would decrease the number of firms willing and able to compete for audits of public companies in specialized industries, while 29 percent of Tier 1 firms believed it would have no effect. As noted in our July 2003 report, we found that in certain specialized industries, the number of firms with the relevant auditing expertise can limit such public companies' choices to as few as two public accounting firms. Contributing to this situation is that many public companies will use only Big 4 firms for audit services. Also, public companies may have fewer choices in the future because auditor independence rules under the Sarbanes-Oxley Act prohibiting the auditor of record from also providing certain nonaudit services could further reduce the number of eligible auditors. Mandatory audit firm rotation would reduce that number further still. For example, if a public company in a specialized industry has only three or four choices for its auditor of record, the current auditor of record is not eligible to repeat as auditor of record under mandatory audit firm rotation, and another firm is not eligible because it provided the public company with prohibited nonaudit services that affect auditor independence, then the number of eligible firms would be reduced to one or two. About 35 percent of Fortune 1000 public companies were at least somewhat concerned that the Sarbanes-Oxley Act auditor independence requirements would significantly limit their options in selecting a capable auditor of record; 53 percent expressed such concern if mandatory audit firm rotation were required.
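The eligibility squeeze described in this example is easy to see in miniature; the sketch below uses hypothetical firm names (our own, purely for illustration) to show how the incumbent exclusion and the independence rules can shrink a field of four candidates to two.

```python
# Hypothetical illustration of auditor eligibility under mandatory rotation
# in a specialized industry. Firm names are invented for this sketch.
candidates = {"Firm A", "Firm B", "Firm C", "Firm D"}  # firms with the industry expertise

incumbent = {"Firm A"}               # ineligible to repeat as auditor of record under rotation
independence_conflicts = {"Firm B"}  # provided prohibited nonaudit services to the company

eligible = candidates - incumbent - independence_conflicts
print(sorted(eligible))  # ['Firm C', 'Firm D'] -- only two eligible firms remain
```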
The Sarbanes-Oxley Act requires the audit committee to hire, compensate, and oversee the public accounting firm serving as auditor of record for the public company. About 92 percent of the Fortune 1000 audit committee chairs stated that their public companies currently use Big 4 firms as auditor of record, and 94 percent of those that do stated that they would not realistically consider using non-Big 4 firms as the public companies' auditor of record. Table 1 provides the reasons given by the audit committee chairs for using only Big 4 firms and the importance of those reasons to them. Although the Sarbanes-Oxley Act now makes the audit committee responsible for hiring the public company's auditor of record, 96 percent of Fortune 1000 public companies currently using Big 4 firms also stated that they would not realistically consider using non-Big 4 firms as the companies' auditor of record. They generally gave the same reasons as the audit committee chairs. In our surveys, we asked public accounting firms, public companies, and their audit committee chairs to provide their overall views on the potential costs and benefits that may result under mandatory audit firm rotation. About 85 percent of Tier 1 firms, 92 percent of Fortune 1000 public companies, and 89 percent of Fortune 1000 audit committee chairs believed that the costs are likely to exceed the benefits. Our surveys also requested views on whether the Sarbanes-Oxley Act auditor independence and related audit quality requirements could achieve the intended benefits of mandatory audit firm rotation. The act, as implemented by SEC rules, requires mandatory rotation of the lead and reviewing audit engagement partners after 5 years, and of other partners with significant involvement in the audit engagement after 7 years. Other related provisions of the act concerning auditor independence and audit quality include prohibiting the auditor of record from also providing certain nonaudit services; requiring audit committee preapproval of audit and nonaudit services not otherwise prohibited, and related public disclosures; establishing certain auditor reporting requirements to the audit committee; requiring time restrictions before certain auditors could be hired by the client as employees; expanding audit committee responsibilities; and establishing the PCAOB as an independent nongovernmental entity overseeing registered public accounting firms in the audit of public companies. About 66 percent of Tier 1 firms believe the audit partner rotation requirements sufficiently achieve the intended benefits of the "fresh look" sought by mandatory audit firm rotation. Another 27 percent of the Tier 1 firms believe that the audit partner rotation requirements may not be as effective as mandatory audit firm rotation in achieving the intended benefits of a "fresh look" but are a better choice given the higher cost of mandatory audit firm rotation. Fortune 1000 public companies and audit committee chairs responding to our survey expressed similar views.
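The partner rotation rule described above reduces to a simple threshold test on a partner's role and tenure; the function below is a minimal sketch of that rule as summarized in the text (the function name and role labels are ours, not the SEC's).

```python
def partner_must_rotate(role: str, years_on_engagement: int) -> bool:
    """Partner rotation rule under the Sarbanes-Oxley Act as implemented by
    SEC rules, per the description above: lead and reviewing engagement
    partners rotate after 5 years; other partners with significant
    involvement in the audit engagement rotate after 7 years."""
    limit = 5 if role in ("lead", "reviewing") else 7
    return years_on_engagement >= limit

print(partner_must_rotate("lead", 5))         # True
print(partner_must_rotate("significant", 6))  # False: the limit is 7 for other partners
```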
We asked those Tier 1 firms, Fortune 1000 public companies, and audit committee chairs who did not believe that the partner rotation requirement by itself sufficiently achieved the intended benefits of mandatory audit firm rotation to consider the auditor independence, audit quality, and partner rotation requirements of the Sarbanes-Oxley Act as implemented by SEC rules, and to tell us whether these requirements in total, when fully implemented, would likely achieve the intended benefits of mandatory audit firm rotation. About 25 percent of these Tier 1 firms believed these requirements of the Sarbanes-Oxley Act, when fully implemented, would sufficiently achieve the intended benefits of mandatory audit firm rotation, while 63 percent believed these requirements would only somewhat or minimally achieve those benefits. In contrast, 76 percent of Fortune 1000 public companies and 72 percent of their audit committee chairs believed these requirements would sufficiently achieve the intended benefits of mandatory audit firm rotation. Combining the responses to these two questions, that is, counting those who believed that either the partner rotation requirements alone or the partner rotation requirements coupled with the other Sarbanes-Oxley Act auditor independence and audit quality requirements would sufficiently achieve the benefits of mandatory audit firm rotation, shows that about 75 percent of the Tier 1 firms, 95 percent of Fortune 1000 public companies, and about 92 percent of the audit committee chairs believe these requirements, when fully implemented, would sufficiently achieve the benefits of mandatory audit firm rotation. In short, most Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs believe the Sarbanes-Oxley Act auditor independence and audit quality requirements, when fully implemented, would sufficiently achieve the benefits of mandatory audit firm rotation, and most of these groups, when asked their overall opinion, did not support mandatory rotation. A minority within these groups supports the concept of mandatory audit firm rotation but believes more time is needed to evaluate the effectiveness of the various Sarbanes-Oxley Act requirements for enhancing auditor independence and audit quality. (See fig. 9.) As part of our review, we spoke to a number of knowledgeable individuals to obtain their views on mandatory audit firm rotation and to provide additional perspective on issues addressed in the survey. These individuals had experience in a variety of fields, such as institutional investment; regulation of the stock markets, the banking industry, and the accounting profession; and consumer advocacy. Generally, the views expressed by these knowledgeable individuals were consistent with the overall views expressed by survey respondents. Most did not favor implementing a requirement for mandatory audit firm rotation at this time because they believe the costs of implementing such a requirement outweigh the benefits and that greater experience with implementing the requirements of the Sarbanes-Oxley Act should be gained before new requirements are added. Many individuals acknowledged that, conceptually, audit firm rotation could provide certain benefits in the areas of auditor independence and audit quality.
For example, audit firm rotation may increase the perception of auditor independence because the long-term relationships between the auditor of record and the client that could undermine independence would be unlikely to develop during a limited term as auditor of record. Some individuals also believe that under mandatory audit firm rotation, the auditor might be less likely to succumb to management pressure to accept questionable accounting practices because the incentive to retain the client is removed and another audit firm would be looking at the firm's work in the future. Some also believed that audit quality may be increased through a change in auditors because a new auditor of record would provide a "fresh look" at an entity's financial reporting practices and accounting policies. In addition, some individuals noted that mandatory audit firm rotation might cause a company to reexamine its audit needs and seek more knowledgeable and experienced audit firm personnel when negotiating for a new auditor of record. The individuals we spoke to, however, acknowledged a number of practical concerns related to mandatory audit firm rotation, one of the most important being the limited number of audit firms available from which to choose. For example, some companies, especially those with geographically diverse operations or those operating in certain industries, may be somewhat limited in their choice of auditing firms capable of performing the audit. Not all audit firms have offices or staff in all the geographic areas, domestic or international, where clients conduct their operations, nor do all audit firms have personnel with the industry knowledge needed to audit clients that operate in specialized environments. Similar to the views of Fortune 1000 public companies and audit committee chairs, individuals we spoke to noted that large companies are often limited to choices among the Big 4 firms. In some cases, the choices are further restricted because the accounting profession has become segmented by industry, and a lack of industry-specific knowledge may preclude some firms from performing the audits. For a company that is limited to the Big 4 firms, selection was viewed as further restricted because an audit firm providing certain nonaudit services or serving as the company's internal auditor is prohibited by independence rules from also serving as that company's auditor of record. In some cases, a company may also be limited in its choice of firms if an audit firm audits one of the company's major competitors and the public company decides not to use that firm as its auditor of record. With regard to the use of a Big 4 firm, some individuals believe that although a new auditor provides a "fresh look" at an audit engagement, the Big 4 firms have somewhat similar cultures and methodologies for performing audits; as a result, the benefit of a "fresh look" is more limited today than it was in the past, when the firms had different cultures and employed a greater variety of methodologies. Many individuals we spoke with also noted that when a change in auditor of record occurs, a learning curve, which can last a year or more, exists while the new auditor becomes familiar with the client's operations, thus increasing the audit risk associated with the engagement.
Although a new auditor provides a "fresh look" at the audit, concern was raised that a new auditor, being unfamiliar with the client's operations or accounting policies, may challenge the previous auditor's judgments in an overly aggressive manner. This poses a problem for the public company because the previous auditor is not present to explain the rationale for those judgments. In some cases, these were viewed as matters of professional judgment rather than actual errors, and such situations could increase tension between the client and the new auditor of record. Some individuals we spoke with expressed concern that if mandatory audit firm rotation were implemented, the audit firm might rotate its most qualified staff off the engagement during the later years of audit tenure because the firm might focus its resources on obtaining or providing services to new clients. These individuals, like most Tier 1 firms and Fortune 1000 public companies, believe that such a practice would increase audit risk. Some individuals also expressed concern that toward the end of audit tenure, an audit firm might shift its attention to marketing the nonaudit services it could provide once it was no longer the auditor of record, which would run counter to the intended benefits of mandatory audit firm rotation. Individuals we spoke with also noted other implementation issues with mandatory audit firm rotation. For example, they viewed mandatory audit firm rotation as increasing costs to a company, not only in terms of higher audit fees but also in additional selection and support costs. In particular, many individuals we spoke with, like most Tier 1 firms and Fortune 1000 public companies, believed that when a company rotates auditors, a certain amount of disruption occurs and the company spends significant resources, both financial and human, educating the new auditor about company operations and accounting matters. Individuals we spoke with expressed concern not only that these additional audit, selection, and support costs are ultimately passed on to shareholders but also that audit committees may lose some control over selecting the auditors they consider best for shareholders, since the incumbent firm would not be eligible to compete to provide audit services for some period of time. Some individuals we spoke with noted that they have already observed a heightened sense of corporate responsibility and better corporate governance as a result of the change in behavior brought about by the large corporate failures of recent years. Overall, the majority of knowledgeable individuals we spoke with believe that a requirement for mandatory audit firm rotation should not be implemented at this time. However, some individuals suggested that regulators could require a change in the auditor of record as an enforcement action if conditions warrant such a measure. Most individuals we spoke with believe that the cost of requiring mandatory audit firm rotation would exceed the benefits because of the various practical concerns noted. Rather, these individuals believe that greater experience with the existing provisions of the Sarbanes-Oxley Act should be gained and the results assessed before the need for mandatory audit firm rotation is considered.
Many individuals we spoke with believe that individual Sarbanes-Oxley Act provisions, such as audit firm partner rotation and the increased responsibilities of the audit committee, are not a substitute for mandatory audit firm rotation but that, taken collectively, they could accomplish many of the same intended benefits of improving auditor independence and audit quality. For example, some individuals believe that the existing Sarbanes-Oxley Act provisions related to audit committees have already resulted in more time spent on audit committee activities and greater contact with, and more frequent meetings with, auditors. These individuals commented that audit committees now ask more questions of auditors because of a heightened sense of accountability for the performance, accuracy, reliability, and integrity of everything the independent auditors are doing. If mandatory audit firm rotation were required, a number of implementing factors affecting the structure of the requirement would need to be decided by policymakers (e.g., the Congress and regulators). The following summarizes the views of Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs on certain implementing factors, regardless of whether they supported mandatory audit firm rotation. Most believed that the auditor of record's tenure should be limited to either 5 to 7 years or 8 to 10 years. Nearly all believed that when the incumbent auditor of record is replaced, the public accounting firm should not be permitted to compete for the audit services for a period of either 3 to 4 years or 5 to 7 years. Nearly all believed that the audit committee should be permitted to terminate the business relationship with the auditor of record at any time if it is dissatisfied with the firm's performance. Likewise, most believed that the public accounting firm should be able to terminate its relationship with the audit committee and public company at any time if it is dissatisfied with the working relationship. Nearly all believed that implementation of mandatory audit firm rotation should be staggered on a reasonable basis to avoid a significant number of public companies changing auditors simultaneously. Most Tier 1 firms believed that mandatory audit firm rotation should not be applied uniformly to all public companies regardless of their nature or size. In contrast, most Fortune 1000 public companies and their audit committee chairs believed mandatory audit firm rotation should be applied to all public companies regardless of nature or size. However, most of the other domestic companies and mutual fund complexes that responded to our survey believed mandatory audit firm rotation should not be applied uniformly, and their audit committee chairs who responded were split on the subject. The Tier 1 firms and Fortune 1000 audit committee chairs who believed that mandatory audit firm rotation should not be applied uniformly more frequently selected larger, rather than smaller, public companies as those that should be subject to mandatory audit firm rotation. However, Fortune 1000 public companies were divided on which sizes of public companies should be subject to mandatory audit firm rotation. See appendix II for additional details of the responses.
Our research of studies, other documents, and survey development activities concerning issues related to mandatory audit firm rotation identified the following other practices for potentially enhancing auditor independence and audit quality: the audit committee periodically holding an open competition for providing audit services to the public company; requiring audit managers to periodically rotate off the engagement; the audit committee periodically obtaining the services of a public accounting firm to assist it in overseeing the financial statement audit or to conduct a forensic audit in areas of the public company's financial statement process that present a risk of fraudulent financial reporting; and the audit committee hiring the auditor of record on a noncancelable multiyear basis in which only the public accounting firm could terminate the business relationship, for cause, during the contract period. Although many Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs saw some benefit in each of these alternative practices, in general they most frequently reported that the alternative practices would have limited or little benefit. The most notable exception involved the practice in which an audit committee would hire an auditor of record on a noncancelable multiyear basis, for which most Fortune 1000 public companies and their audit committee chairs reported that the practice would have no benefit. (See table 5 in app. III.) Regarding practices other than mandatory audit firm rotation that may have potential value in enhancing auditor independence and audit quality, the Sarbanes-Oxley Act provides the PCAOB with the authority to set auditing and related attestation, ethics, independence, and quality control standards for registered public accounting firms and to conduct inspections to determine each registered public accounting firm's compliance with the rules of the PCAOB, the rules of the SEC, and professional standards in connection with the performance of audits, the issuance of audit reports, and related matters involving public companies. In that respect, the PCAOB's inspection program for registered public accounting firms could also give the PCAOB the opportunity to take a "fresh look" at the auditor of record's performance regarding auditor independence and audit quality. For example, the inspections could include factors potentially affecting auditor independence, such as the length of the auditor's tenure, partners or managers of the audit firm who recently left the firm and are now employed by the public company in financial reporting roles, and nonaudit services provided by the auditor of record, as suggested by the Conference Board Commission on Public Trust and Private Enterprise in its January 9, 2003, report. Also, the inspections could consider the auditor's work in high-risk areas of the public company's operations and related financial reporting. Further, the inspections can provide some degree of transparency regarding their overall results and the enforcement of PCAOB and SEC requirements, which may be useful for audit committees to consider. With the dissolution of Arthur Andersen LLP in 2002, Tier 1 firms reported replacing Andersen as auditor of record for more than 1,200 public company clients since December 31, 2001. This volume of auditor changes provided an unprecedented opportunity to gain some actual experience with the potential value of the "fresh look" provided by a new auditor.
Since many of these public companies had to replace Andersen as their auditor of record during 2002, the number of changes in auditor of record effectively represented a partial form of mandatory audit firm rotation. We identified all annual restatements of financial statements filed on Form 10-K/A, and any annual restatements included in an annual Form 10-K filing, made with the SEC by Fortune 1000 public companies for 2001 and 2002 through August 31, 2003. We focused on the restatements attributable to errors or fraud, in which the previous financial statements did not comply with GAAP, and identified whether there was a change in the auditor of record. We found that 28, or 2.9 percent, of the 960 Fortune 1000 public companies changed their auditor of record during 2001, and 204, or 21.3 percent, changed their auditor during 2002. The significant increase from 2001 to 2002 was primarily due to the dissolution of Andersen. Our analysis showed that the Fortune 1000 public companies filed 43 restatements during those 2 years that were due to errors or fraud. The financial statements affected ranged from 1997 to 2002. The misstatement rates of these public companies' previously issued statements of net income ranged from a 6.7 percent overstatement of net income for 2000 to a 37.0 percent understatement of net loss for 2001. Among the Fortune 1000 public companies that changed their auditor of record, the restatement rates due to errors or fraud were 10.7 percent in 2001 and 3.9 percent in 2002, compared to restatement rates of 2.5 percent in 2001 and 1.2 percent in 2002 for companies that did not change auditors. Although the data indicate that the overall restatement rate is approximately 4.5 times higher for 2001 and 3.25 times higher for 2002 for the companies that changed their auditor of record as compared to those that did not, caution should be taken, as further analysis would be needed to determine whether the restatements are associated with the "fresh look" attributed to mandatory audit firm rotation. For the majority of the restatements, the public information filed with the SEC and included in the SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system did not provide sufficient information to determine whether company management, the auditor of record, or regulators identified the error or fraud. Nor, in those cases in which there was a change in the auditor of record, could we determine whether the predecessor or the successor auditor identified the problem and whether it was identified before or after the change in auditor of record. Also, the recent corporate financial reporting failures have greatly increased the pressures on company management and their auditors regarding honest, fair, and complete financial reporting. See appendix IV for additional details of our analysis. Regarding further analysis to determine whether restatements are associated with the "fresh look," we believe such future research could add value in better predicting the benefits of, and the future need for, mandatory audit firm rotation. See the observations section of this report for our views on mandatory audit firm rotation considering the Sarbanes-Oxley Act's requirements for enhancing auditor independence and audit quality and other factors to consider in evaluating the need for mandatory audit firm rotation.
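The rate comparison can be recomputed from the figures above. In the minimal sketch below, the counts of restating companies are our own back-calculations from the published percentages (the underlying counts are not given in this excerpt), so they are assumptions, and the resulting 2001 ratio lands near, though not exactly at, the approximation quoted in the text.

```python
# Restatement-rate comparison for Fortune 1000 public companies, 2001-2002.
total = 960
changed = {"2001": 28, "2002": 204}           # companies that changed auditor of record
restated_changed = {"2001": 3, "2002": 8}     # assumed counts (back-calculated)
restated_unchanged = {"2001": 23, "2002": 9}  # assumed counts (back-calculated)

for year in ("2001", "2002"):
    rate_c = restated_changed[year] / changed[year] * 100
    rate_u = restated_unchanged[year] / (total - changed[year]) * 100
    print(f"{year}: changed {rate_c:.1f}%, unchanged {rate_u:.1f}%, "
          f"ratio {rate_c / rate_u:.1f}x")
# 2001: changed 10.7%, unchanged 2.5%, ratio 4.3x (text: approximately 4.5 times)
# 2002: changed 3.9%, unchanged 1.2%, ratio 3.3x (text: 3.25 times)
```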
To obtain other countries' current or previous experience with, or consideration of, mandatory audit firm rotation, we surveyed the securities regulators of the other members of the Group of Seven Industrialized Nations (G-7): the United Kingdom, Germany, France, Japan, Canada, and Italy. In addition to the G-7 countries' securities regulators, we also surveyed the following members of the International Organization of Securities Commissions (IOSCO): Australia, Austria, Belgium, Brazil, China, Hong Kong, Luxembourg, Mexico, the Netherlands, Singapore, Spain, Sweden, and Switzerland. These IOSCO members are the foreign countries' organizations with duties and responsibilities similar to those of the SEC in the United States. We received responses from 11 of the 19 countries' securities regulators surveyed. Italy and Brazil reported having mandatory audit firm rotation for public companies, and Singapore reported the requirement for banks that are incorporated in Singapore. Austria also reported that beginning in 2004, mandatory audit firm rotation would be required for the auditor of record of public companies. Spain and Canada reported that they previously had mandatory audit firm rotation requirements. Generally, the reasons reported for requiring mandatory audit firm rotation related to auditor independence, audit quality, or increased competition for providing audit services. The reasons reported for abandoning mandatory audit firm rotation requirements related to their cost, their lack of cost-effectiveness, and having achieved the objective of increased competition for audit services. Many of the survey respondents also reported either requiring or considering audit partner rotation requirements similar to those of the Sarbanes-Oxley Act. See appendix V for additional information on the survey respondents' experiences with and consideration of mandatory audit firm rotation and audit partner rotation. The Sarbanes-Oxley Act contains significant reforms intended to enhance auditor independence and audit quality, which the groups of stakeholders we surveyed or held discussions with viewed as likely to sufficiently achieve the same intended benefits as mandatory audit firm rotation when fully implemented. In that respect, the SEC's regulations to implement the auditor independence and audit quality requirements of the act have only recently been issued, and the PCAOB is in the process of implementing its inspection program. Therefore, we believe it will take at least several years to gain experience with the effectiveness of the act's requirements concerning auditor independence and audit quality. We believe that it is critical for both the SEC and the PCAOB, through their oversight and enforcement programs, to formally monitor the effectiveness of the regulations and programs intended to implement the Sarbanes-Oxley Act. This information will be valuable in considering whether changes, including mandatory audit firm rotation, may be needed to further protect the public interest. We noted that survey responses from Tier 1 firms show that the potential for lawsuits or regulatory action is a major incentive for the firms to deal appropriately with public company management in resolving financial reporting issues. We believe that the SEC's and PCAOB's rigorous enforcement of regulations and other requirements will be critical to the effectiveness of the act's requirements.
It is clear that the likely additional costs associated with mandatory rotation have influenced Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs not to support mandatory rotation. However, we believe that these additional costs need to be balanced against the need to protect the public interest, especially considering the recent significant accountability breakdowns and their impact on investors and other interested parties. Although zero financial reporting/audit failures is not a realistic expectation, Enron, WorldCom, and others have recently demonstrated that a single financial reporting/audit failure at a major public company can have significant consequences for shareholders and other interested parties. We believe it is fairly certain that mandatory audit firm rotation would result in selection costs and additional support costs for public companies. Also, most Tier 1 firms and Fortune 1000 public companies believe that mandatory audit firm rotation would result in higher audit fees, primarily due to higher initial years' audit costs. If public accounting firms under mandatory audit firm rotation have (1) a shorter tenure as auditor of record over which to recover higher initial year audit costs and (2) fewer opportunities to sell nonaudit services because of the Sarbanes-Oxley Act requirements concerning prohibited nonaudit services, then we believe it is reasonable to assume, as public accounting firms and public companies have done, that the higher initial year audit costs associated with a new auditor are likely to be passed on to the public companies, along with increased marketing costs. However, competition among public accounting firms for providing audit services should to some extent also affect audit fees. Therefore, we believe it is uncertain at this time how these dynamics would play out in the market for audit services and affect audit fees over the long term. At the same time, if intensive price competition were to occur, the expected benefits of mandatory audit firm rotation could be adversely affected if audit quality suffered due to audit fees that do not support an appropriate level of audit work. We believe that mandatory audit firm rotation may not be the most efficient way to enhance auditor independence and audit quality, considering the costs of changing the auditor of record and the loss of auditor knowledge that is not carried forward to the new auditor. We also believe that the potential benefits of mandatory audit firm rotation are harder to predict and quantify, while we are fairly certain there would be additional costs. In that respect, mandatory audit firm rotation is not a panacea that totally removes pressures on the auditor in appropriately resolving financial reporting issues that may materially affect a public company's financial statements. Those pressures are likely to continue even if the term of the auditor is limited under a mandatory rotation process. Furthermore, most public companies will use only the Big 4 firms for their auditor of record, for a variety of reasons that include the firms' sufficient industry knowledge and resources to audit their companies and the capital markets' expectation that they use Big 4 firms. These public companies may have only one or two choices for their auditor of record under any mandatory rotation system.
However, over time a mandatory audit firm rotation requirement may result in more firms transitioning into additional industry sectors if the market for such audits has sufficient profit margins. The current environment has greatly increased the pressures from regulators and investors on public company management and public accounting firms to have financial statements issued by public companies that comply with GAAP and provide full disclosure. These pressures and the reforms of the Sarbanes-Oxley Act provide incentives for financial reporting that is honest, fair, and complete and that serves the public interest. If such reporting is widely and consistently achieved, the likelihood that a "fresh look" would identify material financial reporting issues that were either overlooked or not appropriately dealt with by the previous auditor of record will be reduced. However, it is uncertain at this time whether the current climate and pressures for accurate and complete financial reporting and for restoring public trust will be sustained over the long term. Regarding the need for mandatory audit firm rotation, we believe the most prudent course at this time is for the SEC and the PCAOB to monitor and evaluate the effectiveness of the Sarbanes-Oxley Act's requirements for enhancing auditor independence and audit quality, and ultimately restoring investor confidence. In that respect, the PCAOB's inspection program for registered public accounting firms could also offer a "fresh look" that enhances auditor independence and audit quality, and its inspection activities may give audit committees new insights regarding (1) financial reporting practices that pose a high risk of materially misstated financial statements and (2) whether to use the auditor of record or another firm to assist in reviewing those areas. In addition, future research on the potential benefits of mandatory audit firm rotation, as suggested by our analysis of restatements of financial statements, may also be valuable to consider along with the evaluations of the effectiveness of the Sarbanes-Oxley Act. Further, we believe that audit committees, with their increased responsibilities under the Sarbanes-Oxley Act, can play a very important role in enhancing auditor independence and audit quality. In that respect, the Conference Board Commission on Public Trust and Private Enterprise, in its January 9, 2003, report, stated that auditor rotation is a useful tool for building shareholder confidence in the integrity of the audit and of the company's financial statements. The commission advocated that audit committees consider rotating audit firms when there are circumstances that could call into question the audit firm's independence from management. The circumstances meriting consideration included when (1) the auditor of record provides significant nonaudit services to the company (even if they have been approved by the audit committee), (2) one or more former partners or managers of the audit firm are employed by the company, or (3) the auditor of record has had lengthy tenure, such as over 10 years, which our survey results show is prevalent at many Fortune 1000 public companies. In such cases, we believe audit committees need to be especially vigilant in their oversight of the auditor and in considering whether a "fresh look" is warranted.
We also believe that if audit committees regularly evaluate whether audit firm rotation would be beneficial, given the facts and circumstances of their companies' situations, and are actively involved in helping to ensure auditor independence and audit quality, many of the intended benefits of audit firm rotation could be realized at the initiative of the audit committee rather than through a mandatory requirement. However, audit committees need to have access to adequate resources, including their own budgets, to be able to operate with the independence necessary to effectively perform their responsibilities under the Sarbanes-Oxley Act. Further, we believe that an audit committee's ability to operate independently is directly related to the independence of the public company's board of directors. It is not realistic to believe that audit committees will unilaterally resolve financial reporting issues that materially affect a public company's financial statements without vetting those issues with the board of directors. In addition, the board of directors' ability to operate independently may be affected in corporate governance structures where the public company's chief executive officer also serves as the chair of the board. Like audit committees, boards of directors need to be independent and to have adequate resources and access to independent attorneys and other advisors when they believe it is appropriate. Finally, for any system to function effectively, there must be incentives for parties to do the right thing, adequate transparency to provide reasonable assurance that people will do the right thing, and appropriate accountability when people do not do the right thing. We provided copies of a draft of this report to the SEC, AICPA, and PCAOB for their review. Representatives of the AICPA and the PCAOB provided technical comments, which we have incorporated where applicable. Representatives of the SEC had no comments. We are sending copies of this report to the Chairman and Ranking Minority Member of the House Committee on Energy and Commerce. We are also sending copies of this report to the Chairman of the Securities and Exchange Commission, the Chairman of the Public Company Accounting Oversight Board, and other interested parties. This report will also be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9471 or John J. Reilly, Jr., Assistant Director, at (202) 512-9517. Key contributors are acknowledged in appendix VI. As mandated by Section 207 of the Sarbanes-Oxley Act of 2002 and as agreed with your staff, to perform our study and review of the potential effects of requiring mandatory rotation of registered public accounting firms, we 1. identified and reviewed research studies and related literature that addressed issues concerning auditor independence and audit quality associated with the length of a public accounting firm's tenure and the costs and benefits of mandatory audit firm rotation; 2. analyzed the issues we identified to develop detailed questionnaires to obtain the views of public accounting firms and of public company chief financial officers and their audit committee chairs on the potential effects of mandatory audit firm rotation, hold discussions with officials of other interested stakeholders, such as institutional investors, federal banking regulators, U.S.
stock exchanges, state boards of accountancy, the American Institute of Certified Public Accountants (AICPA), the Securities and Exchange Commission (SEC), and the Public Company Accounting Oversight Board (PCAOB), to obtain their views on the issues associated with mandatory audit firm rotation, and obtain information from other countries on their experiences with mandatory audit firm rotation; and 3. identified restatements of annual 2001 and 2002 financial statements of Fortune 1000 public companies due to errors or fraud that were reported to the SEC during 2002 and 2003 through August 31, 2003, to determine whether the restatement occurred before or after a change in the public companies' auditor of record and to test the value of the "fresh look" commonly attributed to mandatory audit firm rotation. We conducted our work in Washington, D.C., between November 2002 and November 2003 in accordance with U.S. generally accepted government auditing standards. To identify existing research related to mandatory audit firm rotation, we used several methods, including general Internet searches, requests from the AICPA library, the AICPA's Web site (www.aicpa.org), the American Accounting Association's Web site (http://accounting.rutgers.edu/raw/aaa/), the SEC's Web site (www.sec.gov), requests from GAO's internal library resources, and suggestions provided by communities of interest. Also, many studies were identified through the bibliographies of previously identified research. We used the following keywords in our searches: "mandatory audit firm rotation," "mandatory auditor rotation," "compulsory audit firm rotation," "compulsory auditor rotation," "auditor rotation," "auditor change(s)," and "auditor switching." We identified a total of 80 studies, articles, position papers, and reports from our searches. We then applied the following criteria: we focused on studies that (1) were, with few exceptions, published no earlier than 1980, (2) contained some original data analyses, and (3) focused on some aspect of mandatory audit firm rotation. Using these criteria, 27 studies were subjected to further methodological review to evaluate the design and approach of the studies, the quality of the data used, and the reasonableness of the studies' conclusions, and to determine whether any limitations of a study were severe enough to call the reasonableness of its conclusions into question. We eliminated 10 of these studies because they were actually position papers or literature summaries and did not include any original data analyses. One additional study was eliminated because of fundamental methodological flaws. Of the remaining 16 studies that were subjected to a high-level methodological review, 7 have major caveats that should be considered along with the studies' results, while the other 9 have more minor methodological limitations, such as limited application to the subject; limited data availability; or insufficient information on issues including choice of samples, response rates, and nonresponse analyses. In developing the survey instruments covering issues concerning auditor independence and audit quality associated with the length of a public accounting firm's tenure and the costs and benefits of mandatory audit firm rotation, we primarily used studies from this latter group of 9, as listed below. The Relationship of Audit Failures and Audit Tenure, by Jeffrey Casterella of Colorado State University, W.
Robert Knechel of the University of Florida and the University of Auckland, and Paul Walker of the University of Virginia, November 2002. Auditor Rotation and Retention Rules: A Theoretical Analysis (Rotation Rules), by Eric C. Weber of Northwestern University, June 1998. Audit-Firm Tenure and the Quality of Financial Reports, by Van E. Johnson of Georgia State University, Inder K. Khurana of the University of Missouri-Columbia, and J. Kenneth Reynolds of Louisiana State University, Winter 2002. "The Effects of Auditor Change on Audit Fees: Tests of Price Cutting and Price Recovery," The Accounting Review, by D. T. Simon and J. R. Francis, April 1988. "Does Auditor Quality and Tenure Matter to Investors? Evidence from the Bond Market," by Sattar Mansi of Virginia Polytechnic Institute, William F. Maxwell of the University of Arizona, and Darius P. Miller of the Kelley School of Business, February 2003 (paper under revision for the Journal of Accounting Research). An Analysis of Restatement Matters: Rules, Errors, Ethics, for the Five Years Ended December 31, 2002, The Huron Consulting Group, January 2003. The Commission on Auditors' Responsibilities: Report of Tentative Conclusions, The Cohen Commission (an independent commission established by the AICPA), 1977 (limited to Section 9, "Maintaining the Independence of Auditors, Rotation of Auditors"). "Audit Fees and Auditor Change: An Investigation of the Persistence of Fee Reduction by Type of Change," Journal of Business Finance and Accounting, by A. Gregory and P. Collier, January 1996. "Auditor Changes and Tendering: UK Interview Evidence," Accounting, Auditing and Accountability Journal, vol. 11, no. 1, by V. Beattie and S. Fearnley, 1998. We analyzed the issues identified from our review of studies, articles, position papers, and reports to develop an understanding of the background of, and the related advantages and disadvantages of, mandatory audit firm rotation. We developed three separate survey instruments incorporating a variety of issues related to auditor independence, audit quality, and mandatory audit firm rotation, and the potential effects on audit costs, audit fees, audit quality, audit risk, and competition that may arise under a mandatory audit firm rotation requirement. In addition, these survey instruments solicited views on the impact of specific provisions of the Sarbanes-Oxley Act intended to enhance auditor independence and audit quality, other practices for enhancing audit quality, views on implementing mandatory audit firm rotation, and overall opinions on requiring mandatory audit firm rotation. We performed field tests of the survey instruments to help ensure that the survey questions would be understandable to different groups of respondents, to eliminate factual inaccuracies, and to obtain feedback and recommendations to improve the surveys. We took the feedback and comments we received into consideration in developing our final survey instruments. Specifically, during March and April of 2003, we performed field tests of the survey instrument for public accounting firms with eight different public accounting firms, including two of the Big 4 firms, two national firms, and four regional or local firms. During May 2003, we conducted field tests of the survey instrument for public company chief financial officers with four public companies, including two Fortune 1000 companies and two commercial banks not included in the Fortune 1000.
We tailored the survey instrument for public company audit committee chairpersons by incorporating the feedback and comments we received from the chief financial officers during the field tests we performed with public companies. Section 207 of the Sarbanes-Oxley Act mandated that GAO study the potential effects of mandatory audit firm rotation of registered public accounting firms, referring to public accounting firms that would be registered with the new PCAOB. When we were framing the population in January 2003, the PCAOB was still in the process of getting organized and becoming operational, and no public accounting firms were yet registered with it. Therefore, we coordinated with the AICPA to establish a population of public accounting firms that would most likely register with the PCAOB. The AICPA provided a complete list of the more than 1,100 public accounting firms that were registered with the AICPA's Securities and Exchange Commission Practice Section (SECPS) as of January 2003. Prior to the restructuring of the SECPS, AICPA bylaws required that all members that engage in the practice of public accounting with a firm auditing one or more SEC clients join the SECPS. Public accounting firms that did not have any SEC clients could join the SECPS voluntarily. Based on the information submitted in their 2001 annual reports, these SECPS member firms collectively had nearly 15,000 SEC clients. Therefore, the public accounting firms registered with the SECPS at that time were used as an alternative source for framing the population of public accounting firms that perform audits of issuers registered with the SEC. Based on the AICPA-provided SECPS membership list and the number of SEC clients reported in the member firms' 2001 annual reports, of the 1,117 SECPS members, 696 firms had 1 or more SEC clients and 421 firms did not audit any public companies. The 696 members of the SECPS collectively audited 14,928 of the 17,956 issuers registered with the SEC. Since approximately 3,000 issuers were audited by public accounting firms that were not members of the SECPS, we obtained a list from the SEC that included the names of over 1,000 public accounting firms that performed the audits of public companies registered with the SEC. We compared the 696 SECPS member firms to all of the public accounting firms included in the SEC's list in order to identify the non-SECPS member public accounting firms, which consisted mainly of foreign public accounting firms or domestic firms that are not AICPA members. Since the PCAOB had indicated that it would not exempt foreign public accounting firms that audit issuers registered with the SEC from registering with the PCAOB, we included in the population the non-SECPS member public accounting firms that reported to the SEC that they had 10 or more SEC clients. To identify differences in views on the potential effects of mandatory audit firm rotation that may vary with the size of the public accounting firm, location (e.g., domestic versus foreign firms), and other factors, we stratified the population into three tiers based on the number of SEC clients reported to the SECPS in the member firms' 2001 annual reports and the aforementioned SEC data for non-SECPS member public accounting firms: 1. Tier 1 firms: 92 SECPS member and 5 non-SECPS member public accounting firms that had 10 or more SEC clients in 2001;
2. Tier 2 firms: 604 SECPS member firms that had 1 to 9 SEC clients in 2001; and 3. Tier 3 firms: 421 SECPS member firms that reported having no SEC clients. The basis for selecting public accounting firms with 10 or more SEC clients was twofold. First, the 92 SECPS member firms included in Tier 1 collectively had approximately 90 percent of all of the SEC clients reported to the SECPS in the member firms' 2001 annual reports. Second, the public accounting firms with 10 or more SEC clients were viewed as collectively having the most experience and knowledge about changing auditors for public company clients and accordingly were considered to have a great interest in the potential effects of mandatory audit firm rotation. Tier 2 was established because the 604 SECPS member firms with 1 to 9 SEC clients accounted for approximately 10 percent of the total SEC clients reported to the SECPS in member firms' 2001 annual reports and were also considered to have a great interest in, as well as important views on, the potential effects of mandatory audit firm rotation, based on their experience and knowledge as auditors of public companies. Lastly, we included the 421 SECPS member public accounting firms that had no SEC clients in Tier 3 of our population in order to determine whether there would be greater or less interest in providing financial statement audit services to public companies if mandatory audit firm rotation were required. We requested that the public accounting firms' chief executive officers or managing partners, or their designated representatives, complete the survey. To conduct our survey, we selected a 100 percent certainty sample of Tier 1 public accounting firms consisting of all 92 SECPS member firms and all 5 non-SECPS member firms. In addition, we selected separate random samples from each of the two remaining strata. We created two separate Web sites for the public accounting firm surveys. The top tier firms were surveyed independently of the second and third tiers because the Tier 1 survey was administered jointly with another study, dealing with the consolidation of major public accounting firms since 1989, mandated by Section 701 of the Sarbanes-Oxley Act. The survey for the Tier 2 and Tier 3 firms, which dealt only with the potential effects of mandatory audit firm rotation, was created at a separate Web site. A unique password and user ID were assigned to each selected public accounting firm in our sample to facilitate completion of the survey online. The surveys were made available to the Tier 1 firms during the week of May 27, 2003, and to the Tier 2 and Tier 3 firms during the week of June 12, 2003. Both survey Web sites remained open until September 2003. Responses to surveys completed online were automatically stored on GAO's Web sites. From August through September 2003, we performed follow-up efforts to increase the overall response rates by telephoning the selected public accounting firms that had not completed the survey and requesting that they take the opportunity to express their views on this important issue.
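The tiering rule just described reduces to a simple classification on two attributes; the sketch below is illustrative only, with hypothetical firm records and field names of our own invention rather than GAO's actual data files.

```python
from typing import NamedTuple, Optional

class Firm(NamedTuple):
    name: str
    sec_clients_2001: int  # SEC clients reported for 2001
    secps_member: bool     # SECPS membership as of January 2003

def survey_tier(firm: Firm) -> Optional[int]:
    """Assign a firm to a survey tier per the stratification described above."""
    if firm.sec_clients_2001 >= 10:
        return 1  # Tier 1 (SECPS member or not): surveyed as a 100 percent certainty sample
    if firm.secps_member and firm.sec_clients_2001 >= 1:
        return 2  # Tier 2: SECPS members with 1 to 9 SEC clients; random sample
    if firm.secps_member:
        return 3  # Tier 3: SECPS members with no SEC clients; random sample
    return None   # non-SECPS firms with fewer than 10 SEC clients fall outside the population

# Hypothetical example records.
for f in (Firm("Example National LLP", 2500, True),
          Firm("Example Regional & Co.", 4, True),
          Firm("Example Local CPAs", 0, True),
          Firm("Example Foreign Auditors", 25, False)):
    print(f.name, "-> Tier", survey_tier(f))
```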
Lastly, to learn whether the views of the Tier 1 public accounting firms that did not complete our survey were materially different from the overall views of the Tier 1 firms that did, we asked two key questions of the nonresponding firms we contacted during our telephone follow-up efforts. Specifically, we asked whether their firms believed the benefits of mandatory audit firm rotation would exceed the costs of implementing such a requirement and whether their firms would support requiring mandatory audit firm rotation. As more fully described in the body of this report, the overall views expressed by the Tier 1 public accounting firms that completed our survey generally indicated that the costs of mandatory audit firm rotation would exceed the benefits and that the firms did not support such a requirement. The views of the nonresponding Tier 1 firms we contacted in our telephone follow-up efforts were generally consistent with those overall views. As disclosed in our survey instruments, all survey results were to be compiled and presented in summary form only as part of our report, and we will not release individually identifiable data from these surveys unless compelled by law or required to do so by the Congress. We received responses from 74 of the 97 Tier 1 firms, or 76.3 percent. Because of the more limited participation of Tier 2 firms (85, or 30.1 percent) and Tier 3 firms (52, or 21.9 percent) in our survey, we are not projecting their responses to the population of firms in these tiers. The presentation of this report focuses on the responses from the Tier 1 firms, but any substantial differences between their overall views and those reported to us by Tier 2 or Tier 3 firms are discussed where applicable. Table 2 summarizes the population, sample sizes, and overall responses received for all three tiers of public accounting firms surveyed on the potential effects of mandatory audit firm rotation. As part of fulfilling our objective to study the potential effects of mandatory audit firm rotation, we obtained views on the advantages and disadvantages and related costs and benefits from random samples of chief financial officers and audit committee chairs of public companies registered with the SEC. We solicited the views of chief financial officers of public companies because they were considered to be very knowledgeable about the issues involving financial statement audits of public companies. We also solicited the views of audit committee chairs because, under the Sarbanes-Oxley Act, the audit committee has expanded responsibilities for monitoring and overseeing public companies' financial reporting and the financial statement audit process. We obtained such views by administering a survey to randomly selected samples of public company chief financial officers and their audit committee chairs. Section 207 of the Sarbanes-Oxley Act defines "mandatory rotation" as the imposition of a limit on the period of years for which a particular registered public accounting firm may be the auditor of record for a particular issuer. Therefore, in framing the population from which we planned to draw our sample of public companies, we researched the definition of an "issuer" with the SEC, GAO's General Counsel, and the AICPA's SECPS.
The primary purpose of conducting this research was to determine whether mutual funds (or mutual fund complexes) and other types of investment companies, such as investment trusts, should be included in the population. According to the Director of the SEC’s Office of Investment Management, mutual funds and investment trusts are issuers that are required to file periodic reports with the SEC under the Securities Exchange Act of 1934 or the Investment Company Act of 1940. Also, officials in the SEC’s Office of Investment Management indicated that there are nearly 10,000 individual mutual funds grouped into 877 mutual fund complexes (also known as families). A mutual fund complex is responsible for hiring the auditor of record, either collectively or individually, for the individual mutual funds that are included in the family or complex. As such, investment trusts and the 877 mutual fund complexes were included in our population for the purpose of administering our survey. We obtained lists of public company issuers from the SEC in developing the population as follows: The SEC’s Office of Corporation Finance provided a list of 17,079 public companies from the SEC’s Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system. This list included registrants that were listed as current issuers registered with the SEC as of February 2003 and included 14,938 domestic public companies (including investment trusts) and 2,141 foreign public companies (i.e., companies that are domiciled outside of the United States but are registered with the SEC). Our comparison of this SEC list to a separate list of Fortune 1000 companies identified an additional 32 public companies that were added to the original list of 17,079, bringing the list to an adjusted total of 17,111. As noted above, we also obtained a complete list of 877 mutual fund complexes from the SEC that included current issuers registered with the SEC’s Office of Investment Management. Therefore, the population of public company issuers as of February 2003 totaled 17,988, consisting of 17,111 public companies and 877 mutual fund complexes. In order to identify differences in views on the potential effects of mandatory audit firm rotation based on differences in company industry, size, or geographic location, we stratified the population into the following three strata: (1) domestic Fortune 1000 companies, (2) other (non-Fortune 1000) domestic companies and mutual fund complexes, and (3) foreign companies. Fortune 1000 stratum: Based on Fortune’s list of the Fortune 1000 as of March 2003, we identified 960 public companies in the Fortune 1000; the remaining 40 companies were privately owned. Since private companies are not subject to SEC rules or the Sarbanes-Oxley Act’s provisions, these 40 companies were not included in the stratum. We used the file provided by the SEC listing the 17,079 domestic and foreign public companies to extract a separate stratum of the 960 public companies in the Fortune 1000. In addition, in comparing Fortune’s list of the Fortune 1000 to the SEC’s listing of public companies, we identified 32 additional companies that were included in the Fortune 1000 but not in the SEC list. In connection with framing the Fortune 1000 stratum, we added these 32 companies to the list of domestic and foreign public companies provided to us by the SEC to ensure that it was complete.
Foreign company stratum: Using the “state code” identifier included in the adjusted SEC list of 17,111 domestic and foreign public companies, we extracted a separate stratum of 2,141 foreign companies. Other domestic companies and mutual fund complexes stratum: After extracting the 960 domestic Fortune 1000 public companies and the 2,141 foreign public companies from the adjusted SEC list of 17,111 domestic and foreign public companies, a separate stratum of 14,010 non-Fortune 1000 public companies was created from the SEC file representing the “other domestic” public companies. These 14,010 other domestic public companies (which included investment trusts) were combined with the 877 mutual fund complexes provided by the SEC’s Office of Investment Management to create a total population for this stratum of 14,887. In order to conduct these surveys, we selected a separate random sample from each of the three public company strata. We mailed a survey package to the chief financial officer of each public company issuer included in our sample. This survey package provided the chief financial officer with the option of completing the enclosed hard copy of the survey and returning it in the mail to our Atlanta Field Office or of completing the survey online. We created a Web site with the public company survey for the chief financial officers. A unique password and user ID were assigned to each selected company in our sample to facilitate completion of the survey online. In addition, a separate survey directed to the chair of the audit committee (or head of an equivalent body) was included in the mail survey package. The chief financial officer was asked to forward this survey to the audit committee chair. The survey for the public company audit committee chairs was not made available online. As such, these surveys could only be completed on hard copy and returned to our Atlanta Field Office. The survey packages were mailed to all 1,171 sampled public companies in June 2003. The survey Web site for public company chief financial officers remained open until September 2003. The cutoff date for accepting mailed surveys from public company chief financial officers and audit committee chairs was September 2003. Responses to surveys completed online were automatically stored on GAO’s Web site, and mailed survey responses of chief financial officers and audit committee chairs were entered into a separate compilation database by GAO contractor personnel hired to perform the data entry. From August through September 2003, we also performed follow-up efforts to increase the overall response rates by telephoning public company chief financial officers who had not completed or returned the survey and requesting that the chief financial officer and the audit committee chair complete our survey and return it to us. As disclosed in our surveys, all survey results were to be compiled and presented in summary form only as part of our report, and we will not release individually identifiable data from these surveys, unless compelled by law or required to do so by the Congress. Of the 330 Fortune 1000 public companies sampled, we received responses from 201, or 60.9 percent, of their chief financial officers and 191, or 57.9 percent, of their audit committee chairs.
Because of limited participation of the other domestic companies and mutual funds (131, or 29.1 percent, of their chief financial officers and 96, or 21.3 percent, of their audit committee chairs) and the foreign public companies (99, or 25.3 percent, of their chief financial officers and 63, or 16.1 percent, of their audit committee chairs), we are not projecting their responses to the population of companies in these strata. The presentation of this report focuses on the responses from the Fortune 1000 public companies’ chief financial officers and their audit committee chairs, but any substantial differences between their overall views and those reported to us by the other groups of public companies we surveyed are discussed where applicable. Tables 3 and 4 summarize the population, sample sizes, and survey responses received for all three strata of public company chief financial officers and their audit committee chairs surveyed on the potential effects of mandatory audit firm rotation. We initially requested information from all 97 Tier 1 firms (firms with 10 or more SEC clients). We received responses from 74 of them. We conducted follow-up with a limited number of the nonrespondents and did not find substantive differences between the respondents and the nonrespondents on key questions related to mandatory audit firm rotation. We requested information from 330 Fortune 1000 public companies and their audit committee chairs and received 201 and 191 responses from them, respectively. While we did not conduct follow-up with the nonrespondents from our surveys of Fortune 1000 public companies and their audit committee chairs, we had no reason to believe that respondents and nonrespondents to our original samples from these strata would substantively differ on issues related to mandatory audit firm rotation. Therefore, we analyzed respondent data from the Tier 1 firms and the Fortune 1000 public companies and their audit committee chairs as probability samples from these respective populations. Survey results based on probability samples are subject to sampling error. Each of the three samples (Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs) is only one of a large number of samples we might have drawn from the respective populations. Since each sample could have provided different estimates, we express our confidence in the precision of our three particular samples’ results as 95 percent confidence intervals. These are intervals that would contain the actual population values for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the respective study populations. All percentage estimates from the survey of Tier 1 firms have sampling errors not exceeding +/- 7 percentage points unless otherwise noted. All percentage estimates from the surveys of Fortune 1000 public companies and their audit committee chairs have sampling errors not exceeding +/- 6 percentage points unless otherwise noted. Also, estimated percentages for subgroups of Tier 1 firms and Fortune 1000 public companies and their audit committee chairs often have sampling errors exceeding these thresholds, which are noted where they are reported. In addition, all numerical estimates other than percentages have sampling errors of not more than +/- 14 percent of the value of those numerical estimates.
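To illustrate how such a confidence interval behaves, the sketch below applies the standard normal-approximation formula for a proportion, with a finite population correction for sampling without replacement. This is a textbook estimator offered for illustration only; the report does not disclose the exact formula GAO used. The figures plugged in, 74 respondents from a population of 97 Tier 1 firms and a worst-case proportion of 0.5, come from the text.

    import math

    def proportion_ci(p_hat, n, N, z=1.96):
        # 95 percent confidence interval for a proportion estimated from
        # a simple random sample of n respondents drawn without
        # replacement from a population of N, using the normal
        # approximation with a finite population correction (fpc).
        fpc = (N - n) / (N - 1)
        se = math.sqrt(p_hat * (1 - p_hat) / n * fpc)
        return p_hat - z * se, p_hat + z * se

    # Worst case (p = 0.5) for the 74 Tier 1 respondents out of 97 firms:
    low, high = proportion_ci(0.5, n=74, N=97)
    print(f"+/- {(high - 0.5) * 100:.1f} percentage points")

This worst case works out to roughly +/- 5.6 percentage points, consistent with the stated bound of +/- 7 percentage points for Tier 1 percentage estimates.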
Despite our judgment that respondents and nonrespondents do not differ on issues related to mandatory audit firm rotation, our survey estimates may nevertheless contain errors to the extent that there truly are differences between these groups on issues related to this topic. The practical difficulties of conducting any survey also introduce other types of nonsampling errors. Differences in how a particular question is interpreted and differences in the sources of information available to respondents can also be sources of nonsampling errors. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. These steps included developing our survey questions with the aid of our survey specialists, conducting pretests of the public accounting firm and public company questions and questionnaires, having an independent analyst verify the computer analyses, and double-verifying survey data entry where applicable. To supplement the responses to our survey, we identified other knowledgeable individuals associated with a broad range of communities of interest and conducted telephone or in-person discussions to obtain their views on mandatory audit firm rotation. The communities of interest included significant institutional investors (pension funds, mutual funds, and insurance companies), self-regulatory organizations (such as stock exchanges), consumer advocacy groups, regulators (state boards of accountancy, banking regulators), the AICPA, the SEC, the PCAOB, and recognized experts in corporate governance. The questions for these discussions were based on key questions from the surveys for public accounting firms and public companies. The results of the discussions were compiled and presented in summary form only as part of our report, and we will not release individually identifiable data from these discussions, unless compelled by law or required to do so by the Congress. In order to obtain other countries’ current or previous experience with or consideration of mandatory audit firm rotation, we administered surveys to the securities regulators of the Group of Seven Industrialized Nations (G-7) member countries other than the United States: the United Kingdom, Germany, France, Japan, Canada, and Italy. In addition to the G-7 countries’ securities regulators, we also administered surveys to the following members of the International Organization of Securities Commissions (IOSCO): Australia, Austria, Belgium, Brazil, China, Hong Kong, Luxembourg, Mexico, the Netherlands, Singapore, Spain, Sweden, and Switzerland. The IOSCO members represent these foreign countries’ organizations with duties and responsibilities similar to those of the SEC in the United States. We administered the surveys to these foreign countries’ securities regulators in December 2002. From July through October 2003, we performed follow-up efforts to increase the overall response rates by sending e-mail messages to the foreign countries’ securities regulators in our sample who had not completed the survey and requesting that they do so. We received responses from 11 of the 19 countries’ securities regulators surveyed. To obtain some insight into the potential value of the “fresh look” provided by a new auditor of record, we analyzed the rate of annual financial statement restatements reported to the SEC by Fortune 1000 public companies during 2002 and 2003 through August 31, 2003.
We particularly focused on restatements for 2001 and 2002 and compared the financial statement restatement rates of those Fortune 1000 public companies that changed their auditor of record to those of Fortune 1000 public companies that did not change their auditor of record during this period. In connection with performing this analysis, we separately tracked the Fortune 1000 public companies that changed auditors from the public companies that did not change auditors during 2001 and 2002. Financial statement restatements filed for changes in accounting principles or changes in organizational business structure (e.g., stock splits, mergers and acquisitions), reclassifications, or compliance with SEC reporting requirements are not necessarily indications of compromised audit quality or auditor independence. However, financial statement restatements due to errors or fraud raise doubt about the integrity of management’s financial reporting practices, the quality of the audit, or the auditor’s independence. Therefore, the focus of our analysis was on annual financial statement restatements (hereinafter referred to as “restatements”) due to errors or fraud. Since not all restatements are indications of errors or fraud, we reviewed Form 10-KAs (amended 10-K filings), Form 8-Ks, and any related SEC enforcement actions to determine whether the restatements were due to errors or fraud. The primary purpose of this test was to determine whether the rate of restatements due to errors or fraud of companies that changed auditors was higher or lower than the rate of restatements due to errors or fraud of companies that did not change auditors. For each of the Fortune 1000 companies, we searched SEC’s EDGAR system for Form 10-KA filings submitted to the SEC during 2002 and 2003 through August 31, 2003, that amended either 2001 or 2002 financial statements to identify annual financial statement restatements. We determined whether there had been a change in auditor from 2001 through 2002 by reviewing the name of the auditor of record on the audit opinion included in the Form 10-KA filed for 2001 and 2002, and also noted what type of audit opinion was issued on the 2001 and 2002 financial statements. This allowed us to identify the restatements associated with Fortune 1000 public companies that changed auditors and the restatements of Fortune 1000 public companies that did not change auditors. We compared the level of restatements for Fortune 1000 public companies that changed auditors to the level of restatements of Fortune 1000 public companies that did not change auditors. For each of the restatements identified above, we reviewed the underlying Form 10-KAs (amended 10-K filings), Form 8-Ks, and any related SEC enforcement actions to quantify the dollar effect of the restatements and to determine whether the restatements were due to errors or fraud. We differentiated restatements caused by errors or fraud from restatements caused by changes that were not indications of compromised audit quality or auditor independence, such as changes in accounting principles, mergers, stock splits, and reclassifications, using appropriate classification criteria. In addition, we attempted to ascertain from the above sources whether company management, the predecessor auditor, or the successor auditor identified the error or fraud, and where applicable, whether it was identified before or after the change in auditor.
After categorizing the 2001 and 2002 Fortune 1000 public companies’ annual financial statement restatements and annual financial statement filings into (1) companies that did not change auditors and filed a restatement, (2) companies that did not change auditors and did not file a restatement, (3) companies that changed auditors and filed a restatement, and (4) companies that changed auditors and did not file a restatement, we compared the rates of restatements among and between these groups. If mandatory audit firm rotation were required, a number of implementing factors affecting the structure of the requirement would need to be decided. As a component of our surveys of public accounting firms, public companies, and their audit committee chairs, we asked them to provide their views on various implementing factors, regardless of whether they supported mandatory audit firm rotation, including the limit on the incumbent firm’s audit tenure period; the “cooling off” period before the incumbent firm could again compete to provide audit services to the public company; the circumstances under which either the audit committee or the public accounting firm could terminate the relationship for providing audit services; whether mandatory audit firm rotation should be implemented on a staggered basis; and whether mandatory audit firm rotation should be required for audits of all public companies, and if not, to which public companies it should be applied. Regarding the limit on the auditor of record’s tenure under mandatory audit firm rotation, about 47 percent of Tier 1 firms stated that the limit should be 8 to 10 years. Fortune 1000 chief financial officers and audit committee chairs selected 8 to 10 years about as often as 5 to 7 years as the limit on the auditor of record’s tenure. Tier 2 and Tier 3 firms and other public companies’ audit committee chairs that responded to our surveys generally favored an audit tenure of 5 to 7 years. Most Tier 1 firms and Fortune 1000 public company chief financial officers and their audit committee chairs believed the “cooling off” period under mandatory audit firm rotation should be 3 or 4 years before the auditor of record could again compete to provide audit services to the public company previously audited. Nearly all Tier 1 firms and Fortune 1000 public company chief financial officers and their audit committee chairs stated that the audit committee under mandatory audit firm rotation should be permitted to terminate the auditor of record at any time if it is dissatisfied with the public accounting firm’s performance or working relationship. Most Tier 1 firms and Fortune 1000 public company chief financial officers and their audit committee chairs also believed that the auditor of record should be able to terminate its relationship with the audit committee/public company at any time if the public accounting firm is dissatisfied with the working relationship. Nearly all Tier 1 firms and Fortune 1000 public company chief financial officers and their audit committee chairs believed that mandatory audit firm rotation should be implemented over a period of years (staggered) to avoid a significant number of public companies changing auditors simultaneously. About 70 percent of Tier 1 firms believed that mandatory audit firm rotation should not be applied uniformly for audits of all public companies regardless of their nature or size.
In contrast, about 81 percent of Fortune 1000 public companies and 65 percent of their audit committee chairs believed that mandatory audit firm rotation should be applied uniformly for audits of all public companies regardless of their nature or size. Most chief financial officers of other domestic and mutual fund public companies who responded to our survey believed mandatory audit firm rotation should be applied uniformly, and their audit committee chairs were split on the subject. Comments that we received from many of the Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs that supported applying mandatory audit firm rotation uniformly generally took the view that there should be a level playing field and that the benefits and the costs of mandatory audit firm rotation should be applied to all public companies. In contrast, those whose comments opposed requiring mandatory audit firm rotation for all public companies generally took the view that smaller public companies are less complex and that the costs of mandatory audit firm rotation would be more burdensome for the smaller companies. We asked those public accounting firms and public company chief financial officers and their audit committee chairs who believed mandatory audit firm rotation should not be applied uniformly to all public companies to indicate, by company nature and size, the companies to which mandatory audit firm rotation should apply. Tier 1 firms and Fortune 1000 audit committee chairs more frequently selected the larger public companies. However, Fortune 1000 chief financial officers were about evenly split in their views regardless of the size of the public company. Chief financial officers and their audit committee chairs of other domestic and mutual fund public companies, as well as foreign public company chief financial officers and their audit committee chairs, who responded to our survey more frequently selected the larger public companies. We asked public accounting firms, public companies’ chief financial officers, and their audit committee chairs to provide their views on the potential value of the following alternative practices, which we identified through our research and other inquiries made in developing our surveys, as alternatives to mandatory audit firm rotation for enhancing auditor independence and audit quality. The audit committee periodically holding an open competition for providing audit services: Having the audit committee periodically hold an open competition for public accounting firms to serve as the public company’s auditor of record, in which the incumbent auditor of record could also compete, could potentially enhance auditor independence and audit quality by letting the incumbent firm know that it does not have unlimited tenure as the auditor of record and a lock on the associated revenues, and that another firm may be selected to provide a “fresh look” at the company’s financial reporting process, practices, and financial statements. Also, the public company has an opportunity to see the quality of personnel that another public accounting firm could provide. However, the public company will incur some costs in holding such a competition and, if another firm is selected, may incur additional initial years’ audit fees and will have additional auditor support costs to assist the new auditor of record in understanding the company’s operations, systems, and financial reporting practices.
Requiring audit managers to periodically rotate off the engagement for providing audit services to the public company: The audit manager is a senior position reporting to the engagement audit partner, with responsibility for assisting the engagement audit partner in planning, conducting, and reporting on the audit of the public company’s financial statements. Larger audits will likely have multiple audit managers and audit partners participating in the audit. Conceptually, periodically changing audit managers brings a “fresh look” to the audit assignment and the associated potential benefits. However, there is an associated learning curve that is likely to cause both the public accounting firm and the public company to incur some additional costs. Some public accounting firms commented that this practice already occurs as a result of the firms’ career enhancement policies and practices. The audit committee periodically obtaining the services of a public accounting firm to assist it in overseeing the financial statement audit or to conduct a forensic audit in areas of the public company’s financial reporting process that present a risk of fraudulent financial reporting: Overseeing the auditor of record’s conduct of the financial statement audit is a significant responsibility that is especially challenging depending on the size and complexity of a public company. Engaging another public accounting firm as needed to assist the audit committee brings a “fresh look” to help the audit committee understand the public company’s operations, systems, and financial reporting practices and the underlying internal controls and risks. Also, as areas are identified that may have greater risk of fraudulent financial reporting, the audit committee may wish to have a public accounting firm conduct a forensic audit to provide both a “fresh look” and a more penetrating audit of transactions and related internal controls and financial reporting practices in areas of high risk. Additional costs will be incurred by the audit committee, and some degree of coordination and cooperation of the incumbent audit firm will be necessary, which will also add to the audit committee’s responsibilities. Requiring that the auditor of record be hired on a noncancelable multiyear basis, although the public accounting firm could terminate the relationship for cause during the contract period: Having the audit committee hire the auditor of record on a multiyear basis that only the auditor of record can cancel potentially enhances auditor independence and audit quality by helping the auditor resist any pressures from management regarding financial reporting practices that may materially affect the financial statements. However, this practice takes away the audit committee’s flexibility to replace the auditor of record within the period of the contract should the audit committee be dissatisfied with the auditor of record’s performance. Although many Tier 1 firms, Fortune 1000 public companies, and their audit committee chairs saw some benefit in each of the alternative practices, in general, they most frequently reported that the alternative practices would have limited or little benefit. The most notable exception involved the practice in which the audit committee would hire the auditor of record on a noncancelable multiyear basis, for which most Fortune 1000 public companies and their audit committee chairs reported that the practice would have no benefit. (See table 5.)
To obtain some insight into the potential value of the “fresh look” provided by a new auditor of record, we analyzed the rate of annual financial statement restatements reported to the Securities and Exchange Commission (SEC) by Fortune 1000 public companies during 2002 and 2003 through August 31, 2003. We particularly focused on restatements for 2001 and 2002 and compared the financial statement restatement rates of those Fortune 1000 public companies that changed their auditor of record to those of Fortune 1000 public companies that did not change their auditor of record during this period. Historically, only about 3 percent of public companies have changed auditors in any given year, and consistent with that historical rate, we observed that 2.9 percent (28 out of 960) of the Fortune 1000 public companies changed auditors during 2001. However, 21.3 percent (204 out of 960) of the Fortune 1000 public companies changed auditors during 2002. The significant increase from 2001 through 2002 was primarily due to the dissolution of Arthur Andersen LLP in 2002, which was caused, in part, by its criminal indictment for obstruction of justice stemming from its role as auditor of Enron Corporation. Since many of these public companies had to replace Andersen as their auditor of record during 2002, these changes in auditor of record effectively represented a partial form of mandatory audit firm rotation. Tables 6 and 7 summarize the occurrence of the reported Fortune 1000 public companies’ restatement filings. The combined restatement rates from tables 6 and 7 for all Fortune 1000 public companies, including those that changed auditors and those that retained their auditor of record, were 2.9 percent in 2001 (28 restatements out of the 960 Fortune 1000 public companies) and 1.9 percent in 2002 (18 restatements out of the 960 Fortune 1000 public companies). The overall restatement rate was higher in 2001 than in 2002. This may be because our analysis was limited to restatements submitted to the SEC on Form 10-KA filings for 2001 and 2002 through August 31, 2003. Some of the Fortune 1000 public companies that had not filed restatements with the SEC as of August 31, 2003, may still do so in the future. Additionally, because some companies may require considerable amounts of time and effort to unravel complex accounting and financial reporting issues (e.g., WorldCom, which is in the process of working its way out of bankruptcy proceedings, and the Federal Home Loan Mortgage Corporation, better known as Freddie Mac, which is working to restate 3 years of previously issued financial statements), it is reasonable to expect that additional restatements will be included in Form 10-KAs or other filings that had not been submitted to the SEC as of August 31, 2003. Financial statement restatements filed for changes in accounting principles or changes in organizational business structure (e.g., stock splits, mergers and acquisitions), reclassifications, or compliance with SEC reporting requirements, referred to as “rules-based changes,” are not necessarily indications of compromised audit quality or auditor independence. However, financial statement restatements due to errors or fraud raise doubt about the integrity of management’s financial reporting practices, the quality of the audits, and the auditor’s independence. Therefore, the focus of our analysis was on annual financial statement restatements (hereinafter referred to as “restatements”) due to errors or fraud.
The rates of restatement due to errors or fraud for Fortune 1000 public companies that changed auditors were 10.7 percent in 2001 and 3.9 percent in 2002, compared to restatement rates due to errors or fraud of 2.5 percent in 2001 and 1.2 percent in 2002 for companies that did not change auditors. Although the data indicate that the overall restatement rate is approximately four times higher in 2001 and three times higher in 2002 for those Fortune 1000 public companies that changed auditors than for those companies that did not change auditors, caution should be exercised, as further analysis would be needed to determine whether the restatements are associated with the “fresh look” that a new auditor would bring under mandatory audit firm rotation. In some cases, we were able to determine from our review of the Form 10-KAs, any related Form 8-Ks, and the results of Internet news searches that the restatements were identified as a result of an SEC investigation or an enforcement action. However, for the majority of the restatements we identified, the SEC’s EDGAR system did not provide sufficient information to ascertain whether company management, the predecessor auditor, or the successor auditor identified the error or fraud, or whether it was identified before or after a change in auditor. Also, the recent corporate financial reporting failures have greatly increased the pressures on management and auditors regarding honest, fair, and complete financial reporting. The phrase in an auditor’s unqualified opinion, “present fairly, in all material respects, in conformity with generally accepted accounting principles,” indicates the auditor’s belief that the financial statements taken as a whole are not materially misstated. An auditor plans an audit to obtain reasonable assurance of detecting misstatements that could be large enough, individually or in the aggregate, to be quantitatively material to the financial statements. Financial statements are materially misstated when they contain misstatements the effect of which, individually or in the aggregate, is important enough to cause them not to be presented fairly, in all material respects, in conformity with generally accepted accounting principles. As previously noted, misstatements can result from errors or fraud. As defined in Financial Accounting Standards Board Statement of Financial Concepts No. 2, materiality represents the magnitude of an omission or misstatement of an item in a financial report that, in light of surrounding circumstances, makes it probable that the judgment of a reasonable person relying on the information would have been changed or influenced by the inclusion or correction of the item. Table 8 summarizes the net dollar effect of the restatements due to errors or fraud on the reported net income (loss) of all 43 companies’ previously issued annual financial statements for the fiscal years, calendar years, or both ended from 1997 through 2002. The misstatement rates associated with these 43 companies’ previously issued statements of net income (loss), which ranged from a 6.7 percent overstatement of net income (loss) for 2000 to a 37.0 percent understatement of net income (loss) for 2001, would clearly be considered material enough to have affected the fair presentation of the results of operations included in these 43 companies’ financial statements. Accordingly, it is probable that the judgment of a reasonable person relying on the information included in these companies’ previously issued financial statements would have been changed or influenced by the inclusion of omitted information or correction of misstated items due to errors or fraud.
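To make the restatement-rate comparison above concrete, the sketch below recomputes the rates and their ratios. The counts are back-solved from the percentages and populations reported in the text (for example, 3 of the 28 companies that changed auditors in 2001 yields 10.7 percent, and the four groups sum to the 43 restating companies summarized in table 8); tables 6 and 7 of the report contain the authoritative figures, so these counts should be treated as illustrative.

    # Restatements due to errors or fraud, as (restated, total) per group;
    # counts are back-solved from the reported rates and auditor-change
    # populations, for illustration only.
    data = {
        2001: {"changed": (3, 28), "unchanged": (23, 932)},
        2002: {"changed": (8, 204), "unchanged": (9, 756)},
    }

    for year, groups in data.items():
        rates = {g: restated / total for g, (restated, total) in groups.items()}
        ratio = rates["changed"] / rates["unchanged"]
        print(f"{year}: changed {rates['changed']:.1%}, "
              f"unchanged {rates['unchanged']:.1%}, ratio {ratio:.1f}x")

The output reproduces the rates quoted above (10.7 versus 2.5 percent for 2001 and 3.9 versus 1.2 percent for 2002) and ratios of roughly four and three, matching the comparison in the text.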
Italy has required mandatory audit firm rotation for listed companies since 1975; the audit engagement may be retendered (recompeted for providing audit services) every 3 years, and the same public accounting firm may serve as the auditor of record for a maximum of 9 years. In addition, there is a minimum time lag of 3 years before the predecessor auditor can return. The mandatory audit firm rotation requirement was intended to safeguard the independence of public accounting firms. In a meeting with members of IOSCO Standing Committee No. 1, the Italian representative from the Commissione Nazionale per le Societa e la Borsa (CONSOB), the Italian securities regulator, indicated that Italy’s experience with mandatory audit firm rotation has been a good one, noting that mandatory audit firm rotation gives the appearance of independence, which is considered very important to maintaining investor confidence. However, it was also noted that there have been negative impacts: after 3 years, fee pressure by the listed company on the audit firm contributes to reduced audit fees. In responding to our survey, CONSOB’s representative indicated that there has been a progressive reduction in audit fees, which has given rise to concern over audit firms’ ability to maintain adequate levels of audit services and quality control. A research study in Italy concluded that mandatory audit firm rotation carries significant threats to audit quality from competitive pressures. However, CONSOB raised concerns about the study’s methodology, accuracy, data, and the appropriateness of its conclusions. Our review of the executive summary of the study also identified potential limitations on the reliability of the data used and methodological concerns that created uncertainties about the study’s conclusions. Italy has also considered partner rotation; however, because Italy is currently considering reducing the maximum auditor tenure from 9 years to 6 years, partner rotation has not been given further consideration. Brazil enacted a mandatory audit firm rotation requirement in May 1999 with a 5-year maximum term and a minimum time lag of 3 years before the predecessor auditor of record can return. The Comissao de Valores Mobiliarios (CVM), which is the Brazilian Securities Commission, indicated that the primary reason mandatory audit firm rotation was enacted was to strengthen audit supervision following accounting fraud at two banks (Banco Economico and Banco Nacional). Brazil does not have a partner rotation requirement, as the CVM believes that the requirement of rotating audit firms is stronger than changing partners within firms. However, as a component of its mandatory audit firm rotation requirement, Brazil prohibits an individual auditor who changes audit firms from auditing the same corporations previously audited. Starting in March 2002, the Monetary Authority of Singapore stipulated that banks incorporated in Singapore should not appoint the same public accounting firm for more than 5 consecutive financial years. However, this requirement does not apply to foreign banks operating in Singapore.
Banks incorporated in Singapore that have had the same public accounting firm for more than 5 years have until 2006 to change their audit firms. While a “time out” period is not stipulated, banks incorporated in Singapore shall not, except with the prior written approval of the Monetary Authority of Singapore, appoint the same audit firm for more than 5 consecutive years. In addition, listed companies are required under the Listing Rules of the Singapore Exchange to rotate audit partners-in-charge every 5 years. The primary reason Singapore instituted mandatory audit firm rotation for local banks was to promote the independence and effectiveness of external audits. In addition, mandatory audit firm rotation for local banks was cited by Singapore’s officials as a measure to help (1) safeguard against public accounting firms having an excessive focus on maintaining long-term commercial relationships with the banks they audit, which could make the firms too committed or beholden to the banks, (2) maintain the professionalism of audit firms, since with long-term relationships audit firms run the risk of compromising their objectivity by identifying too closely with the banks’ practices and cultures, and (3) bring a fresh perspective to the audit process, since with long-term relationships public accounting firms might become less alert to subtle but important changes in the bank’s circumstances. In Austria, the Commercial Law will require mandatory audit firm rotation every 6 years to strengthen the quality of audits and to enhance auditor independence by limiting the time of doing business between the audited company and its auditor of record. The 6-year mandatory audit firm rotation requirement becomes effective at the beginning of 2004, and there will be a minimum time lag of 1 year before the predecessor auditor of record can return. Austria does not have a partner rotation requirement; however, anyone who serves as the audit partner of a public company for 6 consecutive years will not be allowed to continue to serve in that capacity by becoming employed by the company’s successor auditor. In January 2003, the United Kingdom adopted the recommendations of the Co-ordinating Group on Audit and Accounting Issues (CGAA) to strengthen the audit partner rotation requirements by reducing the maximum period for rotation of the lead audit partner from 7 years to 5 years. The United Kingdom also adopted CGAA’s recommendation to limit the maximum period for rotation of the other key audit partners to 7 years. According to the CGAA report, the rotation of the audit engagement partner has been a requirement in the United Kingdom for many years, and the United Kingdom concluded that the requirements for the rotation of audit partners played an important role in upholding auditor independence. With respect to the issue of mandatory audit firm rotation, the United Kingdom supports CGAA’s recommendations, which concluded that the balance of advantage is against requiring the mandatory rotation of audit firms.
The primary arguments against mandatory audit firm rotation, as cited in the CGAA report, include the possible negative effects on audit quality and effectiveness in the first years following a change, the substantial costs resulting from a requirement to switch auditors regularly, the lack of strong evidence of a positive impact on audit quality, the potential difficulty or impossibility of identifying a willing and able audit firm that can accept the audit without violating independence requirements in a concentrated listed company audit market, and the competitive implications of such a requirement. However, CGAA also recommended that audit committees consider changing their auditor of record when the audit tenure reaches 15 to 20 years. In France, audit partner rotation had been required since 1998 by the French Code of Ethics of the accounting profession. However, the requirement was not enforceable because the Code of Ethics had not specified any maximum length for mandatory rotation of audit partners. In August 2003, France promulgated the French Act on Strengthening of Financial Security, which makes it illegal for an audit partner to sign more than six annual audit reports. The main alternative to mandatory audit firm rotation is the French requirement that two firms be engaged in the audit of entities issuing consolidated financial statements, which has been in place since 1985 and was reincluded in the August 2003 promulgation of the French Act on Strengthening of Financial Security. According to the Deputy Chief Accountant of the Commission des Operations de Bourse, mandatory audit firm rotation is not required in France primarily because of concern over the potential impairment of audit quality due to the new auditor’s lack of knowledge of the company’s operations. The Comision Nacional del Mercado de Valores (CNMV), the agency in charge of supervising and inspecting the Spanish stock markets and the activities of all the participants in those markets, indicated that from 1989 through 1995, Spain had a mandatory audit firm rotation requirement with a maximum audit term of 9 years, which included mandatory retendering every 3 years. The main objectives of this former requirement were to enhance auditors’ independence and to promote fair competition. However, in 1995, the Spanish “Company Law” and the Spanish “Audit Law” were amended, effectively eliminating the mandatory audit firm rotation requirement, by allowing that “after the expiration of the initial period (minimum 3 years, maximum 9 years), the same auditor could be re-hired by the shareholders on an annual basis.” The Director of the CNMV indicated that the 9-year mandatory audit firm rotation requirement was abandoned because the main objective of increased competition among audit firms had been achieved and because listed companies incurred increased training costs with a completely new team of auditors from a new public accounting firm. In November 2002, the Spanish “Audit Law” was amended to introduce a new requirement under which “all audit-engaged team” members (including audit partners, managers, supervisors, and junior staff) have to rotate every 7 years in certain types of companies, which include all listed companies, companies subject to public supervision, and companies with annual revenues over 30 million euros.
In January 2003, the Royal Nederlands Instituut van Register Accountants (NIvRA) and the Nederlandse Orde van Accountants-Administratieconsulenten (NOvAA), the bodies that represent the accounting profession in the Netherlands and are responsible for its qualifications and regulation, adopted the recommendation of CGAA to strengthen the audit partner rotation requirements by reducing the maximum period for rotation of the engagement audit partner from 7 years to 5 years and to limit the maximum period for rotation of the other key audit partners to 7 years. The adoption of these measures by both NIvRA and NOvAA made these requirements a part of the code of conduct for auditors. A representative of the Netherlands Authority for the Financial Markets indicated that the Dutch government is in the process of promulgating these audit partner rotation regulations into law, under which the requirement will apply only to public interest entities. In Japan, the Amended Certified Public Accountant Law was passed in May 2003, and beginning on April 1, 2004, audit partners and reviewing partners will be prohibited from being engaged in auditing the same listed company over a period of 7 consecutive years. Mandatory audit firm rotation has never been required in Japan, and public companies have never been encouraged to voluntarily pursue audit firm rotation. While Japan agreed with the December 2002 report issued by the Subcommittee on Regulations of Certified Public Accountants of the Financial System Council that mandatory audit firm rotation will need further consideration in the future, Japan’s securities regulator stated that mandatory audit firm rotation was not supported because of concerns that it (1) may cause confusion given the concentration of audit business held by large public accounting firms, (2) is not required in major countries other than Italy, (3) may significantly lower the quality of audits due to the need to arrange newly organized audits, and (4) would result in greater cost of implementation under the current concentration of audit business held by large public accounting firms. There are currently no Canadian requirements for mandatory audit firm rotation. However, mandatory audit firm rotation was included in Canadian banking legislation from shortly after the 1923 failure of the Home Bank until the December 1991 revision of the Bank Act. The Bank Act required that two firms audit a chartered bank, but that the same two firms could not perform more than two consecutive audits. As a result, one of the two firms would have to rotate off the audit for a minimum of 2 years. According to Canadian officials, in practice this requirement was implemented in two different ways. Some banks appointed a panel of three audit firms with one of the three firms being a permanent auditor while the other two firms rotated every 2 years. Other banks appointed a panel of three audit firms and rotated among the three firms. Generally, the firm that was in its “off year” did not completely step away from the audit of the bank and would maintain at least a watch on developments in the bank’s business and financial reporting to ensure that it was knowledgeable enough to step back in when it rotated on again. One of the primary benefits of the system was believed to be that the use of two firms facilitated an independent review of the loan portfolio.
This new perspective was generally considered to be a useful safeguard, and it was believed that the second firm would not bring with it an element of additional cost. The rotation element of the system was considered to bring with it an additional element of security by ensuring that issues were reviewed regularly by auditors with a fresh perspective, thus minimizing the risk of a problem festering because an issue had been decided on once and never reevaluated. After the 1923 failure of the Home Bank, the dual auditor requirement, with mandatory audit firm rotation for one of the two audit firms every 2 years, was in place for over 60 years and was considered to be one of the key safeguards in the bank governance system. However, in 1985 two regional banks in the province of Alberta failed despite the existence of the dual auditor system. A subsequent government inquiry into the failures found that the Office of the Inspector General of Banks, now the Office of the Superintendent of Financial Institutions (OSFI), had relied heavily on the external auditors, and the inquiry recommended some direct examination by the supervisor of the quality of banks’ loan portfolios. Until 1991, only Canadian banks were required to rotate their auditor of record. In 1991, in line with a push for harmonized supervision, banking legislation was amended to reduce the requirement to one audit firm, and the mandatory audit firm rotation requirement was abandoned with the revision of the Bank Act. According to Canadian officials, one of the reasons for the abandonment was that many argued that the cost was not matched by the benefits, and it was noted that Canada seemed to be largely alone in the world in imposing such a system. There were few strong advocates for retaining the system, and questions were raised as to whether it was in fact a valuable element in protecting the safety and soundness of the banking system. Mandatory audit firm rotation is not currently being considered in Canada. Instead, as of July 2003, mandatory rotation of audit partners for all public companies was being considered by Canada’s securities regulator, supported by a new model of independent oversight and inspection of auditors of public companies. The accounting profession, through the Public Interest and Integrity Committee of the Canadian Institute of Chartered Accountants and in collaboration with provincial institutes, is considering developing an updated independence standard that considers certain requirements of the Sarbanes-Oxley Act for Canadian application to listed financial institutions regulated by OSFI. This independence standard will focus on mandatory rotation of the engagement partner and other key members of the firm involved with the audit, rather than rotation of the firm auditing a listed enterprise regulated by OSFI. According to Canadian officials, extending this requirement to nonlisted financial institutions is under consideration, but the outcome will not be known for some time. In Germany, according to the German Commercial Code, a qualified auditor or certified accounting firm, beginning with annual financial statements issued after December 31, 2001, may not be an auditor of a stock corporation that has issued officially listed shares if it employs a certified accountant who has signed the certification concerning the examination of the annual financial statements or the consolidated financial statements of the corporation more than six times in the 10 years prior to the fiscal year to be examined.
According to German officials, the principle of audit partner rotation has proven to be successful, and there are no plans to switch to a model based on mandatory audit firm rotation because the purpose of guaranteeing an independent audit of a company’s financial statements can be efficiently achieved by audit partner rotation. However, in order to improve investor protection and company integrity, Germany’s federal government published a 10-point paper, which included a planned amendment to the corresponding Commercial Code regulations to shorten the period after which an auditor of record must rotate to 5 years and to include all responsible audit partners in the rotation requirement. In addition to those individuals named above, William E. Boutboul, Cheryl E. Clark, Robert W. Gramling, Wilfred B. Holloway, Michael C. Hrapsky, Catherine M. Hurley, Charles E. Norfleet, Judy K. Pagano, Sidney H. Schwartz, Jason O. Strange, Patricia A. Summers, and Walter K. Vance made key contributions to this report.
Following major failures in corporate financial reporting, the Sarbanes-Oxley Act of 2002 was enacted to protect investors through requirements intended to improve the accuracy and reliability of corporate disclosures and to restore investor confidence. The act included reforms intended to strengthen auditor independence and to improve audit quality. Mandatory audit firm rotation (setting a limit on the period of years a public accounting firm may audit a particular company's financial statements) was considered as a reform to enhance auditor independence and audit quality during the congressional hearings that preceded the act, but it was not included in the act. The Congress decided that mandatory audit firm rotation needed further study and required GAO to study the potential effects of requiring rotation of the public accounting firms that audit public companies registered with the Securities and Exchange Commission. The arguments for and against mandatory audit firm rotation concern whether the independence of a public accounting firm auditing a company's financial statements is adversely affected by a firm's long-term relationship with the client and the desire to retain the client. Concerns about the potential effects of mandatory audit firm rotation include whether its intended benefits would outweigh the costs and the loss of company-specific knowledge gained by an audit firm through years of experience auditing the client. In addition, questions exist about whether the Sarbanes-Oxley Act requirements for reform will accomplish the intended benefits of mandatory audit firm rotation. In surveys conducted as part of its study, GAO found that almost all of the largest public accounting firms and Fortune 1000 publicly traded companies believe that the costs of mandatory audit firm rotation are likely to exceed the benefits. Most believe that the current requirements for audit partner rotation, auditor independence, and other reforms, when fully implemented, will sufficiently achieve the intended benefits of mandatory audit firm rotation. Moreover, in interviews with other stakeholders, including institutional investors, stock market regulators, bankers, accountants, and consumer advocacy groups, GAO found the views of these stakeholders to be consistent with the overall views of those who responded to its surveys. GAO believes that mandatory audit firm rotation may not be the most efficient way to strengthen auditor independence and improve audit quality, considering the additional financial costs and the loss of institutional knowledge of the public company's previous auditor of record, as well as the current reforms being implemented. The potential benefits of mandatory audit firm rotation are harder to predict and quantify, though GAO is fairly certain that there will be additional costs. Several years' experience with implementation of the Sarbanes-Oxley Act's reforms is needed, GAO believes, before the full effect of the act's requirements can be assessed. GAO therefore believes that the most prudent course of action at this time is for the Securities and Exchange Commission and the Public Company Accounting Oversight Board to monitor and evaluate the effectiveness of existing requirements for enhancing auditor independence and audit quality. GAO believes audit committees, with their increased responsibilities under the act, can also play an important role in ensuring auditor independence.
To fulfill this role, audit committees must maintain independence and have adequate resources. Finally, for any system to function effectively, there must be incentives for parties to do the right thing, adequate transparency over what is being done, and appropriate accountability if the right things are not done.
DHS coordinates the federal government's overall response to or recovery from terrorist attacks. The Centers for Disease Control and Prevention (CDC) within the U.S. Department of Health and Human Services is the primary agency for the public health response to a biological terrorism attack or naturally occurring outbreak, and the FBI within DOJ is the primary agency for the criminal investigation of incidents of bioterrorism.

In its recommended guidelines for laboratories engaged in microbial forensic analyses, the Scientific Working Group on Microbial Genetics and Forensics (SWGMGF) defines attribution as "the information obtained regarding the identification or source of a material to the degree that it can be ascertained." As part of the effort to deter biological terrorism and strengthen the law enforcement response to such an act, Homeland Security Presidential Directive (HSPD) 10, "Biodefense for the 21st Century," established within DHS a dedicated central microbial forensic laboratory, known as the NBFAC, to provide bioforensics analysis of evidence associated with such an event. The directive established the NBFAC as "the lead federal facility to conduct and facilitate technical forensic analysis and interpretation of materials recovered from biocrime and bioterror investigations in support of the appropriate lead federal agency."

DHS Science and Technology (S&T) is charged with accelerating the delivery of enhanced technological capabilities to meet requirements and fill capability gaps in support of DHS agencies' missions. Pursuant to this mission, the DHS Chemical and Biological Defense (CBD) Division seeks technologies to defend against a chemical or biological attack. The division is also charged with pursuing research to improve response and restoration, conducting threat risk assessments, and investing in bioforensics R&D. In this regard, the Bioforensics R&D Program, according to DHS, supports NBFAC's operational threat agent identification and characterization through investments in bioforensics research and next-generation technologies, including molecular biology, genomic comparison techniques, genotyping assays, and physical and chemical analysis of sample matrices, to better understand the origin, evolutionary history, production method, and dissemination mechanism associated with the malicious use of biological agents.

Bioforensics has been defined as an interdisciplinary field of microbiology devoted to the development, evaluation, validation, and application of methods to detect and fully characterize microbial samples containing a biological agent or its components for the purpose of making statistically meaningful comparative analyses. Attributing an act to a perpetrator requires different types of information and analysis, both traditional and bioforensic. Information produced by forensic examination can result in an investigative lead or provide support for the investigation. Bioforensics capabilities used to analyze evidence may show how, when, and where microorganisms were grown, as well as potential methods for dissemination, which assists attribution. Bioforensics evidence could include the agent that was released, toxins, nucleic acids, and protein signatures; it could also include contaminants, additives, and evidence of preparation methods. Traditional evidence could include fingerprints, hair, fibers, documents, photos, firearms, and body fluids.
In a bioforensics case, the intent would likely be to gather sufficient information to allow a comparison of an evidentiary sample with a known reference sample to assist in supporting source attribution. Evidence from a bioforensics investigation must also meet the scientific community's standards for evidence as well as a criminal court's standards for legal admissibility.

DHS has developed strategic plans and goals related to bioforensics attribution and identified some key bioforensics capability needs. However, according to DHS officials, DHS did not perform a bioforensics capability gap analysis but rather used an informal approach to identify capability needs and gaps. DHS officials stated that they did not document the process DHS used or the results of its informal approach, and they told us that there is not a complete list of the gaps identified using that approach. Finally, although they indicated that DHS had focused resources toward addressing those gaps, they could not provide documentation of the bioforensics capability requirements and other relevant information to support the capability gap identification and resource allocation decisions that were made.

According to DHS officials, DHS relies on the DHS S&T-managed NBFAC and bioforensics R&D programs to identify bioforensics capability needs and gaps. However, DHS does not have a complete list of its bioforensics capability gaps because it has not performed a bioforensics capability gap analysis. According to the DHS Systems Engineering Life Cycle guide, a gap analysis is a best practice that is essential to understanding whether capabilities exist that can meet requirements or whether they must be developed. In addition, some DHS officials told us that performing a capability gap analysis is a best practice that DHS programs should follow, even in the absence of DHS guidance to do so.

In interviews and written responses, DHS officials described generally how DHS identified and documented capability needs and gaps. They told us that they identify priorities each fiscal year, develop projects to meet these priorities, and develop the NBFAC Annual Plan to address them. However, they also told us that there is no documentation of the process or results of the informal approach used. According to federal standards for internal control, documentation is necessary for the effective design, implementation, and operation of an entity's internal control system. Lacking documentation of the processes, discussions, analyses, decisions, or any other activities performed to identify and prioritize capability gaps, DHS's rationale for the needs and gaps on which it chose to focus its resources is unclear. Identifying and prioritizing capability gaps enables the proper allocation of resources to the highest priority needs. Thus, without a capability gap analysis, DHS may not have identified and prioritized all capability needs and gaps and so may not be allocating resources to address the most significant gaps to meet its mission needs. DHS officials told us that no complete list of bioforensics capability gaps has been created since 2010.
However, they told us that DHS had developed a document in 2013-2014—the Bioforensics Roadmap (Roadmap)—as a means to identify, and achieve consensus from stakeholders on, the key bioforensics capability needs on which to focus DHS resources. DHS officials said that the Roadmap lays out the Bioforensics R&D Program's execution and also lists the key needs on which DHS has focused, or will focus, resources, along with the associated programs to address those needs.

DHS developed strategic plans for its National Biodefense Analysis and Countermeasures Center (NBACC) in 2012 and for Chemical and Biological Defense in 2013, and it documented NBFAC strategic goals in 2013. These documents include strategic objectives related to bioforensics that could be used to guide a capability gap analysis.

A former DHS official who had participated in DHS's process for identifying bioforensics gaps told us that the process was informal and undocumented, summarizing it as generating a list of topics and issuing broad agency announcements (BAA) to address them. Other DHS officials confirmed that the process was informal and that there was no documentation of the results. They told us that they are unaware of the details of the processes and activities performed to identify capability needs and gaps. However, they did describe the informal process generally: it included working with key interagency partners and other stakeholders—such as the FBI—and engaging in discussions, exchanging e-mails, and holding periodic meetings. DHS officials also stated that they met informally as needed with the FBI and some intelligence agencies to discuss needs and gaps. These officials explained that the discussions with DHS's interagency partners were part of a larger process to develop and manage the NBFAC and bioforensics R&D programs. They said that DHS coordinated with the FBI and the Intelligence Community to focus these programs' activities on meeting the needs of these end users. Further, they said that the Roadmap was vetted by other agencies and researchers. In addition, according to FBI officials, the FBI conducted assessments of its capabilities by working with DHS S&T and providing direction to DHS about its capability needs. FBI officials stated that sometimes the FBI does not know there is a bioforensics capability gap until it encounters one during an investigation.

Independent assessments of the DHS S&T Bioforensics R&D program have raised similar concerns about how DHS has identified and prioritized bioforensics capability gaps. For example, external assessments of the CBD portfolio from 2012 and 2014 found a lack of clarity about how the Bioforensics R&D program identified and prioritized capability gaps and why some projects were chosen, and the reviewers recommended that the program manager describe the program's basis for identifying capability gaps and selecting projects in future reviews. Specifically, a November 2014 review acknowledged the need for enhancing bioforensics capabilities but questioned the lack of information on how the gaps in capability or knowledge guiding R&D investments were identified and prioritized. A 2012 review stated that it was unclear why some research studies were chosen over others, as well as how the selection of projects was linked to, or justified against, the risk assessment.

Through the Roadmap, key bioforensics efforts have been identified, which DHS officials characterized as gaps.
The efforts listed in the Roadmap include (1) operational infrastructure, (2) sample collection and preservation, (3) sample extraction, (4) identification and characterization, and (5) data analysis and integration. The Roadmap also includes existing and future CBD and other agency programs, as well as commercial development, linking them to the particular capability gap they address. The identification and characterization effort in the Roadmap includes developing capabilities to characterize unique, novel, and engineered agents; characterize unknowns (emerging or synthetic organisms); identify and characterize toxins, such as ricin; and quantify and communicate uncertainty, which is of particular significance when using metagenomics and proteomics capabilities. A DHS official and FBI officials also told us about other items they considered to be gaps, including difficulties in interpreting metagenomics data, limited sequences for select organisms in the reference database, and the need for a greater ability to examine proteins. However, the Roadmap provides no details about the bioforensics capability gaps beyond the projected time frame of 2014 to 2020 for completion of the agency programs. Figure 1 shows the Roadmap. According to DHS officials, bridging the broad gap areas set out in the Roadmap would require resources far beyond those available to DHS.

In addition to DHS and the FBI, other organizations, such as the NRC of the NAS and the NSTC of the Office of Science and Technology Policy (OSTP), have been involved in identifying bioforensics capability needs. The NRC Committee on Science Needs for Microbial Forensics was an international group of experts that identified scientific challenges that must be met to improve the capability of bioforensics to investigate suspected outbreaks and to provide evidence of sufficient quality to support responses, legal proceedings, and the development of government policies. Similarly, OSTP's National Research and Development Strategy for Microbial Forensics was established to guide and focus U.S. government research efforts to advance the discipline of bioforensics.

With the assistance of the NAS, we convened our own meeting of experts in April 2016 to review and update the capability needs that the NRC and OSTP identified and to identify additional needs that might be useful for DHS and the FBI to consider when they identify their capability needs as part of a bioforensics capability gap analysis. Some of the experts provided alternative views about certain aspects of the identified capability needs. While some of the identified bioforensics capability needs overlap with efforts listed in the DHS Roadmap, they were not formulated specifically with DHS requirements in mind and so may not all be relevant to DHS. However, we believe that this information could help inform DHS's and the FBI's efforts to identify capability needs and prioritize gaps. Starting with the capability needs identified by the NAS and OSTP, the experts who participated in the GAO meeting identified and generally agreed upon the capability needs listed in table 1. Some of the needs the experts confirmed as still relevant were similar to those identified by DHS and the FBI, and some were different.
For example, like DHS and the FBI, the experts agreed that an ability to characterize genetically engineered agents was needed, but they also suggested a need to evaluate existing protocols, such as those for DNA sequencing, to determine whether they have been validated. The identified needs in table 1 can generally be grouped into three broad areas: (1) science, (2) technology and methods, and (3) bioinformatics and data. There are six needed capabilities within the science area, five within technology and methods, and three within bioinformatics and data.

While the majority of the experts agreed generally with the bioforensics capability needs in the three broad areas listed in table 1, some experts had alternative views about some of the needs. For example, some experts thought that some of the needs should have a different focus or should be given a lower priority than others. In addition, some experts suggested that, because it may be impossible to characterize all microbes, the first science capability need—the identification, monitoring, and characterization of microbial species—should instead focus on (1) developing a dynamic process and infrastructure for rapid collection and typing when an event occurs or (2) using a species-agnostic approach to identify both natural and synthetic microbes, such as focusing on genetic mechanisms rather than organisms. Additionally, some experts stated that limited emphasis should be placed on the third science need—developing methods to distinguish among natural, accidental, and deliberate outbreaks. They indicated that other investigatory data would be available that would be better suited to making this determination. Instead, they said, the focus should be on identifying introductions of additional virulence or genetic elements into an organism and determining whether other elements suggest that somebody has modified the organism.

There was also disagreement about the microbes on which DHS and the FBI should focus. Some experts stated that, because new pathogens are difficult to create, the greater concern is naturally occurring or modified microbes. Further, they said that distinguishing among existing organisms already presents a difficult enough challenge. Finally, some experts said that the focus should be on microbes in laboratories because they are the most relevant to bioforensics and are not typically studied by the larger scientific community.

The experts disagreed on whether the sixth science capability need—metagenomics research—was important for bioforensics. Some stated that metagenomics is worth exploring as a future capability but that there are easier problems that need to be solved first. Others said that a metagenomics capability is not necessary for analyzing simple samples but might be useful for analyzing complex ones. In addition, one expert questioned the fifth technology and methods need—the need for nongenetic orthogonal methods—indicating that courts do not require two different methods to determine a result.

For some of the bioforensics capability needs, experts indicated that other groups would develop the capabilities, so DHS or the FBI would not need to invest in them. For example, some experts said that the FBI should not focus its effort on the second science need—researching the mechanisms of pathogenicity—because the gap is unlikely to be closed quickly and other groups are already addressing it.
Regarding the second technology and methods need—adapting assay and sequencing technologies—some experts indicated that the commercial market will drive the development of improved sequencing technologies. Similarly, some experts said that agencies such as the FBI and CDC are already working to address the first bioinformatics and data need—the creation of data repositories and reference collections for pathogens and other microorganisms.

The specific bioforensics casework requirements that formed the basis for DHS's efforts were not known to the experts who participated in the meeting, and, consequently, the list of capability needs cannot be directly compared with the efforts in the Roadmap. Nevertheless, these capability needs, along with the alternative views presented, could help inform DHS's and the FBI's efforts to identify and prioritize bioforensics capability gaps, and the agencies could consider this information as part of any capability gap analysis.

DHS and the FBI have taken actions to enhance some bioforensics capabilities but face numerous challenges before they can achieve the desired enhancements. Actions include not only the concrete steps that DHS has taken to enhance its capabilities, such as funding R&D activities, but also key strategic decisions underlying those actions. In this context, DHS actions include (1) developing methods-based capabilities to provide a broader bioforensics capability, (2) funding R&D activities to enhance its capabilities, (3) developing capabilities for short-term casework needs, (4) establishing an in-house reference database, and (5) developing capabilities for characterizing genetically engineered and unique, novel, or unknown (emerging or synthetic) agents.

However, to achieve the capability enhancements they are pursuing, DHS and the FBI must overcome numerous challenges. These include (1) achieving the ability to interpret and communicate results from the bioforensics capabilities with a defined statistical confidence; (2) developing statistical frameworks, quantitative measures, and quality reference collections; (3) ensuring that DHS's Bioforensic Repository Collection (BRC) contains quality data and appropriate agent strains; and (4) determining future casework needs relative to views of the evolving threat landscape. In addition, experts at our meeting and those we interviewed identified challenges regarding reference databases, the use of statistical frameworks, and the communication of results.

DHS has taken several actions to enhance some of NBFAC's bioforensics capabilities for use on FBI casework. For example, we found that since 2010, DHS, with FBI input, has made a strategic decision to focus on the development of methods-based capabilities rather than agent-based capabilities for identifying and characterizing biological agents. This strategy is reflected in the 2012 NBACC strategic plan and its goals for NBFAC. Methods-based capabilities, according to DHS's written responses to our questions, include genomics (whole genome sequencing and bioinformatics analysis) and analytical chemistry (mass spectrometry and scanning and transmission electron microscopy). In addition, DHS will maintain and enhance its agent-based capabilities in the interim—which include molecular biology, virology, bacteriology, and toxinology—some of which will always be necessary for certain types of casework. Both types of capabilities will reside at NBFAC. The FBI agrees that such enhanced capabilities are needed.
In responding to our questions, the FBI stated that DHS's approach will provide "an adaptive and agile capability to characterize unique, novel, engineered or emerging biological agents." While agreeing with the need to develop methods-based capabilities, however, the FBI also acknowledged in its responses that some agent-specific capabilities will always be needed for its investigations.

Methods-based capabilities, according to DHS's written responses to our questions, can potentially provide NBFAC with a broader bioforensics capability. For example, DHS stated that genomic analysis can use unique features as signatures to differentiate a particular isolate from others; such features could include single nucleotide polymorphisms (SNP), rare variants, and epigenetic variation. Further, DHS stated that genomics-based characterization—including the ability to characterize background nucleic acids that may be derived from the environment in which the sample originated—represents a unique investigative signature that agent-based bioforensics procedures would miss. According to experts at our meeting, signatures range from anything that aids an investigation to genetic signatures, syndromic signatures, metadata, and proteins, as well as other molecular signatures.

DHS officials told us that the use of methods-based approaches, such as genomics, has dramatically reduced investigation time frames. They said that DHS can now detect and sequence not only select agents but also a number of other biological agents (even bioengineered ones) in a fraction of the time; what took years to complete in the 2001 Amerithrax case can now happen much more quickly. An FBI official further elaborated, stating that improvements in techniques and technologies have led to potential increases in obtainable information and significant reductions in analysis times supporting bioterrorism investigations.

In contrast, we found that prior to 2010, NBFAC's bioforensics capabilities focused on identifying biological agents on the CDC and USDA select agent lists. In this regard, in its written responses to our questions, DHS stated that it has established International Organization for Standardization (ISO) 17025-accredited, complementary assays—such as culture, real-time polymerase chain reaction (PCR), and immunoassays—for most traditional bacterial, viral, and toxin agents. However, unlike methods-based capabilities, DHS stated, these assays require prior knowledge of an organism and the maintenance of agent-specific reagents. Further, agent-based capabilities do not cover a wide array of potential threats, including genetically modified or de novo agents, and have not been developed for all known human pathogens, especially those that may not be cultivable. Thus, according to DHS's responses, a methods-based approach will ultimately provide NBFAC not only with capabilities for analyzing challenging samples but also with a broader, more comprehensive bioforensics capability for characterizing unique, engineered, or emerging biological agents. Table 2 shows these two types of capabilities and the types of analyses they could be used to perform on evidentiary samples. For example, bacteriology involves culturing and deriving phenotypic information on an agent to identify and characterize it.
However, toxinology, another agent-based capability, could involve identifying and characterizing protein toxins such as ricin using an immunoassay—such as ELISA, an enzyme-linked immunosorbent assay. In addition, analytical chemistry, a methods-based capability, could be used to characterize toxins by mass spectrometry, which also supports proteomics analysis. That is, both types of capability could be involved in analyzing toxins. According to DHS's responses to our questions, each of its mass spectrometry methods functions independently and provides complementary information to confirm results derived from immunoassays and biological activity assays for protein toxins. Electron microscopy—a methods-based capability—involves nonbiological analysis of evidence samples; for example, it could provide elemental analysis of an agent. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) can be used to provide images of non-spore-forming bacteria and viruses, and castor bean products, among others.

DHS has solicited and funded R&D projects to enhance NBFAC's bioforensics capabilities—completion of which DHS anticipates will extend beyond 2025. The R&D is related to areas in which DHS has stated there are capability gaps, or it is linked to some of the program responses listed in the 2014 Roadmap. It also reflects DHS's shift toward methods-based approaches, such as genomics and proteomics. Using a BAA mechanism, DHS solicited research proposals for R&D related to enhancing its bioforensics capabilities. To more clearly describe the type of research sought, the BAAs specified not only broad topic areas but also technical topic areas—more specific, technical details about the type of research being solicited. Subsequently, DHS awarded about 36 contracts for solutions or products addressing areas related to bioforensics. According to the FBI's response to our questions, it is involved in the process from start to finish, including assisting in drafting the BAAs, participating in the proposal evaluation and selection process, and meeting with DHS and the contractors throughout the course of each contract.

Before being used in FBI casework, standard operating procedures (SOP) and other deliverables from the funded research would have to make the transition to NBFAC operations and potentially be accredited under ISO 17025, as appropriate. In responding to our questions, DHS stated that both ISO 17025 accreditation and deliverables such as publications provide the data necessary to support the general acceptance of a method within the scientific community and to meet the Daubert standards for admissibility of analysis in a federal prosecution. DHS officials explained that, to the extent possible, they publish their research results so that NBFAC's bioforensics techniques can be peer-reviewed, validated, and supported in court. Figure 2 shows the broad topic areas and the years in which research was solicited through the BAAs.

Broad and technical topic areas: Based on our review, both the broad topic areas in figure 2 and the underlying technical topic areas in the BAAs reflect the long-term, methods-based enhancements DHS seeks, as well as the enhancements to toxin analysis capabilities for the FBI's current casework needs.
For example, the following technical topic areas were included as part of the 2015 solicitation for bioforensics research: products to identify select agents, including toxins, with high confidence; next-generation and novel technologies to characterize biological threat agents for source attribution; bacterial populations of select agents with critical knowledge gaps, including C. botulinum and B. anthracis (North Africa, Middle East); high-confidence methods for metagenomics analysis of complex biologicals in complex samples to support whole genome sequencing; and informatics and statistical tools.

DHS-funded R&D: In line with the broad topic areas indicated in the figure—bacterial population genetics, sequence-based approach to bioforensics, and bioforensics research—we found that DHS-funded R&D contracts include the following areas: population genetics for forensics; biological toxin identification; metagenomics sequence data; statistical confidence in evidentiary materials based on bacterial forensics; proteomics of virus production; Bayesian taxonomic assignment for next-generation sequencing; and sequencing-based bioforensics analyses. The R&D supports DHS's efforts to develop methods-based capabilities, including sequencing methods to enable genomic analysis of any organism in any sample, as well as bioinformatics methods for de novo assembly, metagenomic classification, comparative analysis, identification of genetic engineering signatures, and the inference of biological function. For example, according to DHS's responses to our questions:

Population genetics: Research into population genetics, a project with a 5-year timeline, is published in the open scientific literature, with sequence data in GenBank; NBFAC used the information to better understand the genetic diversity of the organisms studied in that project. According to documentation we reviewed, results from such studies will refine understanding of the population genetics of certain select agents to better calculate match statistics in a forensic setting.

Biological toxin characterization: Regarding toxins, DHS has funded contracts to develop SOPs for protein toxin characterization using mass spectrometry, among other projects.

Metagenomics sequencing: DHS research into genetic issues is ongoing, and DHS is seeking a means for future use of metagenomics analyses on complex samples. DHS plans research into high-confidence metagenomics analysis of complex biological samples, as well as developing statistical models and software to identify the organisms in a complex sample and estimate their relative abundance, including further developing an existing system for probabilistic reconstruction of the taxonomic structure present in a metagenomic sample.

Bioforensic proteomics: DHS has also funded research on proteomics—including proteomics of virus production—and on the analysis of proteins and metabolites of unknown samples to complement genetic characterization.

Ricin: Ricin, a toxin derived from the beans of the castor plant (Ricinus communis), is one of the most poisonous naturally occurring substances. It is toxic to cells and damages all human organs. It is considered a select agent (toxin). No antidote is available.

DHS and the FBI Are Enhancing NBFAC's Biological Toxin Analysis Capabilities for Current Bioforensics Casework: Based on our review and DHS's responses, DHS's primary short-term focus is on bioforensics capabilities that address the FBI's current casework needs.
Such casework has included the FBI's investigation of multiple biocrimes involving the use of ricin, including a 2013 case in which ricin was sent to the U.S. President; NBFAC analyzed some of the samples in that case, according to the FBI's responses to our questions. FBI casework carried out by NBFAC involves the FBI's transporting evidentiary samples to NBFAC, which (1) develops a sample analysis plan (which could involve traditional as well as bioforensics analyses) for FBI approval, (2) conducts the analyses, and (3) reports the results to law enforcement, which uses them to inform the bioforensics investigation.

Based on our review, for a prosecution in a case involving ricin, the scientific evidence may need to establish that the toxin is present in an evidentiary sample. We found that a combination of analytical capabilities may be used to confirm this, with each detecting a specific target. For example, to confirm the presence of ricin in a sample, antibody tests, such as ELISA, and mass spectrometry can be used to detect the presence of ricin and to examine the protein, respectively. Added to these can be cell-free translation assays, which detect ricin through its biological activity.

We also found that NBFAC's capabilities for analyzing ricin initially included all of the independent capabilities above, with the exception of an accredited mass spectrometry capability for characterizing ricin and other toxins. NBFAC had contracted with a laboratory to examine protein toxins by mass spectrometry when it did not yet have that capability. Doing so, according to the FBI's responses, resulted in a 2- to 3-day delay, and the contract laboratory was not accredited under ISO 17025. As a result, the FBI responded, it requested that DHS develop an in-house, ISO 17025-accredited toxin analysis capability at NBFAC, and the FBI provided the equipment and funding for this transition.
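The confirmation logic described above—independent capabilities, each detecting a different property of the toxin, combined to support a conclusion about its presence—can be sketched in a few lines of code. The sketch below is purely illustrative and is not NBFAC's sample analysis plan; the assay names, results, and the two-positive decision rule are assumptions made for illustration.

```python
# Hypothetical sketch of orthogonal confirmation of a protein toxin.
# Each assay targets a different property, so agreement among
# independent methods strengthens a presence determination.

ASSAY_TARGETS = {
    "ELISA": "antibody binding",
    "mass_spectrometry": "protein identity",
    "cell_free_translation": "biological activity",
}

def orthogonal_call(results: dict[str, bool], required: int = 2) -> str:
    """Summarize independent assay results (True = positive)."""
    positives = [f"{a} ({ASSAY_TARGETS[a]})" for a, hit in results.items() if hit]
    if len(positives) >= required:
        return f"presence supported by {len(positives)} independent methods: {positives}"
    return f"insufficient orthogonal support ({len(positives)} positive): {positives}"

# Hypothetical results for one evidentiary sample.
print(orthogonal_call({"ELISA": True,
                       "mass_spectrometry": True,
                       "cell_free_translation": False}))
```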
Enhancing Genomics and Proteomics Is a Long-Term Effort: According to a DHS official, DHS is continuing to enhance both genomics and proteomics capabilities, which is expected to provide a complementary capability linking proteomic analysis to metagenomics analysis of complex samples, thereby providing additional information about an agent. Further, according to this official, genomics and mass spectrometry will support the development of metagenomics and proteomics. Based on our review, some of the ways in which metagenomics capabilities may be used are as follows:

Metagenomics: Metagenomics allows sampling of the genomes of microbes without culturing them; rather, the DNA is isolated directly from the sample before genome sequencing. A DHS official stated that DHS plans to provide comprehensive metagenomics analysis of complex evidentiary samples. These types of samples may contain both microbial and human DNA as well as mixtures that derive from possible processing steps (growth media, etc.), which could provide links to a possible source. In the context of metagenomics, according to a DHS official, "complex samples may be from any environment and can be mixtures of many types of organisms. The simplest of metagenomics samples may be viruses in the tissue culture which contains the genomes of two organisms, the cell line, and the virus. The most complex metagenomic samples are from soil samples. Soil samples may contain an organism of interest, at low concentration, but also will likely have DNA and other biological materials from things such as plants, animals, fungi, bacteria and viruses. The ability for the forensic laboratory to collect metagenomic data and analyze it relies on the development of tools for metagenomics." In this regard, according to an expert whom we contacted, metagenomics—the evaluation of environmental samples for genetic information—is a task that may not give DHS adequate returns on its investment; it is a very time-consuming technique and should probably be left to academia or industry. Once these methods are developed, DHS would be able to apply the most applicable techniques, according to this expert. Figure 3 illustrates the possible composition of a complex sample. Metagenomics analysis of a complex sample could reveal the presence of DNA and other types of material at different percentages, including eukaryotic nucleic acids. However, because evaluating metagenomic sequence data is based on relative abundances, large amounts of data may be generated, and interpreting these data and their meaning in terms of an agent's source will be necessary.

Based on our review, some of the ways in which proteomics capabilities may be used are as follows:

Proteomics: Proteomics is the study of proteomes—the sets of proteins produced in an organism, system, or biological context. In response to our questions, DHS advised us that it plans to establish a proteomics capability for NBFAC sometime in the future. Mass spectrometry is being used for proteomics analysis and is able to provide information indicative of a particular protein. While proteomics does not replace genomic analysis, it may provide additional information if the microbial DNA is too damaged for analysis, according to an expert who attended our April meeting. In addition, according to this expert, there are differences between naturally occurring microbes and those grown in laboratories, including differences in growth patterns, and protein expression varies with the available food sources. Consequently, analyzing the microbes to determine the growth medium used could be useful for bioforensics. Further, protein profiles have the potential to provide information on the environment that a microorganism has experienced; cultivation, for example, might provide information about the skills of the people who grew the organisms. Thus, proteomics provides a different level of discrimination from that of genomics. Finally, according to another expert whom we contacted, proteomics analysis should become a valuable tool for bioforensics and may rival genetic information once methods have matured.

Based on DHS's responses to our questions, achieving a genomics and proteomics capability will also require (1) a bioinformatics and statistical framework for inference and analysis of unknowns in microbial isolates and (2) significantly expanded genome databases and an understanding of the underlying determinants of various pathogenic traits. DHS responded that both of these are currently funding priorities. In this regard, DHS stated that NBFAC continues to expand a major genomics capability that includes multiple, complementary sequencing platforms and advanced bioinformatics within a high-performance computing environment.
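To make the metagenomics idea above concrete, the following toy sketch classifies sequencing reads against reference genomes by shared k-mers and reports relative abundance. It is illustrative only—real classifiers work on genome-scale indexes with error models and statistical support—and every sequence and name in it is hypothetical.

```python
from collections import Counter

# Toy metagenomic classification: assign each sequencing read to the
# reference whose k-mers it shares most, then report relative abundance.
# Hypothetical data; real tools use genome-scale indexes and error models.

K = 4

def kmers(seq: str) -> set[str]:
    """Return the set of length-K substrings of a sequence."""
    return {seq[i:i + K] for i in range(len(seq) - K + 1)}

references = {  # hypothetical reference genomes
    "bacterium_X": "ACGTACGTTGCAACGTTAGC",
    "virus_Y": "TTGACCGGTAACCGGTTAAC",
}
ref_kmers = {name: kmers(seq) for name, seq in references.items()}

reads = ["ACGTACGT", "TTGACCGG", "ACGTTAGC", "AACCGGTT", "GGGGGGGG"]

counts = Counter()
for read in reads:
    shared = {name: len(kmers(read) & kms) for name, kms in ref_kmers.items()}
    best, overlap = max(shared.items(), key=lambda kv: kv[1])
    counts[best if overlap > 0 else "unclassified"] += 1

total = sum(counts.values())
for name, n in counts.most_common():
    print(f"{name}: {n}/{total} reads ({100 * n / total:.0f}% relative abundance)")
```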
In its responses, DHS termed this genomics-based approach "agent-agnostic," as the analytical procedures require no knowledge of which agent might be present in a sample. However, while DHS also stated that it provides confidence estimates for aspects of its genome sequencing and continues the incremental development of its genomics capability, it acknowledged the need for statistical frameworks. For example, it stated that "the issues regarding statistical uncertainty require the development of statistical frameworks to ensure that attribution signatures are clearly defined and understood; that there is standardization, validation and verification of the signatures; that relevant source populations are fully characterized and understood; the limitations of measurement tools are known, and the statistical methods being used are appropriate for the signatures data." Finally, DHS stated in its responses that understanding and communicating the uncertainty is of particular significance when using metagenomics and proteomics on complex sample types.

Other experts we interviewed also agreed that there is a need for more flexible bioforensics capabilities. For example, an expert from our meeting stated that characterizing an agent is currently achieved by using sequence data, and learning what can be exploited for this purpose is in its early stages. In addition, a U.K. official we interviewed said that while a priority list of organisms will still be needed for responding to emerging pathogens and diseases or synthetic biological agents, a more agnostic, "horizon spanning" approach will now be used.

Nevertheless, not all experts agreed that DHS should pursue metagenomics for bioforensics purposes, at least not in the short term. For example, in a 2016 independent assessment of the DHS Bioforensics R&D program, reviewers recommended a "more measured investment" in metagenomics and expressed doubt that an operational metagenomics capability was likely to be available at NBFAC within 5 years. Instead, they suggested that DHS take a more proactive investment stance by following developments in the field that were occurring elsewhere.

Completion of Capability Enhancements: Based on our review of the roadmaps that DHS provided to us regarding the bioforensics enhancements, DHS estimates that most of the R&D tasks associated with capability enhancements will be completed by 2025 or later, with some exceptions. For example, in July 2016, a DHS official indicated that DHS's new mass spectrometry casework capability may be available after it has been accredited under ISO 17025 over the following 12 months (in 2017). In addition, DHS has a 3- to 5-year focus for developing metagenomics so that it can be used on casework. Completion of more advanced enhancements will likely extend beyond 2025, according to DHS's responses to our questions, particularly for genomics and proteomics, areas that are still evolving. Activities include establishing (1) integrated processes within metagenomics analyses to facilitate high-resolution characterization of all agents and nucleic acids in complex samples; (2) a bioinformatic and statistical framework for phenotypic inference and analysis of "unknown unknown" microbial isolates; and (3) increased capabilities to support large-scale proteomic analysis integrated with inferential analyses. See appendix II for more details on the BAAs.
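The genomic-signature idea described earlier—using features such as SNPs to differentiate one isolate from another—can also be illustrated with a minimal sketch. This is not DHS's or NBFAC's pipeline; the sequences below are toy stand-ins, and a real analysis would align whole genomes, filter by quality, and attach match statistics to any result.

```python
# Minimal, hypothetical sketch of SNP-based isolate differentiation.
# Pre-aligned, equal-length toy sequences stand in for the alignment
# and quality-filtering steps a real pipeline would perform.

def snp_positions(seq_a: str, seq_b: str) -> list[int]:
    """Return positions where two pre-aligned sequences differ (SNPs)."""
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

reference_isolates = {  # hypothetical reference isolates
    "isolate_A": "ACGTACGTACGTACGTACGT",
    "isolate_B": "ACGTACGAACGTACGTACGT",
    "isolate_C": "ACGTTCGAACGTACGAACGT",
}

evidence = "ACGTACGAACGTACGTACGT"  # hypothetical evidentiary sequence

# Rank references by SNP distance; the closest isolate is an
# investigative lead, not proof of source.
for name, ref in sorted(reference_isolates.items(),
                        key=lambda kv: len(snp_positions(evidence, kv[1]))):
    diffs = snp_positions(evidence, ref)
    print(f"{name}: {len(diffs)} SNPs at positions {diffs}")
```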
DHS is also taking actions to establish an in-house reference collection of biological materials—the NBFAC BRC—which will provide materials for comparative forensic analyses, assay development and evaluation, and proficiency testing. According to DHS's responses to our questions, the BRC is a long-term storage site for materials acquired from other institutions (government, academic, commercial, and international sources) and from NBACC projects. Housed at NBACC, it includes select and nonselect agent bacteria and viruses, toxins, and their near neighbors. The BRC supports characterization of bacterial and viral agents by determining the phylogenetic relatedness of different isolates and enabling isolate-level characterizations, which, according to DHS, is important for isolates that have never been fully characterized or sequenced. DHS began obtaining a variety of biomaterials, such as select agents and toxins, through subcontracts with government agencies, and these materials were stored in external laboratories. In fiscal year 2010, the new NBACC laboratory opened, after which the collection was consolidated within the biocontainment facilities at NBACC and became available for use as reference material. DHS states that it is working with the FBI to expand the number of strains of interest in the BRC. In addition, the results of DHS projects that develop information on biological organisms are published in the open, peer-reviewed literature; sequence data are published in GenBank and are available to the larger community and to NBFAC.

DHS's actions also include incrementally developing a methods-based capability for identifying and characterizing genetically engineered, novel, and unknown (emerging or synthetic) agents. In this regard, NBFAC has developed a genomics capability that DHS asserts can be used to infer genetic engineering from DNA sequencing and protein sequences. Genetic engineering involves inserting a foreign sequence of genetic code into an existing sequence in a target organism with a view to altering some of its functions. DHS states that it can identify genetic modifications by screening against genes of interest (for example, virulence factors or antibiotic resistance genes), comparing genome alignments, and comparing regions with unusual sequence composition to those typically found in nature. In the past, restriction enzymes have been used to cut DNA and insert specific genes from a different organism to produce a desired effect (for example, producing human insulin using bacterial cells), which results in "scarring" at the restriction sites. Genome characterization and analysis, according to DHS's and the FBI's responses to our questions, would be able to detect such scarring. However, gene editing techniques are evolving and may be harder to detect. Clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9, now available to researchers, engineers microbes by inserting genes, but unlike previous methods the restriction sites may not be evident, and the enzymes used do not cause scarring. Figure 4 is a simple illustration of genetic engineering using restriction enzymes.

Experts from our meeting, as well as other experts we interviewed, indicated that identifying genetic engineering could be approached by determining an agent's virulence and then using capabilities such as mass spectrometry to identify whether elements exist that suggest modification.
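One screening step DHS describes—checking a genome for genes of interest and for restriction-site "scars" left by older engineering methods—can be sketched as follows. The marker fragment and genome below are invented for illustration (though GAATTC is the real EcoRI recognition site); this is not DHS's screening pipeline, and a real screen would compare whole genome alignments rather than exact string matches.

```python
import re

# Hypothetical sketch of screening a genome for (1) genes of interest,
# such as an antibiotic resistance marker, and (2) paired restriction
# sites that could indicate an insertion "scar."
# All sequences are toy stand-ins, not validated signatures.

GENES_OF_INTEREST = {  # hypothetical marker fragments
    "resistance_marker": "GGATCCAAATTTGGG",
}
ECORI_SITE = "GAATTC"  # EcoRI recognition sequence

def screen_genome(genome: str) -> None:
    for name, fragment in GENES_OF_INTEREST.items():
        for m in re.finditer(fragment, genome):
            print(f"{name} found at position {m.start()}")
    sites = [m.start() for m in re.finditer(ECORI_SITE, genome)]
    if len(sites) >= 2:
        print(f"Paired EcoRI sites at {sites}: possible insertion scar")

# Hypothetical genome containing an insert flanked by EcoRI sites.
genome = "ACGT" * 5 + "GAATTC" + "GGATCCAAATTTGGG" + "GAATTC" + "TGCA" * 5
screen_genome(genome)
```

As the surrounding text notes, this kind of screen would not catch CRISPR-Cas9 edits, which may leave no restriction-site scarring.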
Even so, the experts thought that a genetically engineered agent would have some parts that remain unchanged, which would help to determine its characteristics. For example, according to an expert at our meeting, the focus should be on (1) identifying introductions of additional virulence or genetic elements into an organism, which can be done fairly quickly, and then determining whether there are other elements that suggest somebody has modified the organism and (2) using mass spectrometry, microscopy, and other methods that can identify the means of production or culture and dissemination or delivery of the organism. In addition, according to a U.K. expert we interviewed, the core part of the genome in an engineered agent would not be changed, and the agent must still reproduce and metabolize. If it is an engineered virus, it would have some similarities to other viruses, such as in how it attaches itself to a cell to propagate its genome, so some signatures within its genome would be available for comparison. Further, this expert stated that even if the organism were a synthetic one and CRISPR-Cas9 had been used, he would still look to see whether any scarring was present (see figure 4).

Regarding synthetic agents, DHS asserts that they can be analyzed similarly to genetically engineered agents, with the addition of NBFAC's inferential analysis capability, which will provide clues to the functionality of a synthetic agent. DHS is developing a capability that will allow NBFAC to characterize unique, novel agents, "unknowns" (emerging or synthetic organisms), and "unknown unknowns" (de novo synthetic organisms). However, achieving this capability will also require a bioinformatics and statistical framework for inference and analysis of unknowns in microbial isolates, as well as expanded genome databases, according to DHS. DHS indicates that it is developing a "multi-layered inferential analysis capability" that would include establishing comparative methods for the analysis of any DNA or protein sequence to identify such things as peptides and restriction sites, along with a statistical model that allows confidence estimates to be placed on these analyses.

DHS faces numerous challenges as it attempts to enhance its bioforensics capabilities. Our review of agency documentation and related literature, interviews with agency officials, scientists, and subject matter experts at our meeting and elsewhere, and our prior work indicate that challenges must be overcome if DHS is to develop enhanced capabilities suitable for bioforensics—capabilities that can be relied on for identifying and characterizing not only known agents but also those that have been genetically engineered or are unique, novel, or unknown (emerging or synthetic). The challenges DHS faces include (1) the ability to interpret and communicate results with a defined statistical confidence, (2) obtaining access to quality references and databases for bioforensics analysis, and (3) the effect of the evolving threat landscape on future casework needs. Further, the results must also be able to stand up to court scrutiny.

DHS plans to develop advanced metagenomics and proteomics capabilities. However, according to both DHS and FBI officials, it is not clear to what extent or when DHS will be able to address key challenges related to enhancing these capabilities, including interpreting the results of metagenomics and proteomics analyses with a defined statistical confidence.
Further, communicating the uncertainty associated with the results will be particularly important when using these capabilities on complex sample types. Without a defined level of statistical confidence, the probative value of inferences made from the results of such analyses may not be known. In general, as the level of statistical confidence in these inferences increases—signifying a higher degree of scientific certainty—the probative value of the inference also increases. In figure 5, we have extracted and expanded on one dimension of "the forensic continuum" that has been used to represent the evaluation and analysis of a bioforensics sample and its probative value. As indicated in the figure, probative value depends on confidence in the analysis and on the interpretation and meaning of the evidence. Investigative leads or bioforensics data may rely on the use of bioinformatics and data and on inferences made using those data. In this regard, according to a DHS official, an issue DHS continues to struggle with is how to interpret metagenomics analysis—whether it is possible to define with certainty whether a piece of the genome of an agent is present—versus defining the error rates for each sequencing base call, which DHS can do.

DHS solicitations for R&D reflect some of these challenges, including the following extract from a related 2012 BAA solicitation regarding interpreting the results of metagenomics analyses: "Currently, it is difficult to assign confidence to the results of metagenomic analyses. For example, in metagenomic sequencing, what do a small number of reads that match a particular organism say about the probability that the organism is actually present in the sample? New methods are needed to assess the likelihood that an organism is present in a metagenomic sample and to provide confidence intervals on abundance estimates. Bioforensics R&D is looking to invest in the development and application of mathematical models for (1) estimating the likelihood of a genome being present in a metagenomic sample, and (2) the most likely composition of a metagenomic sample including a list of genomes and their relative abundance. The system should go beyond metagenomic classification to provide a statistically supported estimate of sample composition that could be used in a biothreat agent detection context."

Based on our review, we found that the analysis of metagenomics data sets will rely on advanced bioinformatics analyses that involve a large statistical component. However, forensic casework may involve mixtures, and separating these into individual components may be difficult—a problem that may also apply to metagenomics. The challenges here involve developing and applying appropriate bioinformatics and data that provide the ability not only to describe the relative abundance of sequence data but also to make inferences using those data that provide either an investigative lead or support for attribution. Efforts to achieve this are complex and will be conducted over multiple years. For example, according to NBACC's 2015 annual plan, "a sequence-based, bioinformatics-driven genomics approach is a complex endeavor that requires incremental implementation of critical technologies over multiple years."

Regarding proteomics, challenges remain in interpreting data. For example, according to experts at our meeting, a quantitative measure for proteomics needs to be available so that an informed decision can be made.
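The statistical question posed in the BAA excerpt above—what a handful of matching reads implies about whether an organism is really present—can be illustrated with a toy two-hypothesis Bayesian model. All of the rates and the prior below are invented for illustration; a validated bioforensics framework would have to estimate them from characterized reference data.

```python
from math import comb

# Toy Bayesian sketch: given k of n reads matching an organism, compare
# the hypothesis "organism present at low abundance" against "matches
# are database noise," and compute a posterior probability of presence.
# All parameters are hypothetical.

def binom_pmf(k: int, n: int, p: float) -> float:
    """Binomial probability of exactly k matches in n reads."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def posterior_present(k: int, n: int, rate_present: float,
                      rate_false: float, prior: float) -> float:
    """P(organism present | k of n reads match), two-hypothesis model."""
    like_present = binom_pmf(k, n, rate_present)
    like_absent = binom_pmf(k, n, rate_false)  # spurious matches only
    num = like_present * prior
    return num / (num + like_absent * (1 - prior))

# Hypothetical: 5 matching reads out of 100,000; expected match rate
# 1e-4 if present at low abundance, 1e-6 from database noise if absent.
p = posterior_present(k=5, n=100_000, rate_present=1e-4,
                      rate_false=1e-6, prior=0.5)
print(f"Posterior probability organism is present: {p:.6f}")
```

Even in this toy model, the answer depends entirely on the assumed error and abundance rates, which is precisely why the BAA calls for methods that attach defensible confidence estimates to such results.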
Developing such quantitative measures is complicated, however, by the lack of a framework for expressing confidence in a result. Further, with respect to data analysis and interpretation, rapidly expanding protein databases have the potential to produce false matches, and the lack of standardized approaches to proteomic data analysis is problematic.

DHS must also address several challenges related to its reference materials that could affect NBFAC's comparative analysis of evidentiary samples. According to DHS's responses to our questions, these include access to reference strains of interest and international agents, as well as ensuring the quality of the data in the BRC. During our review, we found that, in contrast to human DNA—a single species—the challenges for bioforensics involve a multitude of species. Further, the quality of the data entered into a particular database, including the metadata, and whether the database is kept up to date could affect analysis if NBFAC uses that database. In addition, ensuring that agents of interest are available for comparative analyses is necessary. Regarding the BRC specifically, not all strains are readily available, obtaining agents internationally raises issues, and not all researchers are willing to share their strains, according to DHS's responses. As a result, DHS is working with the FBI to develop an acquisition and curation plan to expand the number of strains of interest in the BRC. A DHS official stated that replacing agent-specific assays with DNA sequencing methods will require DHS to have a comprehensive, sophisticated database, which it currently does not have. Therefore, ensuring the usefulness and quality of its reference collection and its ability to obtain strains of interest will continue to be a challenge for DHS.

Experts at our meeting and others we interviewed identified two key challenges associated with enhancing bioforensics capabilities: (1) accessing and maintaining quality data on global microbial species in databases and (2) implementing statistical frameworks and acceptable communications of statistical analyses in court.

Reference databases and quality data: Experts and officials in both the United States and the United Kingdom whom we interviewed had differing opinions about the challenges associated with obtaining access to global microbial species and maintaining quality data for comparative analyses of samples. For example, some indicated the need to establish and maintain a central database, whereas others considered it necessary only to have the ability to access a relevant database when an incident occurs. Experts at our meeting also expressed reservations about whether a centralized system is the best solution. They stated that a hybrid system, in which each organization would own its own dataset but would allow it to be searched by others, might be a possible solution; alternatively, organizations could be required to send their data to a central database in addition to storing them locally.

Questions have also been raised about how reference data can be used effectively for bioforensics. For example, regarding the use of population genetics, it has been observed in the literature that a more useful database for each pathogen would consist of a detailed record of human and enzootic outbreaks noted through international outbreak surveillance systems, together with "representative" genetic sequences from each outbreak.
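The hybrid design the experts described—each organization retains ownership of its own collection but exposes it to a common search—can be sketched as follows. The class, owners, and records are hypothetical; a real federation would add authentication, provenance metadata, and sequence-similarity search rather than exact motif matching.

```python
# Hypothetical sketch of a federated ("hybrid") reference database:
# each organization keeps its records locally but answers searches.

class OrgCollection:
    def __init__(self, owner: str, records: dict[str, str]):
        self.owner = owner
        self._records = records  # accession -> sequence (stays local)

    def search(self, query: str) -> list[str]:
        """Return accessions whose sequence contains the query motif."""
        return [acc for acc, seq in self._records.items() if query in seq]

federation = [  # hypothetical participating organizations
    OrgCollection("lab_A", {"A-001": "ACGTTGCA", "A-002": "GGGCCCAA"}),
    OrgCollection("lab_B", {"B-107": "TTACGTTG"}),
]

def federated_search(query: str) -> None:
    for coll in federation:  # each owner answers from its own data
        for acc in coll.search(query):
            print(f"{coll.owner}: hit in {acc}")

federated_search("ACGTTG")
```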
One expert further suggested that, if necessary, investigators could individually forage for and collect organisms of interest in relevant areas or countries. According to these experts, much is known about Bacillus anthracis, but other organisms, like Burkholderia, are much more challenging; solving this problem would involve first going back to close the gaps in the reference databases and the population genetics. Another challenge involves managing the quality of the data entered into a database to ensure that it meets quality standards. Regarding the quality of such references for bioforensics, standards are needed for data repositories and reference collections of pathogens and other microorganisms, according to the experts at our meeting. Also, because of the uncertainty about the reference data, the meeting's experts stated that raw data should be maintained for further analyses. In addition, according to these experts, questions about the meaning of the data for these applications and the confidence value of the data need to be resolved before focusing on them for bioforensics purposes.

According to a U.K. official we interviewed, the level of uncertainty in matching microbes cannot be quantified, and attribution depends on a reference set, which is incomplete for microbes. It could be concluded that the microbe in question has the same DNA as a microbe in the reference database, but not, with certainty, that it would not also match another microbe that is absent from the database. Because of the limits of using one approach, it is important to also use traditional forensics to build an evidence base. In court, traditional forensics, in addition to expert testimony on bioforensics, would therefore be used for attribution.

Statistical frameworks and communicating results: As we reported in 2014, a statistical framework allows for statistically meaningful comparative analyses; it is a set of concepts and organizing principles that support the compilation and presentation of a set of statistics. Experts at our meeting expressed the view that a gap that permeates science, capabilities, and bioinformatics is the lack of a formulation or framework for expressing confidence in genomics results, with similar challenges for nongenetic results. This is especially true for mixed metagenomic samples. Further, how to combine and communicate the uncertainties and error rates associated with the analytical and collection processes needs greater clarity, according to the experts at our meeting. They stated that building a more robust statistical foundation requires doing enough experiments to assess the various contributions of these sources of variability. NBFAC is moving into the realm of metagenomics, which carries statistical unknowns, and metagenomic samples may contain mixtures, as we stated previously. In this regard, these experts stated that the problem of mixtures is an opportunity for statistical methods to improve the results for different kinds of evidence: Is the evidence confirmatory? Is it consistent with what is in the database? And what are some possible alternatives that could have given rise to the evidence? In addition, these experts stated that challenges in a bioforensics context include the need for a quantitative measure for genetics, proteomics, or other methods so that an informed decision can be made.
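A likelihood ratio (LR) is one common form such a quantitative measure takes in forensic statistics, and a minimal sketch is shown below. The profile frequency is invented for illustration; as the text notes, microbial reference sets are incomplete, so in practice that frequency would itself carry substantial uncertainty.

```python
# Hypothetical sketch of a likelihood ratio:
#   LR = P(evidence | same source) / P(evidence | coincidental match).
# The population frequency below is invented; for microbes, incomplete
# reference sets make any such frequency itself uncertain.

def likelihood_ratio(p_match_same_source: float,
                     profile_freq_in_population: float) -> float:
    return p_match_same_source / profile_freq_in_population

# Assume the evidence profile matches the suspected source and that the
# matching signature occurs in 1 in 500 isolates of a reference population.
lr = likelihood_ratio(p_match_same_source=1.0,
                      profile_freq_in_population=1 / 500)
print(f"LR = {lr:.0f}: the evidence is {lr:.0f} times more probable if "
      "the samples share a source than if the match is coincidental")
```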
Communicating results using statistical probabilities may not always be acceptable to courts, despite the need for statistical frameworks to assist in interpreting bioforensics analyses. Even if accepted, such statistical information may not be understood. This issue is important because statistics could play a large part in some types of analyses. For example, the use of statistics in communicating the results of human DNA analysis is generally accepted by courts in the United States. For interpretation of bioforensic results, according to the experts at our meeting, the question should be: What is the confidence you have achieved with the data or information that you have? How that confidence is communicated is important. It is forensic evidence, a piece of the puzzle. It adds value, but the confidence for it may be low, whereas the confidence for some other evidence may be high. The level of uncertainty for that result should also be indicated. Thus, bioforensic analyses of microbial DNA, and their associated statistical elements, may have to overcome many obstacles before they reach a similar level of acceptance in a U.S. federal court. Other experts have acknowledged such challenges. For example, regarding the U.S. legal system, issues may arise when new methods that have not undergone the depth of scrutiny applied to traditional forensic techniques are used in bioforensics for the first time. Also, reference databases used for comparisons take time to develop. Our interviews with U.K. scientists and government officials provided some insights into issues associated with human DNA analysis results, which are generally accepted by courts and which could have implications for bioforensics in both the United Kingdom and the United States. For example, regarding the communication of probability data, according to a U.K. official we interviewed, while academics say there is some set level of probability to achieve "beyond a reasonable doubt," the court is concerned with the baseline probability. However, this official stated that human DNA is the only science in which the baseline probability data is considered incontrovertible. Further, he said that in almost every other science, the legal system would want assurances from expert witnesses regarding the analysis results—not the numerical scientific results themselves. In light of the discussion above, it is not clear how long it will take for the results of metagenomics and proteomics analyses to be acceptable to courts. Nevertheless, what is clear is that the ability to quantify statistical uncertainty will require the use of comprehensive databases that contain characteristics of signatures and information on the variations in the population of the agent in question. A long-term challenge facing NBFAC, according to DHS's responses to our questions, is the increasingly complex biological threat landscape: New infectious disease agents emerge every year, and advances in genetic engineering and "do it yourself" biology methods make the nefarious use of enhanced or engineered biological agents a possibility. DHS further responded that, as a result, NBFAC must regularly establish new methods and assays to support bioforensics casework that may involve future threats. Further, we found it is still challenging to distinguish between a natural and a deliberately released organism.
However, according to the experts at our meeting, when using the terms "natural," "accidental," and "deliberate," the issue may have more to do with the means by which an agent is used than with the characteristics of the agent itself. Determining intent is likely to rely on information beyond the science alone. In addition, while epidemiologic tools determine whether something is unusual, what is also needed is a defined and validated tool that will determine whether a microbe is unusual, made by humans, cultured, or engineered. Although DHS is developing capabilities to detect manipulated agents, it faces several challenges related to the perceived potential for the creation of agents that could cause harm accidentally or intentionally. In identifying and characterizing novel synthetic agents, these challenges go beyond detecting changes in an agent's genome (such as antibiotic resistance). DHS, in its responses to our questions, stated that identifying more complex traits is more difficult because of limits in the current scientific understanding of how these processes work at the molecular level. DHS also responded that both detecting genetic engineering and inferring its intended effects require a deeper understanding of the physiology of the biological agent as well as its interaction with a human host. Concerns have also been raised about the potential for gain of function research to result in manipulation of microbial agents with the potential for causing harm. Such manipulations could involve, for example, agents or toxins in which harmful consequences have been enhanced, such as making them antibiotic resistant, more virulent, or more transmissible to humans. The use of CRISPR-Cas9 also raises other issues because its changes may be more difficult to detect than those made by previous gene editing approaches. However, not all agree that the risk of possible misuses of biology is significant. Regarding the use of genetically engineered agents to cause harm and the likelihood of this becoming a problem, we found differences in the views of the experts we spoke to both here and in the United Kingdom. Some said that it is difficult to create new pathogens, so the use of naturally occurring microbes is of the greatest concern. While acknowledging there are many technically possible misuses of biology, they concluded that it is far more likely that minor modifications would be made to existing organisms than that entirely new ones would be created. According to an expert we contacted, developing a new microbe with novel pathogenic characteristics or antibiotic resistance is significantly more difficult than introducing these characteristics by gene manipulation. Thus, one of the challenges DHS faces is to consider the risks in relation to not only the bioforensics capabilities it needs but also its strategy for addressing current and potential threats. DHS has identified some bioforensics capability gaps since 2010 using an informal, undocumented process but has not systematically identified the gaps or performed a bioforensics capability gap analysis. In the absence of a bioforensics gap analysis demonstrating the existence of gaps, it is difficult to determine whether DHS has identified all its capability needs and gaps. Identifying gaps and prioritizing bioforensics capability needs and gaps can help guide the proper allocation of resources to the highest priority needs.
Therefore, without a capability gap analysis and documentation of the results of its process for identifying gaps, the rationale for DHS's resource allocations and its plans for future enhancements to its existing capabilities are not clear. We recommend that the Secretary of Homeland Security—in consultation with the Federal Bureau of Investigation—conduct a formal bioforensics capability gap analysis to identify scientific and technical gaps and needs in bioforensics capabilities to help guide current and future bioforensics investments, and update this analysis periodically. We provided a draft of this report for review and comment to DHS and the FBI. DHS provided written comments, which are reproduced in appendix IV. DHS concurred with our recommendation. The FBI did not provide comments. Neither DHS nor the FBI provided technical comments. In its response, DHS described actions it plans to take to address the recommendation. Specifically, according to DHS, S&T's Homeland Security Advanced Research and Projects Agency's Chemical and Biological Defense (CBD) Division has initiated a formal, well-documented capability analysis of its Bioforensics R&D program. Further, DHS stated that CBD will leverage this analysis to conduct a parallel capability analysis of the Chemical Forensics and Attribution program that addresses similar analytical and attribution needs for chemical threat agents. DHS stated that the CBD Division staff has prepared newly updated Operational Requirements Documents and Strategic Plans (Fiscal Years 2017-2021) for both programs, although we have not reviewed these documents. According to DHS, the CBD Division initially identified and compiled a number of bioforensics capability needs from a review of external programs and meetings with end-users, such as the FBI, and it is identifying and grouping additional needs under three areas (science; technology and methods; and bioinformatics and data) through reviews of documents such as the National Research Council's Science Needs for Microbial Forensics (2014) and GAO's report Anthrax: Agency Approaches to Validation and Statistical Analyses Could Be Improved (GAO-15-80), among others. According to DHS, the CBD Division is conducting the formal capabilities analysis using methods and best practices identified in documents that include DHS Instruction Manual 107-01-001-01, DHS Manual for the Operation of Joint Requirements Integration and Management System, April 21, 2016; DHS S&T's "Requirements Development Guide," April 2008; and GAO's reports Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research (GAO-11-176) and Chemical, Biological, Radiological, and Nuclear Risk Assessments: DHS Should Establish More Specific Guidance for Their Use (GAO-12-272). Finally, according to DHS, the CBD Division is consolidating and prioritizing these needs to ensure that they are in alignment and harmonized with current research goals and strategic plans within DHS, S&T, the Homeland Security Advanced Research and Projects Agency, and the CBD Division. DHS plans to complete these efforts by June 30, 2017, and states that the CBD Division will ensure that the formal analysis is updated on an annual basis and is used to guide current and future bioforensics investments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the Secretary of Homeland Security and the Director of the FBI, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's website at http://www.gao.gov. If you and your staff have any questions about this report, please contact Timothy M. Persons, Ph.D., at (202) 512-6412 or personst@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. For this report, we evaluated (1) the extent to which DHS and the FBI have identified gaps in their bioforensics capabilities since 2010, (2) bioforensics needs experts have identified, and (3) any actions DHS and the FBI have taken to enhance their bioforensics capabilities, including those for characterizing a novel synthetic biological weapon, and any challenges they have experienced in enhancing bioforensics capabilities. To determine the extent to which DHS and the FBI have identified gaps in their bioforensics capabilities, we reviewed agency documents and interviewed relevant agency officials about their efforts to identify such gaps since 2010, which is when the Department of Justice closed the FBI's investigation into the 2001 anthrax case. We examined agency planning documents, such as DHS's Strategic Plan 2015-2019 and NBFAC's Bioforensics Roadmap for research, among others. We reviewed DHS policy and guidance, such as DHS's Joint Requirements Integration and Management System, which formed the basis for the criteria we used to compare and assess the extent to which DHS had identified capability gaps or conducted a capability gap analysis of its bioforensics capabilities. We also interviewed agency officials, including those with DOD, to determine whether any gaps had been identified that related to bioforensics and their interactions with DHS in this regard. We developed a list of bioforensics needs that experts had identified. To do this, we identified capabilities that might be needed for bioforensics purposes from a 2014 NRC publication entitled Science Needs for Microbial Forensics: Developing Initial International Research Priorities and the 2009 National Research and Development Strategy for Microbial Forensics from the National Science and Technology Council (NSTC). We excluded capability needs in the literature that were not related to science and technology development, as these would have been beyond our scope. We grouped the remaining capability needs into three broad areas: (1) science, (2) technologies and methods, and (3) bioinformatics and data. We then convened, with the assistance of the National Academy of Sciences (NAS), a 2-day meeting of 16 experts to discuss and update the capability needs we identified, including identifying issues related to these needs. To identify the experts appropriate for the meeting, we worked iteratively with NAS staff to identify and review biographical information and relevant qualifications of experts, as well as factors such as representation from academia and industry and expertise in a range of areas. The Board on Life Sciences of NAS solicited nominations for the expert panel from its extensive contacts in the microbial forensics area.
From this initial list, NAS selected experts based on their knowledge and expertise in forensics, microbiology, molecular genetics, non-genetic methods, genetic engineering, bioinformatics, statistics, and legal issues related to bioforensics. Once we came to agreement with NAS on the final list of 16 experts for the meeting, these experts were evaluated for any conflicts of interest. A conflict of interest was considered to be any current financial or other interest that might conflict with the service of an individual because it (1) could impair objectivity and (2) could create an unfair competitive advantage for any person or organization. We discussed internally all potential conflicts. The experts were determined to be free of conflicts of interest, and the group as a whole was judged to have no inappropriate biases. See appendix III for a list of the experts. The meeting was recorded and transcribed to ensure that we accurately captured the experts' statements, and we reviewed and analyzed the transcripts as a source of evidence. We developed the session topics based on our researchable objectives and issues that were identified in our audit work. The session topics were gaps in the science underpinning bioforensics capabilities; gaps in capabilities (technologies) and methods for attribution; and gaps in bioinformatics, data, and statistical interpretation of bioforensics. We subsequently obtained the experts' comments on the list of capability needs identified during the April 2016 meeting to update and amend it based on their input. To determine the actions DHS and the FBI had taken to enhance their bioforensics capabilities since 2010 and any challenges they encountered, we reviewed agency documents, including planning documents and research and development (R&D) efforts. We also examined DHS's actions to enhance NBFAC's capabilities for the long term as well as for the FBI's casework. We reviewed DHS's Broad Area Announcements (BAA) and Open Broad Area Announcements (OBAA) from 2008 to 2016. These are the mechanisms by which DHS solicits research to develop its bioforensics capabilities. We obtained details on contracted external R&D efforts. Deliverables included statistical models, standard operating procedures (SOPs), and genetic sequences from external researchers. To determine any challenges to enhancing bioforensics capabilities, we reviewed agency documentation, including planning and contract documentation, related literature, and our prior work on bioforensics. We interviewed agency officials and scientists, including those at DHS, DOD, and the FBI, and obtained the opinions of experts in the United Kingdom, which collaborates with DHS and the FBI on bioforensics-related issues, as well as those in the United States regarding bioforensics-related challenges. We also discussed potential challenges with experts present at our expert meeting. We conducted site visits to national laboratories and academic institutions conducting research on bioforensics-related issues, including issues related to synthetic biology. These visits included discussions with DHS contractors, scientists in academia, and officials from the U.K. Home Office and Public Health England at Porton Down regarding challenges related to bioforensics capabilities. We also interviewed some of the scientists involved in conducting research for DHS. We conducted this performance audit from July 2015 to January 2017 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
The bioforensics R&D topics and capability needs identified include the following:
Sequence-based approach to bioforensics analysis
Whole genome approach to microbial forensics
Bioforensics research R&D: Whole genome approach to microbial forensics and the identification of adaptive mutations that can have forensic utility
Establishing match criteria for discriminating "difference" or "sameness" in sample comparisons
Developing statistically rigorous sampling strategies to acquire spatially referenced genetic information on reservoirs of these pathogens
Developing bioinformatics-based analytical tools for supporting hypotheses testing regarding pathogen origin that go beyond current phylogeny-based inferential methods and can meet forensic (legal) admissibility
Develop novel techniques to culture threat agents from complex environmental samples
Improve dry collection and extraction strategies for forensic samples
Develop detection methods for rare variant detection in a bacterial sample using ultra-high throughput next generation sequencing technology
Understand dynamics of mobile elements in select agent bacteria
Develop forensic genotyping methods for select agent viruses
Develop novel applications of orthogonal methods to genetic characterization of biological threat agent signatures and their sample matrices
Develop biased primer set design to amplify biological threat agents from complex backgrounds
Production methods for ultraclean reagents
Sequence data error model for next-generation and single molecule sequencing platforms
Taxonomic classification of metagenomic sequences
Develop and apply mathematical models for statistical confidence measurements in metagenomic analysis
Develop a procedure to transport agents from BSL-3 to BSL-2 laboratories
Produce whole-genome sequencing to capture the global biodiversity of human, plant, and animal pathogens (bacterial, viral, and fungal) in support of microbial forensics analysis
Development and population of a comparative genomic database with pathogen sequence data at the National Center for Biotechnology Information
Products to identify select agents, including C. botulinum toxins, with high confidence
Next generation and novel technologies to characterize biological threat agents (the organism, the agent, or the sample matrix) for source attribution
Research on the bacterial populations of select agents with critical knowledge gaps, including C. botulinum and B. anthracis (North Africa, Middle East)
The names and affiliations of the experts who participated in the group meeting held April 20-21, 2016, in Washington, D.C., are as follows:
Christopher Bidwell, J.D., Senior Fellow for Nonproliferation Law and Policy, Federation of American Scientists.
Bruce Budowle, Ph.D., Professor and Executive Director of the Institute of Applied Genetics, Molecular and Medical Genetics, University of North Texas Health Science Center.
Rockne Harmon, J.D., Consultant; Instructor in the Master's in Forensic Science program, U.C. Davis.
Dag Harmsen, M.D., Ph.D., Professor and Head of Research, Center for Oral and Maxillofacial Surgery, Department of Periodontology, University of Munster.
Molly Isbell, Ph.D., Director of Quality Assurance and Statistical Sciences, Signature Science, LLC.
Dana Kadavy, Ph.D., Director of Biological Services, Signature Science, LLC.
Karen Kafadar, Ph.D., Commonwealth Professor and Chair of Statistics, University of Virginia.
Paul Keim, Ph.D., Regents' Professor in Biology and Cowden Endowed Chair in Microbiology, Microbial Genetics and Genomics Center, Northern Arizona University.
Jack Melling, Ph.D. (via phone), Consultant.
Stephen S. Morse, Ph.D., Professor of Epidemiology and Founding Director and Senior Resident Scientist, Center for Public Health Preparedness, Columbia University.
Karen Nelson, Ph.D., President, The J. Craig Venter Institute.
David Relman, M.D., Thomas C. and Joan M. Merigan Professor in Medicine, and Microbiology and Immunology, Co-Director of the Center for International Security and Cooperation, Stanford University, and Chief of Infectious Diseases, Veterans Affairs Palo Alto Health Care System.
Tom Slezak, Ph.D., Associate Program Leader for Informatics for the Global Security Program Efforts, Lawrence Livermore National Laboratory.
Stephen Turner, Ph.D., Assistant Professor of Public Health Sciences, University of Virginia School of Medicine.
Stephan Velsko, Ph.D., Senior Scientist and Associate Program Leader, Lawrence Livermore National Laboratory.
Karen Wahl, Ph.D., Chemist, Pacific Northwest National Laboratory.
The comments of most of these experts represented the views of the experts themselves and not the agency, university, or company with which they are affiliated. Timothy M. Persons, (202) 512-6412 or personst@gao.gov. In addition to the individuals named above, Sushil Sharma (Assistant Director), Pille Anvelt, James Ashley, Hazel Bailey, Amy Bowser, Caitlin Dardenne, Jack Melling, Jeff Mohr, Penny Pickett, Amber Sinclair, Maria Stattel, Elaine Vaurio, and Elizabeth Wood made key contributions to this report.
The ability to attribute the source of an intentionally released biological threat agent and quickly apprehend and prosecute the perpetrator is essential to our nation's safety. However, questions remain about whether DHS's and the FBI's capabilities have improved since the 2001 anthrax attack. GAO was asked to report on DHS's and the FBI's bioforensics capabilities. This report examines (1) the extent to which DHS and the FBI have identified gaps in their bioforensics capabilities since 2010, (2) the bioforensics needs experts have identified, and (3) any actions DHS and the FBI have taken to enhance their ability to attribute the source of a biological attack, and any challenges to enhancing bioforensics capabilities. GAO's review focused on the agencies' efforts since 2010, when the FBI's investigation of the 2001 anthrax attack was closed. GAO analyzed relevant agency documents and interviewed agency officials and scientists on issues related to bioforensics. GAO also convened a meeting of experts with NAS's assistance to discuss potential bioforensics needs. The Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) have identified some gaps in their bioforensics capabilities, but DHS has not performed a formal bioforensics capability gap analysis. It is therefore not clear whether DHS and the FBI have identified all of their capability gaps. A capability gap analysis can help identify deficiencies in capabilities and can help support the validation and prioritization of how to address the gaps. DHS and the FBI have identified capability gaps using an informal, undocumented process. For example, DHS held informal meetings to seek FBI input on capability gaps associated with recent casework. Gaps identified through this informal process include the inability to (1) characterize unique, novel, and engineered agents and "unknowns" (emerging or synthetic organisms) and (2) understand and communicate uncertainty associated with analyzing complex biological samples, among other things. In the absence of a well-documented bioforensics capability gap analysis, the rationale for DHS's resource allocations and its plans for future enhancements to existing capabilities are not clear, and DHS cannot ensure that resources are being targeted to the highest priority gaps. In addition to DHS and the FBI, other organizations, such as the National Research Council (NRC) of the National Academy of Sciences (NAS) and the National Science and Technology Council (NSTC) of the Office of Science and Technology Policy (OSTP), have identified potential bioforensics capability needs. These needs can generally be grouped into three areas: science; technology and methods; and bioinformatics and data. GAO also convened a meeting of experts, with the help of NAS, and these experts updated a list of potential bioforensics capability needs that NAS and OSTP had previously identified within each of these areas. Some of the needs these experts confirmed as still relevant were similar to those DHS and FBI officials have identified, while others were different. For example, like DHS and the FBI, the experts agreed that an ability to characterize genetically engineered agents was needed, but they also suggested a need to evaluate existing protocols, such as those for DNA sequencing, to determine whether they have been validated. GAO believes that this information may be helpful to DHS and the FBI as part of any future bioforensics capability gap analysis they undertake.
Since 2010, DHS has enhanced some of its bioforensics capabilities, with FBI input, by focusing on developing methods-based capabilities while maintaining agent-based capabilities. DHS has funded research and development projects addressing areas such as genome sequencing approaches, which underpin many methods-based bioforensics capabilities. DHS is also developing an in-house reference collection for use in investigations. In addition, DHS is developing the ability to characterize unique, novel agents as well as “unknowns,” such as synthetic organisms. DHS projects that some enhanced capabilities will be complete in about 2025. However, in pursuing enhancements, DHS faces several challenges, including establishing a statistical framework for interpreting bioforensics analyses and associated inferences and communicating them in a court setting, as well as obtaining suitable biological agents and DNA sequences to ensure quality references for use in investigations. GAO recommends that DHS—in consultation with the FBI—conduct a formal bioforensics capability gap analysis and update it periodically. DHS concurred with GAO's recommendation.
It is important to look at the President's proposal in the context of the fiscal situation in which we find ourselves. After nearly 30 years of unified budget deficits, we look ahead to projections for "surpluses as far as the eye can see." At the same time, we know that we face a demographic tsunami in the future that poses significant challenges for the Social Security system, Medicare, and our economy as a whole. In this context, we should recognize that the President uses a longer-term framework for resource allocation than has been customary in federal budgeting. Although all projections are uncertain—and they get more uncertain the farther out they go—we have long held that a long-term perspective is important in formulating fiscal policy for the nation. Each generation is in part the custodian for the economy it hands the next, and the nation's long-term economic future depends in large part on today's budget decisions. This perspective is particularly important because our model and that of the Congressional Budget Office (CBO) continue to show that absent a change in policy, the changing demographics to which I referred above will lead to renewed deficits. This longer-term problem provides the critical backdrop for making decisions about today's surpluses. Surpluses are the result of a good economy and difficult policy decisions. They also provide a unique opportunity to put our nation on a more sustainable path for the long term, both for fiscal policy and the Social Security program itself. Current decisions can help in several important respects: (1) current fiscal policy decisions can help expand the future capacity of our economy by increasing national saving and investment, (2) engaging in substantive reforms of retirement and health programs can reduce future claims, (3) by acting now, we have the opportunity of phasing in changes to Social Security and health programs over a sufficient period of time to enable our citizens to adjust, and (4) failure to achieve needed reforms in the Social Security and Medicare programs will drive future spending to unsustainable levels and eventually "squeeze out" most or all discretionary spending. If we let the achievement of a budget surplus lull us into complacency about the budget, then in the middle of the 21st century, we could face daunting demographic challenges without having built the economic capacity or the program and policy reforms needed to handle them. Before turning to the context for and analysis of the President's proposal, let me briefly describe it. The President proposes to use approximately two-thirds of the total projected unified budget surpluses over the next 15 years to reduce debt held by the public and to address Social Security's financing problem. His approach to this, however, is extremely complex and confusing. The President proposes to "transfer" an amount equal to a portion of the projected surplus to the Social Security and Medicare trust funds. This transfer is projected to extend the solvency of Social Security from 2032 to 2049. His proposal to permit the trust fund to invest in equities is expected to further extend trust fund solvency to 2055. He calls on the Congress to work with him on program changes to get to 2075. To understand and evaluate this proposal, it is important to understand the nature of the federal budget, how trust funds fit into that budget, and the challenges of "saving" within the federal budget. The federal budget is a vehicle for making choices about the allocation of scarce resources.
It is different from state budgets in ways important to this discussion. Most states use "fund budgeting," in which pension funds that are separate and distinct legal entities build up surpluses that are routinely invested in assets outside the government (e.g., readily marketable securities held in separate funds). In contrast, the federal government's unified budget shows all governmental transactions, and all funds are available for current activities, including current-year activities of all trust funds. We cannot park our surplus in a cookie jar. The only way to save in the federal budget is to run a surplus or purchase a financial asset. When there is a cash surplus, it is used to reduce debt held by the public. Therefore, to the extent that there is an actual cash surplus, debt held by the public falls. This presents a problem for any attempt to "advance fund" all or part of future Social Security benefits. Advance funding within the current program would mean increasing the flows to the Social Security Trust Fund (SSTF). Although it is officially "off budget," the fact remains that the SSTF is a governmental fund. In the federal budget, trust funds are not like private trust funds. They are simply budget accounts used to record receipts and expenditures earmarked for specific purposes. A private trust fund can set aside money for the future by increasing its assets. However, under current law, when the SSTF's receipts exceed costs, they are invested in Treasury securities and used to meet current cash needs of the government. These securities are an asset to the trust fund, but they are a claim on the Treasury. Any increase in assets to the SSTF is an equal increase in claims on the Treasury. One government fund is lending to another. The transactions net out on the government's books. Given this investment policy, any increase in the trust fund balances would only become an increase in saving if this increment were to add to the unified budget surplus (or decrease the unified budget deficit) and thereby reduce the debt held by the public. This is also the only way in which an increase in the SSTF balance could be a form of advance funding. How do these transactions affect the government's debt? Gross federal debt is the sum of debt held by the public and debt held by governmental accounts—largely trust funds. This means that increases in the trust fund surplus will increase gross debt unless debt held by the public declines by at least the same amount. Any reform of Social Security that increases the annual SSTF surplus would increase debt held by government accounts since, under current law, any excess of revenues over benefit payments is loaned to the Treasury for current needs. As a result, total government debt would go up unless these surpluses were used to reduce debt held by the public by an equivalent amount. For most people, the different types of "debt" in the federal budget may be confusing—especially since what is "good news" for a trust fund may be "bad news" for total debt and vice versa. This is so because total debt (or gross debt) is the sum of two very different types of debt—debt owed to the public and debt owed by one part of the government (the general fund) to another part of the government (trust funds). Therefore, if a trust fund surplus grows faster than debt held by the public falls, total debt grows—even if the trust fund surplus is created as an attempt to "save" or to "pre-fund" some of the future benefit payments.
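The sketch below works through this accounting identity with hypothetical dollar figures; the numbers are chosen only to make the mechanics concrete and are not actual budget projections.

```python
# A minimal sketch of the debt accounting identity discussed above.
# All figures are hypothetical ($ billions), for illustration only.

public_debt = 3_600.0      # debt held by the public
trust_fund_debt = 1_800.0  # debt held by government accounts
cash_surplus = 100.0       # unified cash surplus for the year
transfer = 150.0           # securities credited to the trust fund

public_debt -= cash_surplus   # the cash surplus retires publicly held debt
trust_fund_debt += transfer   # crediting securities raises trust fund debt

gross_debt = public_debt + trust_fund_debt
print(f"Debt held by the public:     {public_debt:,.0f} (down {cash_surplus:,.0f})")
print(f"Debt held by trust funds:    {trust_fund_debt:,.0f} (up {transfer:,.0f})")
print(f"Gross debt subject to limit: {gross_debt:,.0f}")
# Because the securities credited (150) exceed the cash used to retire
# public debt (100), gross debt rises by 50 even though public debt fell.
```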
These contradictory movements emphasize the need to distinguish among the different types of debt and what they mean. Both debt held by the public and debt held by trust funds are important—but for different reasons. Analytically, therefore, what is most important is not whether total debt increases but rather the reasons behind the increase—does it represent an attempt to "advance fund" through substantive reform or merely the promise of future resources? Debt held by the public and debt held by trust funds represent very different concepts. Debt held by the public approximates the federal government's competition with other sectors in the credit markets. This affects interest rates and private capital accumulation. Further, interest on debt held by the public is a current burden on taxpayers. Reducing this burden frees up capacity to meet future needs. In contrast, debt held by trust funds performs an accounting function and currently represents the cumulative annual surpluses of these funds (i.e., the excess of receipts over disbursements plus accrued interest). Importantly, debt held by the SSTF does not represent the actuarial present value of expected future benefits for either current or future participants. Nor does this debt have any of the economic effects of borrowing from the public. It is not a current transaction of the government with the public; it does not compete with the private sector for available funds in the credit market. It reduces the need to borrow from the public and so may hold down interest rates. Unlike debt held by the public, debt held by trust funds does not represent an immediate burden on current taxpayers. Rather, it is a claim on future resources. The surplus is held in Treasury securities that give the SSTF a claim on the Treasury equal to the value of those securities. When the securities have to be redeemed, the Treasury must come up with the cash. At that time, taxpayers will see some combination of a lower surplus, lower spending, higher taxes, and/or greater borrowing from the public. If borrowing from the public is increased to cover this cash need, there could be upward pressure on interest rates. In addition, because debt held by the trust fund is not equal to future benefit payments—it is not a measure of the unfunded liability of the current system—it cannot be seen as a measure of this future burden. Nevertheless, it provides an important signal of the existence of this burden. Whether the debt constitutes a new economic burden for the future or merely recognizes an existing one depends on whether these currently promised benefits would be paid even in the absence of the securities. This information is important to understanding the President's proposal because, in large part, he proposes a set of transactions that, in effect, trade debt held by the public for debt held by the SSTF. By running a cash surplus over the next 15 years, the government would reduce debt held by the public. To "save" this surplus, the President proposes to "transfer" it to the trust fund in the form of increased Treasury securities. Under his proposal, debt held by the public falls, but debt held by the trust funds increases. Because he shows the transfer as a subtraction from the surplus—a new budgetary concept—he shows no surplus. As a result, he attempts to save some of the projected surplus by hiding it. The mechanics of the proposed transfer of surpluses to the SSTF are complex and difficult to follow.
Few details have been made available, and there is conflicting information on exactly how it would work. Figures 1 and 2 are flowcharts representing our best understanding of the Social Security portion of this transfer. Since it is impossible to understand the changes proposed by the President without understanding the present system, figure 1 shows the flows under the current system. Under current law, annual cash flow surpluses (largely attributable to the excess of payroll taxes over benefit payments and program expenses) are invested in Treasury securities. This excess "cash" is commingled with other revenues and used to finance other governmental activities. In this way, SSTF surpluses have helped and continue to help finance the rest of the government. This year, the SSTF surplus is expected to exceed the general fund deficit, so there is also a surplus in the unified budget. Over the entire 15-year period, more than half of the projected unified surplus is composed of Social Security surpluses. Absent any change in policy, these unified surpluses will be used to reduce the debt held by the public. Under the President's proposal, this would continue. However, as shown in figure 2, at the point where total tax receipts are allocated to pay for government activities, a new financing step would be added to "transfer" a portion of the unified budget surpluses to the Social Security and Medicare trust funds. This would be done by crediting a new set of securities to these trust funds. However, the excess cash would still be used to reduce the debt held by the public. In essence, this exchanges debt held by the public for debt held by the trust funds. While there are many benefits to reducing publicly held debt, it is important to recognize that under the current law baseline—i.e., with no changes in tax or spending policy—this would happen without crediting additional securities to the trust funds. The administration has defended this approach as a way of both assuring a reduction in debt held by the public and giving Social Security first claim on what it calls the "debt-reduction dividend" to pay future benefits. However, issuing these additional securities to the SSTF is a discretionary act with major legal and economic consequences for the future. Some could view this as double counting—or double-crediting. Importantly, to the extent it appears that way to the public, it could undermine confidence in a system that is already difficult to explain. However, the debate over double counting focuses on the form of the proposal rather than its substance. Although form is important when it interferes with our ability to understand the substance—and I think this proposal falls into that trap—the important debate must be on the substance of the proposal. This proposal represents a fundamental shift in the way the Social Security program is financed. It moves the program away from payroll financing toward a formal commitment of future general fund resources. This is unprecedented. Later in my statement, I will discuss the implications of this proposal for overall fiscal policy and for the Social Security program. The President's proposals would have the effect of reducing debt held by the public from the current level of 44 percent of Gross Domestic Product (GDP) to 7 percent over the 15-year period. The President notes that this would be the lowest level since 1917. Nearly two-thirds of the projected unified budget surplus would be used to reduce debt held by the public.
Because the surplus is also to be used for other governmental activities, the amount of debt reduction achieved would be less than under the baseline (i.e., a situation in which none of the surplus was used), but the outcome would nonetheless confer significant benefits on the budget and the economy. Our previous work on the long-term effects of federal fiscal policy has shown the substantial benefits of debt reduction. One is lowering the burden of interest payments in the budget. Today, net interest represents the third-largest "program" in the budget, after Social Security and Defense. Interest payments, of course, are a function of both the amount of debt on which interest is charged and the interest rate. Thus, at any given interest rate, reducing publicly held debt reduces net interest payments within the budget. For example, CBO estimates that the difference between spending the surplus and saving the surplus is $123 billion in annual interest payments by 2009—or almost $500 billion cumulatively between now and then. Compared to spending the entire surplus, the President's proposal would also substantially reduce projected interest payments. Lower interest payments lead to larger surpluses; these in turn lead to lower debt, which leads to lower interest payments, and so on: the miracle of compound interest produces a "virtuous circle." The result would be to provide increased budgetary flexibility for future decisionmakers who will be faced with enormous and growing spending pressures from the aging population. For the economy, lowering debt levels increases national saving and frees up resources for private investment. This in turn leads to increased productivity and stronger economic growth over the long term. Over the last several years, we and CBO have both simulated the long-term economic results of various fiscal policy paths. These projections consistently show that reducing debt held by the public increases national income over the next 50 years, thereby making it easier for the nation to meet future needs and commitments. Our latest simulations done for this committee, as shown in figure 3, illustrate that any path that saves all or a significant share of the surplus in the near term would produce demonstrable gains in per capita GDP over the long run. This higher GDP in turn would increase the nation's economic capacity to handle all its commitments in the future. Under the President's proposal, debt held by trust funds goes up more rapidly than debt held by the public falls, largely due to these additional securities transferred to the trust funds. Gross debt, therefore, increases. It is gross debt—with minor exceptions—that is the measure subject to the debt limit. The current limit is $5.95 trillion. Under the President's plan, the limit would need to be raised sometime during 2001. Under either the CBO or the Office of Management and Budget baseline (i.e., save the entire surplus), the limit would not need to be raised during at least the next 10 years. Since other proposals to use the surplus would also bring forward the time when the debt limit would have to be raised, the impact of the President's proposal on debt is in part a "compared to what?" question.
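Before turning to the figures, the "virtuous circle" described above can be made concrete with a small simulation. The starting debt, surplus, and interest rate below are illustrative assumptions, not CBO or GAO projections.

```python
# A stylized sketch of the "virtuous circle": retiring debt lowers interest
# costs, which enlarges future surpluses and retires still more debt.
# The starting debt, surplus, and rate are assumptions, not projections.

RATE = 0.055             # assumed average interest rate on public debt
PRIMARY_SURPLUS = 250.0  # assumed annual surplus before interest ($ billions)
YEARS = 10

save_debt = spend_debt = 3_600.0  # assumed starting debt held by the public
cumulative_savings = 0.0
for _ in range(YEARS):
    save_interest = save_debt * RATE
    spend_interest = spend_debt * RATE
    cumulative_savings += spend_interest - save_interest
    save_debt += save_interest - PRIMARY_SURPLUS  # surplus retires debt
    spend_debt += spend_interest                  # surplus spent instead

annual_gap = (spend_debt - save_debt) * RATE
print(f"Annual interest gap after {YEARS} years: ${annual_gap:,.0f} billion")
print(f"Cumulative interest avoided: ${cumulative_savings:,.0f} billion")
```

Under these assumed parameters, the gap between the two paths widens every year precisely because the interest saved in each year compounds into the next.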
In figure 4, we show the debt subject to limit under the baseline, the President's proposal, and a hypothetical path we have labeled "on-budget balance." Figures 5 and 6 below compare the composition of debt under the same three paths: the baseline (save the entire surplus), the President's proposal (including both the Social Security proposal and the other spending), and "on-budget balance." Figure 5 shows debt held by the public under all three scenarios, and figure 6 shows debt held by governmental accounts. As figure 5 shows, debt held by the public falls under all three scenarios. Since the baseline assumes the entire surplus is devoted to reducing debt held by the public, it shows the greatest drop. Under the "on-budget balance" path, there are no tax cuts or spending increases until there is an on-budget balance in 2001, while under the President's proposal spending increases and tax cuts are front-loaded. As a result, the President's proposal is projected to reduce debt held by the public less than the "on-budget balance" path during these 10 years. Figure 6 shows the impact of the President's proposal to transfer securities to the SSTF. The projections for debt held by government accounts are the same for the baseline and the "on-budget balance" paths since neither changes current law. Under the President's proposal, however, debt held by the SSTF increases as securities are transferred to it. This leads to the increase shown in figure 6. While reducing debt held by the public appears to be a centerpiece of the proposal—and has significant benefits—as I noted above, the transfer of unified surpluses to Social Security is a separate issue. The transfer is not technically necessary: whenever revenue exceeds outlays and the cash needs of the Treasury—whenever there is an actual surplus—debt held by the public falls. The President's proposal appears to be premised on the belief that the only way to sustain surpluses is to tie them to Social Security. He has merged two separate questions: (1) how much of the surplus should be devoted to reducing debt held by the public and (2) how the nation should finance the Social Security program in the future. Let me turn now to the question of Social Security financing. The President proposes two changes in the financing of Social Security: a pledge of general funds in the future and a modest amount of investment in equities. Both of these represent major shifts in the approach to financing the program. By, in effect, trading debt held by the public for debt held by the trust funds, the President is committing future general revenues to the Social Security program. This is true because the newly transferred securities would be in addition to any buildup of payroll tax surpluses. Securities held by the SSTF have always represented annual cash flows in excess of benefits and expenses, plus interest. Under the President's proposal, this would no longer be true. The value of the securities held by the SSTF would be greater than the amount by which annual revenues plus interest exceed annual benefits and expenditures. This means that for the first time there would be an explicit general fund subsidy. This is a major change in the underlying theoretical design of this program. Whether you believe it is a major change in reality depends on what you assume about the likely future use of general revenues under the current circumstances.
For example, current projections are that in 2032 the fund will lack sufficient resources to pay the full promised benefits. If you believe that this shortfall would—when the time came—be made up with general fund moneys, then the shift embedded in the President’s proposal merely makes that explicit. If, however, you believe that there would be changes in the benefit or tax structure of the fund instead, then the President’s proposal represents a very big change. In either case, the question of bringing significant general revenues into the financing of Social Security is a question that deserves full and open debate. The debate should not be overshadowed by the accounting complexity and budgetary confusion of the proposal. One disconcerting aspect of the President’s proposal is that it appears that the transfers to the trust fund would be made regardless of whether the expected budget surpluses are actually realized. The amounts to be transferred to Social Security apparently would be written into law as either a fixed dollar amount or as a percentage of taxable payroll rather than as a percentage of the actual unified surplus in any given year. These transfers would have a claim on the general fund even if the actual surplus fell below the amount specified for transfer to Social Security—and that does present a risk. However, it is important to emphasize that any proposal to allocate surpluses is vulnerable to the risk that those projected surpluses may not materialize. Proposals making permanent changes to use the surplus over a long period of time are especially vulnerable to this risk. The history of budget forecasts should remind us not to be complacent about the certainty of these large projected surpluses. In its most recent outlook book, CBO compared the actual deficits or surpluses for 1988-1998 with the first projection it produced 5 years before the start of each fiscal year. Excluding the estimated impact of legislation, CBO says its errors averaged about 13 percent of actual outlays. Such a shift in 2004 would mean a surplus $250 billion higher or lower; in 2009, the swing would be about $300 billion. Accordingly, we should consider carefully any permanent commitments that are dependent on the realization of a long- term forecast. Under current law, the SSTF is required to invest only in securities that are issued or backed by the Treasury. The President proposes changing current law to allow the SSTF to invest a portion of its assets in equities. His proposal calls for the fund to gradually invest 15 percent of its total assets in the equity market. According to the administration’s estimates, the SSTF’s equity holdings would represent only a small portion—about 4 percent—of the total equity market. To insulate investment decisions from political considerations, the administration proposes investing passively in a broad-based stock index and creating an independent board to oversee the portfolio. Last year, we reported on the implications of allowing the SSTF to invest in equities. In that report, we concluded that stock investing offers the prospect of higher returns in exchange for greater risk. We found that, by itself, stock investing was unlikely to solve Social Security’s long-term financing imbalance but that it could reduce the size of other reforms needed to restore the program’s solvency. We also concluded that investing in a broad-based index would help reduce, but not eliminate, the possibility of political influence over stock selections. 
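The tradeoff of higher expected returns for greater risk can be illustrated with a simple compounding sketch. The trust fund balance, horizon, and rates of return below are assumptions chosen for illustration; they are not SSA, CBO, or GAO estimates.

```python
# Illustrative only: how a modest equity allocation changes expected growth.
# The balance, horizon, and rates are assumptions, not official estimates.

BALANCE = 800.0        # hypothetical trust fund assets ($ billions)
YEARS = 20
R_TREASURY = 0.055     # assumed return on Treasury securities
R_EQUITY = 0.095       # assumed average equity return (with higher risk)
EQUITY_SHARE = 0.15    # the proposal's eventual equity allocation

blended_rate = EQUITY_SHARE * R_EQUITY + (1 - EQUITY_SHARE) * R_TREASURY
all_treasuries = BALANCE * (1 + R_TREASURY) ** YEARS
mixed_portfolio = BALANCE * (1 + blended_rate) ** YEARS
print(f"All Treasuries after {YEARS} years: ${all_treasuries:,.0f} billion")
print(f"15% equities after {YEARS} years:   ${mixed_portfolio:,.0f} billion")
# The higher expected balance comes with market risk: in a bad decade the
# equity share could underperform Treasuries, widening the funding gap.
```

Under these assumed returns, the 15-percent equity allocation raises the expected balance by roughly a tenth over 20 years, which is consistent with our earlier conclusion that stock investing alone would not solve the financing imbalance but could reduce the size of other needed reforms.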
However, the issue of how to handle stock voting rights could prove more difficult to resolve. If the government voted its shares, it would raise concerns about potential federal involvement in corporate affairs. If the government chose not to vote, it would affect corporate decision-making by enhancing the voting power of other shareholders or investment managers. The model applicable to passive private sector investment managers under the Employee Retirement Income Security Act may be relevant to the resolution of this issue. Stock investing would have approximately the same impact on national saving as using the same amount of money to reduce debt held by the public. Both approaches would add about the same amount of funds to private capital markets, meaning that national saving would essentially be unchanged. From a budget accounting standpoint, however, they are not the same. Under current scoring rules, the purchase of equities would be counted as an outlay, even though it is a financial transaction, because it is a transfer of funds from a governmental entity to a nongovernmental entity. The proposal apparently would change that. The administration proposes to show the entire transfer to the SSTF as a reduction in the surplus, and the equity purchases would be part of that. The purchase of equities has another financial impact: since part of the surplus would be used to purchase equities, debt held by the public would be reduced less in the near term than if that amount went to reduce publicly held debt. However, in the future, claims on the Treasury would be lower because the program would rely in part on stock sales to pay benefits. Although the dilemma we are facing of whether and how to save for the future is a very difficult one, it is not unique. A look at other democracies shows that surpluses are difficult to sustain. However, several nations have succeeded, and in those nations political leaders were able to articulate a compelling rationale to justify the need to set aside current resources for future needs. For example, those countries that have come to the conclusion that the debt burden matters make it an explicit part of their fiscal decision-making process. Australia, New Zealand, and the United Kingdom all attempt to define prudent debt levels as a national goal to strive for. These debt goals can prove important in times of surplus. New Zealand, for example, used its debt goals as justification for maintaining spending restraint and attempting to run sustained surpluses. It promised that once it met its initial debt target, it would give a tax cut. Importantly, when it hit that specified debt target, it delivered on its promise of tax cuts. Other countries have saved for the future by separating their pension or Social Security-related assets from the rest of the government's budget. For example, the Canada Pension Plan is completely separate from both federal and provincial budgets. When the fund earns surplus cash, it is invested in provincial debt securities and, starting this year, in the stock market. Sweden also maintains a pension fund outside the government's budget and invests assets in stocks and bonds. Norway may be the most dramatic example of setting aside current surpluses to address long-term fiscal and economic concerns. Norway faces the two-edged problem of a rapidly aging population and declining oil revenues—a significant source of current government revenue.
To address these long-term concerns, Norway started setting aside year-end budget surpluses in 1996 to be invested in foreign stocks and bonds. Its express intention is to draw down these assets to pay the retirement costs of its baby boomers. It should be noted that other nations that have attempted to directly address their debt and pension problems have usually done so during or shortly after a fiscal or economic crisis. Fortunately, we do not have that problem. Instead, we have a unique opportunity to use our current good fortune to meet the challenges of the future. Finally, it is important to note that the President's proposal does not alter the projected payroll tax and benefit imbalances in the Social Security program. In addition, it does not come close to "saving Social Security." Benefit costs and revenues currently associated with the program will not be affected by even 1 cent. Figure 7, which shows Social Security's payroll tax receipts and benefit payments, illustrates this point. Without the President's proposal, payroll tax receipts will fall short of benefit payments in 2013; with the President's proposal, payroll tax receipts also fall short of benefit payments in 2013—the graph doesn't change at all. Under the President's proposal, expected stock market returns would be used to fill part of this gap, but from 2013 on the trust funds will need cash from redeemed Treasury securities, whether or not the President's proposal is adopted. Under the President's proposal, the changes to the Social Security program will be more perceived than real: although the trust funds will appear to have more resources as a result of the proposal, in reality, nothing about the program has changed. The proposal does not represent Social Security program reform, but rather a different means of financing the current program. Although the President has called for bipartisan cooperation to make programmatic changes, one of the risks of his proposal is that the additional years of financing it provides could very well diminish the urgency to achieve meaningful changes in the program. This would not be in the overall best interests of the nation. To achieve long-term solvency and sustainability, the Social Security program itself must be reformed. The demographic trends that are driving the program's financial problems affect the program well into the future. The impending retirement of the baby boom generation is the best known of these trends, but it is not the only challenge the system faces. If it were, perhaps a one-time financing strategy could be sufficient. But people are retiring earlier, birth rates have fallen, and life expectancies are increasing—all of these factors suggest that Social Security's financial problems will outlive the baby boom generation and continue far into the future. These problems cannot be addressed without changes to the Social Security program itself. Changes to the Social Security system should be made sooner rather than later. The longer meaningful action is delayed, the more severe such actions will have to be in the future. Changes made today could be relatively minor compared to what would be necessary years from now, when there would be less time for the fiscal effects of those changes to build. Moreover, acting now would allow any benefit changes to be phased in gradually so that participants would have time to adjust their saving or retirement goals accordingly.
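The arithmetic behind "sooner rather than later" can be sketched with a simple annuity calculation. The funding gap and rate of return below are hypothetical assumptions, not actuarial estimates; the point is only that the same gap requires much smaller annual amounts when compounding has more years to work.

```python
# A stylized sketch: closing the same funding gap requires smaller annual
# amounts when there are more years for compounding to work. The gap and
# rate are hypothetical assumptions, not actuarial estimates.

GAP = 3_000.0   # hypothetical present-value funding gap ($ billions)
RATE = 0.03     # assumed real return on amounts set aside

def level_annual_amount(gap: float, rate: float, years: int) -> float:
    """Level annual amount whose accumulated value closes the gap by year N."""
    future_gap = gap * (1 + rate) ** years             # gap grown to year N
    annuity_factor = ((1 + rate) ** years - 1) / rate  # future value of $1/year
    return future_gap / annuity_factor

for years in (35, 15):  # acting now versus delaying two decades
    amount = level_annual_amount(GAP, RATE, years)
    print(f"{years} years to adjust: about ${amount:,.0f} billion per year")
```

Under these assumptions, waiting 20 years nearly doubles the required annual adjustment, from about $140 billion to about $250 billion per year.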
It would be tragic indeed if this proposal, through its budgetary accounting complexity, masked the urgency of the Social Security solvency problem and served to delay much-needed action. There is another reason to take action on Social Security now: Social Security is not the only entitlement program needing urgent attention. In fact, the issues surrounding the Medicare program are much more urgent and complex. Furthermore, the many variables associated with health care consumption and Medicare costs and the personal emotions associated with health decisions make reform in this program particularly difficult.

To move into the future without changes in Social Security or health programs is to envision a very different role for the federal government. Assuming no financing or benefit changes, our long-term model (and that of CBO) shows a world in 2050 in which Social Security and health care absorb an increasing share of the federal budget. (See figure 8.) Budgetary flexibility declines drastically, and there is increasingly less room for programs for national defense, the young, infrastructure, and law enforcement—i.e., essentially no discretionary programs at all. Eventually, again assuming no program or financing changes, Social Security, health, and interest absorb nearly all the revenue the federal government takes in by 2050. This is true even if we assume that the entire surplus is saved and these continued surpluses reduce interest from current levels. As figure 9 shows, the picture is even more dramatic if we assume the entire surplus is used. In that scenario, lower GDP and higher interest payments lead to a world in which revenues cover only Social Security, health, and interest in 2030. And in 2050, revenues don't even cover Social Security and health!

Although views about the role of government differ, it seems unlikely that many would advocate a government devoted solely to sending checks and health care reimbursements to the elderly. Let us address Social Security for the long term today so that the nation can turn its attention to these other more pressing and difficult issues early in the new millennium. Look again at figure 8: Social Security is not the fastest growing portion of those bars—health grows faster. Much remains to be done in reforming entitlement programs, and engaging in meaningful Social Security reform would represent an important and significant first step. The Congress and the administration, working together, can find a comprehensive and sustainable solution to this important challenge.

I recognize, though, that restoring Social Security solvency is not easy. However, it is easy lifting compared to what faces us in connection with the Medicare program. Ultimately, any reforms to Social Security will address not only the relatively narrow question of how to restore solvency and assure sustainability but also the larger question of what role Social Security and the federal government should play in providing retirement income. Many proposals are being made to address these questions; choosing among them will involve difficult and complex choices, choices that will be critically important to nearly every American's retirement income. In my view, progress is likely to be greatest if we see these choices not as "either/or" decisions but rather as an array of possibilities along a continuum.
Combining elements of different approaches may offer the best chance to produce a package that addresses the problem comprehensively for the long term in a way that is meaningful and acceptable to the American people. For example, such a continuum may identify individual accounts that could serve as a voluntary or mandatory supplement to a financially sound and sustainable base defined benefit structure. In addition, master trust principles can be used to provide for collective investment of base defined benefit and individual account funds in ways that would serve to prevent political manipulation of investments. To help structure these choices, I would suggest five criteria for evaluating possible Social Security proposals.

Sustainable solvency: A proposal should eliminate the gap between trust fund resources and expenditures over 75 years and have the ability to sustain a stable system beyond that time period.

Equity: A proposal should create no "big winners" or "big losers." Those who are most reliant on Social Security for retirement and disability income should continue to receive adequate support; those who contribute the most would also benefit from participation in the system; and intergenerational equity would improve.

Adequacy: Consistent with Social Security's social insurance feature, a proposal should provide for a certain and secure defined benefit promise that is geared to providing higher replacement rates for lower-income workers and reasonable minimum benefits to minimize poverty among the elderly.

Feasibility: A proposal should be structured so that it could be implemented within a reasonable time period, it could be readily administered, and the administrative costs associated with it would be reasonable.

Transparency: A proposal should be readily understandable to the general public and, as a result, generate broad support.

Applying such criteria will require a detailed understanding of the possible outcomes and issues associated with the various elements of proposals. We are working to provide the data, information, and analysis needed to help policymakers evaluate the relative merits of various proposals and move toward agreement on a comprehensive Social Security reform proposal.

Budget surpluses provide a valuable opportunity to capture significant long-term gains to both improve the nation's capacity to address the looming fiscal challenges arising from demographic change and aid in the transition to a more sustainable Social Security program. The President's proposal may prompt a discussion and decision on both how much of our current resources we want to save for the future and how we can best do so. The President's proposal is both wide-ranging and complex, and it behooves us to clarify the consequences for both our national economy and the Social Security program. A substantial share of the surpluses would be used to reduce publicly held debt, providing demonstrable gains for our economic capacity to afford our future commitments. In this way, the proposal would help us, in effect, prefund these commitments by using today's wealth earned by current workers to enhance the resources for the next generations. Saving a good portion of today's surpluses can help future generations of workers better afford the billowing costs of these commitments, but this is only one side of the equation. We must also reform the programs themselves to make these commitments more affordable.
Even if we save the entire surplus over the next 50 years, Social Security and health programs will double as a share of the economy and consume nearly all federal revenues, essentially crowding out all other spending programs. Thus, it is vital that any proposal to enhance economic growth be accompanied by real entitlement reform.

The transfer of surplus resources to the trust fund, which the administration argues is necessary to lock in surpluses for the future, would nonetheless constitute a major shift in financing for the Social Security program, but it would not constitute real Social Security reform because it does not modify the program's underlying commitments for the future. Moreover, the proposed transfer may very well make it more difficult for the public to understand and support the savings goals articulated. Several other nations have shown how debt reduction itself can be made publicly compelling, but only you can decide whether such an approach will work here. I am very concerned that enhancing the financial condition of the trust fund alone, without any comprehensive and substantive program reforms, may in fact undermine the case for fundamental program changes. In addition, explicitly pledging federal general revenues to Social Security will limit the options for dealing with other national issues.

The time has come for meaningful Social Security reform. Delay will only serve to make the necessary changes more painful down the road. We must be straight with the American people: achieving the goal of "saving Social Security" will require real options to increase program revenues and/or decrease program expenses. There is no "free lunch." After all, we have much larger and more complex challenges to tackle, such as the Medicare program.

As you consider various proposals, you should consider the following questions:

How much of the unified budget surplus should go to debt reduction versus other priorities?

If we are to use some portion of the surplus to reduce publicly held debt, is the President's proposed approach the way to do this?

Should Social Security be financed in part by general revenues?

Should the SSTF invest in the stock market?

How can we best assure the solvency, sustainability, equity, and integrity of the Social Security program for current and future generations of Americans?

How can we best increase real savings for the future?

How can we best assure the public's understanding of and support for any comprehensive Social Security reform proposal?

We at GAO stand ready to help you address both Social Security reform and other critical national challenges. Working together, we can make a positive and lasting difference for our country and the American people.
Pursuant to a congressional request, GAO discussed the President's proposal for addressing social security and use of the budget surplus. GAO noted that: (1) the President's proposal: (a) reduces debt held by the public from current levels, thereby also reducing new interest costs, raising national saving, and contributing to future economic growth; (b) fundamentally changes social security financing by promising general funds in the future by, in effect, trading publicly held debt for debt held by the Social Security Trust Fund and by investing some of the trust fund in equities with the goal of capturing higher returns over the long term; (c) does not have any effect on the projected cash flow imbalance in the social security program's taxes and benefits which begins in 2013; and (d) does not represent a social security reform plan and does not come close to saving social security; (2) budget surpluses provide a valuable opportunity to capture significant long-term gains to both improve the nation's capacity to address the looming fiscal challenges arising from demographic change and aid in the transition to a more sustainable social security program; (3) the President's proposal may prompt a discussion and decision on both how much of the current resources the nation wants to save for the future and how it can best do so; (4) a substantial share of the surpluses would be used to reduce publicly held debt, providing demonstrable gains for the nation's economic capacity to afford future commitments; (5) in this way, the proposal would help the nation, in effect, prefund these commitments by using today's wealth earned by current workers to enhance the resources for the next generations; (6) the transfer of surplus resources to the trust fund, which the administration argues is necessary to lock in surpluses for the future, would nonetheless constitute a major shift in financing for the social security program, but it would not constitute real social security reform because it does not modify the program's underlying commitments for the future; (7) moreover, the proposed transfer may very well make it more difficult for the public to understand and support the savings goals articulated; (8) several other nations have shown how debt reduction itself can be made publicly compelling, but only Congress can decide whether such an approach will work in the United States; (9) GAO is very concerned that enhancing the financial condition of the trust fund alone, without any comprehensive and substantive program reforms, may in fact undermine the case for fundamental program changes; and (10) explicitly pledging federal general revenues to social security will limit the options for dealing with other national issues.
In response to tribes’ concerns that BIA had not consistently provided them with statements on their account balances, that their trust fund accounts had never been reconciled, and that BIA planned to contract with a third party for management of trust fund accounts, the Congress established the requirement in the Interior Department’s fiscal year 1987 supplemental appropriations act that BIA reconcile trust fund accounts before they could be transferred to a third party. In Interior’s fiscal year 1990 appropriations act, the Congress required that BIA reconcile the accounts to the earliest possible date. In a March 1990 decision interpreting this requirement, we concluded that “Congress’s evident purpose is to obtain, to the greatest extent possible, reliable baseline balances in the various accounts.” In 1990, BIA decided to address the legislative requirement that it reconcile trust fund accounts by contracting for a reconstruction of historical transactions, to ensure that tribal and individual accounts were reconciled as accurately as possible back to the earliest possible date based on available records. In May 1991, BIA awarded a reconciliation contract valued at $12 million over a 5-year period to a major independent public accounting firm. Following a preliminary assessment of the feasibility of reconciling accounts to the earliest date possible, BIA’s reconciliation contractor reported in March 1992 that records were available to research tribal accounts for fiscal years 1973 through 1992. BIA’s contractor also reported that due to the level of effort and associated cost and the potential for missing documentation, it was not feasible to reconcile Individual Indian Money (IIM) accounts for individual Indians. In addition, BIA determined that its contractor should use alternative procedures, rather than specific transaction testing, to verify tribal account balances where insufficient documents were available to reconstruct the accounting or where more efficient approaches were identified. In addition to requiring that the accounts be reconciled to the earliest possible date, Interior’s fiscal year 1990 appropriations act required an independent certification that the reconciliation resulted in the most complete reconciliation possible. In September 1993, BIA awarded a certification contract for $1.2 million to another major independent accounting firm to verify that the reconciliation procedures were performed in accordance with the reconciliation contract. BIA terminated the certification contract as of November 30, 1995. As of February 14, 1996, BIA had obligated over $21 million for the 5-year reconciliation effort, including $18.3 million for reconciliation work and $2.8 million for certification work. The American Indian Trust Fund Management Reform Act of 1994 required the Secretary of the Interior to provide tribes with reconciled account statements as of September 30, 1995. To meet this requirement, BIA included reconciled account statements, which it prepared for fiscal years 1993 through 1995, in the reconciliation report package for each tribe. The act also requires the Secretary of the Interior to report to the Senate Committee on Indian Affairs and the House Committee on Resources by May 31, 1996, (1) methodologies used to reconcile the accounts, (2) whether tribes accept or dispute their reconciled account balances, and (3) how the Secretary plans to resolve any disputes. 
BIA’s Office of Trust Funds Management (OTFM) was responsible for carrying out the reconciliation and certification effort. As of the end of fiscal year 1995, OTFM reported that it managed and accounted for approximately $2.6 billion in Indian trust funds—about $2.1 billion for about 1,500 tribal accounts and about $453 million for nearly 390,000 IIM accounts. The balances in the trust fund accounts have accumulated primarily from payments of claims; oil, gas, and coal royalties; land use agreements; and investment income. Fiscal year 1995 reported receipts to the trust accounts from these sources totaled about $1.9 billion, and disbursements from the trust accounts to tribes and individual Indians totaled about $1.7 billion. To provide our observations on the results of the reconciliation and certification efforts, we reviewed reconciliation and certification contracts and issue papers, contractor status reports and memoranda, and prototype reconciliation report drafts. We met with Interior, BIA, and Office of Management and Budget (OMB) officials, including BIA’s Special Assistant to the Deputy Commissioner of Indian Affairs for the reconciliation project (Reconciliation Project Manager), Interior’s Special Trustee for American Indians, and representatives of the independent accounting firms that BIA contracted with to perform the reconciliation and certification to discuss our concerns about the reconciliation effort and the certification contract. To obtain tribes’ views on the reconciliation and certification efforts, we contacted representatives of the Intertribal Monitoring Association (ITMA), which represents a number of tribal account holders, and representatives of non-ITMA member tribes. We attended BIA’s February 1996 National Meeting in Albuquerque, New Mexico, to observe Interior’s and BIA’s presentation on the reconciliation procedures, reports, and results and the tribes’ responses. We conducted our work between April 1995 and March 1996 at BIA’s headquarters in Washington, D.C., and its Office of Trust Funds Management in Albuquerque, New Mexico. Our work was performed in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Interior Department’s Special Trustee for American Indians. On April 2, 1996, we received written comments from BIA’s Reconciliation Project Manager. These comments are discussed in the “Agency Comments and Our Evaluation” section of this report. While we are not reprinting these comments, copies are available from GAO. Although BIA identified about 20,000 boxes of accounting documents and lease records and spent about 5 years attempting to reconcile tribal trust accounts, sufficient records were not available to fully reconcile the accounts. For example, BIA’s reconciliation contractor verified 218,531 of tribes’ noninvestment receipt and disbursement transactions totaling $15.3 billion, or 86 percent, of the $17.7 billion in transactions that were recorded in the general ledger. However, due to missing records, the contractor was not able to verify 32,901 of these transactions totaling $2.4 billion (gross). In addition, BIA was not able to determine the total amount of receipts and disbursements that should have been recorded and had no reconciliation procedure to address the completeness of the accounting records. BIA’s contractor also specifically tested $21.3 billion, or 16 percent, of the investment transactions. 
According to BIA’s Reconciliation Project Manager, in order to achieve efficiencies, BIA decided to verify investment activity by asking its contractor to perform alternative procedures to review interest yields. BIA performed related procedures to reconcile investment system balances and BIA’s contractor identified deposit lag times (for information purposes only) on collections. However, the completeness of these procedures was also impacted by missing records. BIA’s contractor reconciled 692 leases with collections greater than $25,000 and collections for 227 months for 213 timber sales contracts for certain tribes. BIA’s contractor reported that $601 million, or 99 percent, of lease receipts tested were verified. However, this represented only 10.7 percent of the leases originally identified for testing. Because BIA did not know the universe of leases, it could not determine total lease revenue expected to be collected during a given period and, therefore, it could not reliably determine the percent of lease revenue that had been tested. Further, not all of the reconciliation procedures specified in BIA’s reconciliation contract were performed and others could not be completed due to missing records, the lack of an audit trail through BIA’s systems, and time and cost constraints. For example, BIA did not reconcile its subsidiary system to its general ledger and BIA could not complete the reconciliation of its Finance System (general ledger) transactions to Treasury records. Also, as stated earlier, because of the cost and level of effort and the potential lack of supporting documents, reconciliations of about 300,000 individual Indian accounts were not performed and no alternative procedures were developed. Appendix I contains detailed information on reconciliation procedures and results. In January 1996, BIA provided to each tribe a report package on the results of the reconciliation procedures performed by its contractor for fiscal years 1973 through 1992, BIA’s reconciliations for fiscal years 1993 through 1995, and a transmittal letter which described the information provided and BIA’s plans to meet with tribes to discuss the reconciliation results. We reviewed several drafts of BIA’s reconciliation report package and provided oral and written comments and suggestions to OTFM between May 1995 and January 1996. We suggested that the usefulness of their report could be increased by clarifying technical terms so that the report would be more understandable to nonaccountants. We also suggested that BIA identify methodological changes addressed in contract modifications and issue papers and disclose scope limitations as part of the reconciliation report package. BIA’s reconciliation contractor clarified technical language in the agreed-upon procedures report and stated the scope of the work performed. However, BIA did not disclose in the report package to tribes the procedures specified in the reconciliation contract which were not performed or could not be completed and the reasons. In addition, for the procedures which were performed, BIA did not fully disclose scope limitations or changes in methodologies, such as accounts and time periods that were not covered and alternative source documents used. While some scope limitations were discussed at the February 1996 National Meeting with tribes, BIA did not explain all methodological changes resulting from contract modifications and issue papers. 
BIA modified the reconciliation contract 29 times and approved approximately 140 issue papers, including about 90 that addressed changes in tribal reconciliation scope and procedures. For example, issue papers determined that certain adjustments relating to transfers would be reflected as of their general ledger posting date rather than the date that the original transaction occurred. Using the general ledger posting date instead of the transaction date could impact tribal interest calculations. Other issue papers determined that certain procedures could not be performed for specific tribes due to missing records. We suggested that substantial changes in the scope or procedures as a result of contract modifications and issue papers be explained in the report package transmitted to the tribes. BIA considered providing issue papers to tribes on compact disk. However, the Reconciliation Project Manager told us that due to cost considerations, BIA decided instead that these issue papers would be made available to tribes at the OTFM in Albuquerque, New Mexico, or that tribes could request copies of specific documents by mail.

According to OTFM officials, a reconciliation report package was issued to each of 269 tribes in January 1996. The reports included summary results for all tribes and specific results on each tribe's accounts. In addition, on March 8, 1996, OTFM issued reports to 112 tribes on their portions of multitribe judgment awards. These judgment awards resulted from claims against the federal government. However, OTFM's Reconciliation Project Manager told us that OTFM may not be able to issue reports to all of the tribes involved in multitribe awards because some are no longer federally recognized as tribal entities, and BIA may not be able to locate the tribes or their descendants.

The fiscal year 1990 appropriations act required a separate, independent certification that the accounts had been reconciled and audited to the earliest possible date and that the results were the most complete reconciliation possible. The certification requirement was imposed to obtain independent assurance of the accuracy and reliability of the reconciled balances. After the certification contract was awarded in September 1993, congressional committees and several tribes expressed concern about the objective of BIA's certification contract because BIA limited the scope of the certification contract to ensure only that the reconciliation effort was performed in accordance with the reconciliation contract.

During the summer of 1995, Interior, OTFM, OMB, and the reconciliation and certification contractors' staff worked on modifying the certification contract to more fully explain each of the reconciliation procedures that the certification contractor was to verify. To meet the act's certification requirement, we suggested that the certification contract focus on the extent to which the reconciliation procedures resulted in as complete an accounting as possible. However, Interior and OTFM officials told us that they believed the reconciliation procedures, as designed, provided reasonable assurance that the account balances were accurate and that contractor certification on this point was not needed. Therefore, the certification contract focused on verifying that the reconciliation procedures specified in the reconciliation contract had been performed; no independent assessment of completeness was required.
In October 1995, the certification contractor estimated that it would require an additional 6 months and $1.2 million to complete the certification work. According to OTFM's Reconciliation Project Manager, only $600,000 was available to cover the additional work, and it was not clear that the work could be completed in 6 months. As a result, Interior and BIA decided to terminate the certification effort as of November 30, 1995, and to obtain a status report from the contractor. Because the contract was terminated, BIA's certification contractor did not complete its verification that the procedures in the reconciliation contract and related issue papers were performed.

The certification contractor issued a status letter on November 30, 1995, which communicated the certification contract scope, methodologies, and preliminary results of 30 segments of the reconciliation work, including specific transaction testing, investment analyses, systems reconciliations, and pilot tribe reconciliation work. The status letter identified the following: 16 segments where errors or inconsistencies were reported to OTFM, including 8 segments with numerous errors and inconsistencies and 3 segments with methodological concerns; 12 segments where work was not performed by the certification contractor or where information was insufficient to provide results; and 2 segments where no errors were identified.

OTFM's Reconciliation Project Manager told us that the reconciliation contractor had addressed all of the issues and questions raised by the certification contractor as of November 30, 1995, and that BIA was following up to obtain clarification on whether the certification contractor had communicated all findings to BIA. Because the certification work was performed while the reconciliation was in process and the certification procedures were not completed, the usefulness of the status letter is limited.

In February 1996, OTFM and reconciliation contractor officials conducted a 2-day meeting with tribes in Albuquerque, New Mexico, to discuss the reconciliation reports and results. BIA had invited all 269 tribes that had received reconciliation reports, and representatives of 79 of these tribes attended the national meeting. At the meeting, OTFM and reconciliation contractor officials summarized the reconciliation results and answered tribes' questions. Tribes raised questions about the (1) adequacy of the objectives and the scope of the reconciliation project, (2) effect of missing documents on the accuracy of the reconciled account balances, and (3) thoroughness of procedures used for testing the accuracy of recorded investment interest income.

Also, tribal representatives said they were concerned that the reconciliation procedures did not provide the same level of assurance as an audit, that BIA rather than the reconciliation contractor had performed some portions of the reconciliation, and that the number of missing records further limited the assurance provided by the reconciliation results. In addition, tribal representatives said that the investment analyses did not reflect uninvested funds associated with deposit lag times. They were concerned that unearned interest associated with deposit lag times between BIA's receipt of funds and its deposit of the funds in a Treasury-designated federal depository bank could be significant.
The Reconciliation Project Manager explained that while an audit could not be performed due to the number of missing records, the reconciliation contractor performed agreed-upon procedures to attempt to verify account balances. He said that the results of the procedures performed were presented in the auditor's agreed-upon procedures report to each tribe, which was prepared in accordance with American Institute of Certified Public Accountants standards. OTFM's Director said that OTFM will consider having an independent review of the reconciliation work that BIA performed.

The Reconciliation Project Manager explained that the investment analysis was a review of actual investment earnings and, therefore, it did not consider the effect of undeposited receipts or whether the funds earned maximum interest for secured investments. He also explained that for the pro forma analysis, interest was calculated at the benchmark rate for "uninvested funds" in BIA's cash pool that earned interest at the Treasury overnight rate, and the comparisons were presented in the tribes' reports for information purposes. The Reconciliation Project Manager also said that while many of the actual collection dates needed to identify the extent of the deposit lag times were not known, tribes could estimate interest amounts for the deposit lag times by using the information provided in their reconciliation reports.

In October and November 1990, during discussions between ITMA and BIA on the reconciliation procedures to be performed, ITMA requested that the reconciliation contract identify deposit lag times because it believed that the related unearned interest could be significant. BIA agreed to identify the lag times as a reconciliation procedure; however, BIA did not agree to propose adjustments to pay the lost interest. Because the law requires the Secretary of the Interior to invest and pay interest on tribal funds, ITMA stated that if BIA did not propose interest adjustments related to the deposit lag times, this information should be available for settlement negotiations. According to the Reconciliation Project Manager, the deposit lag times provided in the reconciliation reports can be used by tribes in any settlement discussions with the government.

Tribes stated that they would need significant time to review their reconciliation reports and the supporting documents. OTFM's Reconciliation Project Manager said that tribes could meet BIA's April 19, 1996, deadline for submitting acknowledgement forms on their response to the reconciliation results by indicating on the form that they needed more time to review their reports. According to the Reconciliation Project Manager, BIA had anticipated that tribes might need more time to review their reconciliation reports. As a result, BIA's acknowledgement forms ask tribes to indicate (1) the need for additional time to review the reported results and account statements, (2) the account balances they accept as reconciled, and (3) the account balances they dispute.

According to the Reconciliation Project Manager, as of April 16, 1996, OTFM had received acknowledgement forms from 21 of the 269 tribes that had received a report on their reconciliation results. Of these acknowledgements, 12 tribes indicated that they needed more time, 8 tribes requested individual meetings, and 1 tribe accepted the account balances as reconciled.
The Reconciliation Project Manager told us that if a tribe accepts the reconciled account balances as correct before it attends a regional meeting, OTFM will follow up to ensure that the tribe's response reflects a clear understanding of the reconciliation reports and results. Appendix II contains additional information on tribal concerns and OTFM's responses.

OTFM planned five regional meetings between March 1996 and July 1996 to serve as workshops to assist individual tribes in reviewing and understanding their reconciliation results. The Reconciliation Project Manager encouraged tribal representatives to carefully review their reconciliation reports, account statements, and the supporting documents for the basic reconciliation that BIA provided to the tribes on compact disks. He also urged the tribes to send their accountants to the regional meetings, where each tribe's representatives will be allotted time to meet with the reconciliation contractor and to ask specific questions about their tribe's trust accounts.

According to the Reconciliation Project Manager, OTFM will not be able to complete the planned regional meetings with tribes on the reconciliation results until July 20, 1996. As a result, the Secretary of the Interior plans to meet the May 31, 1996, reconciliation reporting requirement in the American Indian Trust Fund Management Reform Act by providing an interim report to the House and Senate Committees by that date and a final report after the regional meetings are completed.

Our past testimonies and reports anticipated that when the reconciliation was completed, there might not be agreement on reconciled account balances. Our April and May 1991 testimonies stated that it would be difficult to locate records to support the reconciliation effort and that, following the reconciliation, some or all accounts might need to be settled. Our June 1992 report recommended that BIA develop a proposal for reaching a satisfactory resolution of the trust account balances with account holders. Our report also stated that the BIA reconciliation contractor's latest cost estimate at that time for reconciling individual Indian accounts ranged from $180 million to $281 million and that, because many accounts are not reconcilable, alternative approaches to reach agreement on account balances would be necessary.

In March 1995, we testified that further tribal reconciliation work would not provide reasonable assurance that the account balances are accurate and that the time had come for the Congress to consider legislating a settlement process that could include both tribal and individual Indian accounts. Following our March 1995 testimony, your Committee and the House Committee on Resources, Subcommittee on Native American and Insular Affairs, asked us to prepare, for discussion purposes, draft legislation to establish a settlement process. We issued this draft legislation in September 1995. Reports and testimonies related to our work are listed at the end of this report.

Although OTFM undertook a massive effort to reconcile tribal accounts, missing records and systems limitations made a full reconciliation impossible. Because BIA does not know the universe of transactions or leases, it does not know the total amount of receipts and disbursements that should have been recorded. Tribes have raised a number of concerns about the adequacy and reliability of the reconciliation results.
If follow-up meetings with tribes do not resolve these concerns, the settlement process that we have previously recommended could be used as a framework for resolving disagreements on account balances. In addition, due to cost considerations and the potential lack of supporting documentation, reconciliations for individual Indian accounts were not performed, and no alternative procedures were developed to verify these account balances. Since any attempt to reconcile these accounts would be costly and the results would be limited, these accounts could be included in the settlement process.

The Interior Department's comments consisted primarily of numerous technical clarifications, which we incorporated where appropriate. The comments neither agreed nor disagreed with our overall message and conclusion that the accounts could not be fully reconciled and that a settlement process could provide a useful framework for resolving disagreements about account balances. However, BIA disagreed with our position that limitations in reconciliation scope and methodologies needed to be disclosed to provide useful information on the completeness of the reconciliation results.

The reconciliation requirement as legislated by the Congress was to reconcile the accounts to the earliest possible date and ensure, through independent certification, that the reconciliation was as complete as possible. Further, the Congress, in the American Indian Trust Fund Management Reform Act, required BIA's report to include a description of the reconciliation methodology and the account holder's conclusion as to whether the reconciliation represents as full and complete an accounting of its funds as possible. Therefore, in order for the tribes and the Congress to understand the reconciliation results and determine whether the reconciliation represents as full and complete an accounting as possible, it was important that BIA explain the limitations in reconciliation scope and procedures, including procedures that were not performed or were not completed.

Our report addresses several areas where our work identified significant reconciliation limitations and changes in procedures and methodologies that we believe should have been disclosed by BIA. These areas include the lack of a known universe of transactions and leases, the use of issue papers to approve changes in reconciliation scope and procedures due to unforeseen circumstances, and reconciliation procedures that could not be completed or were not performed. This additional information provides an important context for understanding the reconciliation results.

We are sending copies of this letter to the House Committee on Resources; the Secretary of the Interior; the Special Trustee for American Indians; the Assistant Secretary, Indian Affairs; the Director of the Office of Management and Budget; and other interested parties. Please contact me at (202) 512-9508 if you or your staff have any questions concerning this report. Appendix III lists major contributors to this report.

The reconciliation effort was to cover reconstruction of trust fund account activity, to the extent that records were available, using eight major reconciliation procedures. Due to missing records, the lack of an audit trail in BIA's systems, and cost and time constraints, not all reconciliation procedures could be completed and some procedures were not performed. BIA's reconciliation contractor performed reconciliation procedures for fiscal years 1973 through 1992.
To meet the requirement in the American Indian Trust Fund Management Reform Act of 1994 that the reconciliation reports include the results of reconciliations through September 30, 1995, the reconciliation report packages provided to the tribes include the results of reconciliations performed by BIA for fiscal years 1993 through 1995. The report packages also include the results of reconciliations that BIA performed between the investment system and the Finance System (general ledger) for 26 tribes. The following summary addresses the reconciliation procedures that were performed by the contractor and those that could not be performed or were not completed. The six major reconciliation procedures that were performed covered (1) transactions, (2) investment yields, (3) deposit lag times, (4) selected systems, (5) special procedures for five tribes, and (6) lease receipts.

This segment of the reconciliation included tracing a total of 251,432 recorded noninvestment receipt and disbursement transactions from the general ledger to source documents, such as deposit tickets, disbursement vouchers, and journal vouchers. OTFM's reconciliation contractor reported that $15.3 billion, or 86 percent of the total $17.7 billion in noninvestment transactions for fiscal years 1973 through 1992, had been verified. According to OTFM's Reconciliation Project Manager, noninvestment transactions for 83 tribes were fully reconciled under this procedure and, for the transactions reconciled, BIA identified a probable error rate of only 0.01 percent. Where errors were identified, adjustments were proposed.

Due to missing records, 32,901 of the noninvestment transactions, totaling $2.4 billion (gross), could not be reconciled. According to Interior and OTFM documents, the $2.4 billion included the following transactions, which could not be traced to supporting documentation: $1.1 billion in receipts credited to tribal accounts that earned interest; $0.8 billion in tribal drawdowns (disbursements) of their account balances, refunds, and canceled checks; and $0.5 billion in internal transfers between the same tribe's accounts. In addition, BIA was not able to determine the total amount of receipts and disbursements that should have been recorded. Therefore, the reconciliation project focused on transactions that were posted to BIA's general ledger for tribal accounts, and no reconciliation procedure was performed to address the completeness of the accounting records. Further, the reconciliation report states that BIA, based on its institutional knowledge, did not accept all adjustments proposed by the reconciliation contractor.

BIA's contractor also reconciled $21.3 billion, or 16 percent, of the recorded investment transactions as part of the basic reconciliation process. According to BIA's Reconciliation Project Manager, in order to achieve efficiencies, BIA decided to terminate the detailed transaction reconciliations. Instead, BIA asked its reconciliation contractor to verify investment transactions by performing procedures to review investment yields rather than testing individual transactions. BIA's contractor also identified deposit lag times for BIA collections and reconciled investment system balances.

This segment of the reconciliation included an investment yield analysis to compare tribes' interest earnings to the BIA benchmark rate, which was the annual average yield for all tribal funds invested.
Any account’s annual yield that was at least 2 percentage points below or 5 percentage points or more above the annual benchmark was investigated for errors. BIA’s contractor also recalculated interest earnings on tribal investments in overnight Treasury deposits and compared interest received by tribes to the applicable Treasury rate. As a result of research on variations from the benchmark parameters and the historical Treasury interest rates, adjustments were proposed. In addition to the yield analysis and Treasury interest analysis, BIA’s contractor performed a pro forma analysis to estimate what might have been earned had “uninvested funds” (funds in BIA’s cash pool that earned interest at the Treasury overnight rate) yielded returns comparable to the benchmark rates. The results of this procedure were provided for informational purposes and no adjustments were proposed. Deposit lag times represent the number of days from the date funds were received by BIA to the date that the funds were deposited in a Treasury-designated federal depository bank. Because the date that the collections were received by BIA’s various offices was not always clearly documented on the receipt documents, BIA established a hierarchy for determining surrogate collection dates. For example, if the receipt date did not appear on the collection voucher, the established hierarchy of surrogate dates was as follows—the most recent date on the collection voucher subsequent to the date on the payment check received, the stamped date that the voucher was processed, the date that the voucher was prepared, and the date that the voucher was approved. The reconciliation report showed that transactions analyzed for lag times for the 20 years covered by the reconciliation totaled about $3.2 billion. These funds were deposited between the established collection date and 30 days or more following the established collection date. The lag time information was provided for information purposes. No interest calculations were reported and no adjustments were proposed for interest lost as a result of deposit lag times. As stated earlier, ITMA requested that BIA present this information in the reconciliation reports. The systems reconciliation was to include reconciling (1) information in BIA’s trust fund investment system to its general ledger in BIA’s Finance System, (2) BIA’s tribal general ledger in the Finance System to U.S. Treasury records, and (3) BIA’s Integrated Records Management System (IRMS) to Finance System. The IRMS to Finance System reconciliation was not performed and is discussed in the next section of this appendix. The investment system to Finance System reconciliation covered investment balances as of September 30, 1992. BIA performed the reconciliations for 26 tribes and proposed adjustments totaling nearly $1.9 million. BIA’s contractor’s reconciliation report disclosed the procedures that BIA had performed. To support the reconciliation of its tribal general ledger transactions in BIA’s Finance System to Treasury reported transactions, OTFM provided available tribal Treasury reports (SF-224, Statement of Transactions Reports) for fiscal years 1990 through 1992 to the reconciliation contractor. BIA’s contractor completed the fiscal year 1992 reconciliation and included the results in BIA’s January 1996 report package to tribes. 
However, BIA’s reconciliation contractor was not able to complete the fiscal years 1990 and 1991 Finance System reconciliations in time to include them in the January 1996 report package due to differences in the way that BIA and Treasury summarize the tribal trust account activity, which made the reconciliation between their systems difficult. For example, BIA’s SF-224, Statement of Transactions Report to Treasury, did not provide sufficient detail to distinguish tribal accounts from other fund accounts. As a result, tremendous effort was needed to reconstruct tribal account transactions from the source documents for fiscal years prior to 1992. According to BIA’s Reconciliation Project Manager, a supplemental report on the fiscal years 1990 and 1991 Finance System reconciliations is being finalized for distribution to each tribe. BIA’s contractor proposed adjustments to BIA’s general ledger and also proposed reporting corrections to Treasury for variances where supporting documentation was available. No adjustments were proposed where supporting documentation could not be located. This effort was designed to perform agreed-upon procedures on an accelerated, pilot basis to identify potential problem areas. Five tribesagreed to participate in the special procedures review. The purpose of this work was to determine the workability of the procedures; however, as specified in the reconciliation contract, this work was to be performed simultaneously with other reconciliation work. BIA prepared a Memorandum of Understanding (MOU) for each tribe to cover both standard and special procedures. Our review of the approved MOUs for each of the five tribes showed that their special procedures generally covered timeliness of payments and deposits, internal control reviews, and special deposit accounts. The MOUs also covered specific areas of concern to each tribe, such as a detailed analysis of certain accounts. We did not review the reconciliation reports provided to these tribes. These procedures included verifying tribal income by tracing general ledger postings to the original source documents, including leases, sales agreements, and production reports. Receipts tested covered oil, gas, and coal royalties; timber sales; other surface leases, such as business leases; and grazing, hunting, fishing, and rights of way. Samples tested were generally selected based on the availability of supporting documentation. The BIA reconciliation contractor’s analysis of the general ledger information showed that 9 percent of the leases represented 95 percent of recorded lease revenues. Based on this analysis, the contract called for a review of all leases greater than $5,000 and a test sample of 100 additional leases of less than $5,000 on a cross section of tribes. The reconciliation contractor globally identified 6,446 surface leases with annual collections of over $5,000. However, due to time constraints for completing the reconciliation, 1,399 leases with collections greater than $25,000 were identified for testing, of which OTFM located 755 lease files. Of the lease files located, 692 leases were tested. Because of missing records, a number of leases and sample test months were substituted for those in the original sample. BIA’s reconciliation contractor reported that 99 percent of the lease receipts tested were verified. The leases tested represent 10.7 percent of the leases known to have annual collections greater than $5,000 and about one half of the leases known to have collections greater than $25,000. 
The reconciliation contractor also judgmentally selected and tested 227 sample months for 213 timber sales contracts for five tribes with significant timber receipts, as well as oil and mineral receipts for one tribe. BIA's reconciliation contractor reported that 99.7 percent of the timber receipts tested and 93.9 percent of the oil and mineral receipts tested were verified. Overall, BIA's contractor reported that 98.7 percent of the lease revenues tested were reconciled.

Not all reconciliation procedures that were specified in BIA's initial reconciliation contract could be performed or completed, due to missing records and the time and cost constraints associated with the need to locate and trace numerous manual records. However, BIA's transmittal letter to tribes did not disclose the inability to complete these procedures. Reconciliation procedures that could not be performed or completed covered (1) reconciling the IRMS (subsidiary system) to the Finance System (general ledger) and reconciling the Finance System to Treasury transactions for fiscal years prior to 1990, (2) verifying balances in tribal IIM and special deposit accounts, (3) verifying Minerals Management Service (MMS) royalty collections, and (4) reconciling accounts of individual Indians.

While BIA officials told us that the IRMS reconciliation was not performed due to time and funding limitations, we believe that even without those limitations, the lack of an audit trail in the IRMS system—including the lack of distribution tables to support disbursements—would have prevented reconciliation of tribal IIM and special deposit accounts. It also would have prevented or severely limited IRMS to Finance System reconciliations. In addition, the Finance System was not reconciled to Treasury for fiscal years prior to 1990.

This initiative was to include exploratory work on the reconciliation of tribal IIM and special deposit accounts for the five tribes that participated in the special procedures pilot work. Tribal IIM accounts maintained in the IRMS system were to be reconciled to the source documents, and tribal special deposit accounts were to be reconciled from the source documents that moved the funds to the tribes' general ledger accounts. Due to missing records and the lack of an audit trail through the IRMS system, BIA determined that tribal transactions could not be efficiently isolated from individual Indian transactions. According to OTFM's Reconciliation Project Manager, the special deposit account work for each of the five tribes was completed and the results were included in their reconciliation reports. However, special deposit account reconciliations related to leases were not performed because of a change in BIA's method for selecting leases, which excluded leases with multiple owners for which payments could not be identified to each owner.

These procedures were requested by ITMA to fill the gap between the posting of collection transactions and the leases in order to determine whether the MMS Indian royalty accounting data transferred to BIA were reliable. The initial work was to include a review of MMS procedures and documents in order to evaluate the feasibility and level of effort needed to perform detailed fill-the-gap work for MMS receipts and to recommend test procedures. Because MMS retained records for only 6 years, records for most of the 20-year reconciliation period were not available.
As a result, BIA asked its reconciliation contractor to recommend procedures to verify that MMS followed its royalty collection and accounting procedures. However, the procedures proposed by BIA's contractor would not have traced collections from the leases to the general ledger. The verification of MMS' procedures, which was to be performed in fiscal year 1996, was not performed because the reconciliation project was brought to a close as of September 30, 1995.

Our June 1992 report stated that many of the approximately 300,000 IIM accounts were not reconcilable due to missing records and the cost of reconciling a large number of accounts with small balances. BIA's reconciliation contractor initially estimated a cost ranging from $211 million to nearly $400 million. A subsequent scope reduction decreased the estimate to between $180 million and $281 million, which was about one-half of the reported $440 million balance of the IIM accounts as of September 30, 1991. BIA's reconciliation contract did not include IIM accounts.

In our June 1992 report, we recommended that BIA consider alternative approaches to reach agreement on IIM account balances, such as negotiating agreements with account holders. In 1991, BIA established a work group to develop IIM reconciliation approaches and alternatives. In 1995, the work group identified a number of reconciliation alternatives and policy questions for presentation to BIA and Interior management, including statistical sampling, the use of dollar ceilings, reconciling only the time periods for which records are available, and sending account statements to account holders for them to confirm or question the balances. However, as of March 1, 1996, no decision had been made on workable IIM account reconciliation alternatives.

At BIA's February 1996 National Meeting to explain reconciliation reports and results, tribes raised a number of concerns, including the (1) adequacy of the objectives and scope of the reconciliation project, (2) effect of missing documents on the accuracy of the reconciled account balances, and (3) thoroughness of procedures used for testing the accuracy of recorded investment interest income. The following discussion highlights the tribes' concerns and OTFM's responses.

Tribal concerns about the reconciliation project's objectives and scope included the following: the lack of an audit and how this affected the reliability of the reconciled account balances, the failure to include fraud detection as a reconciliation objective, the reliability of portions of the reconciliations that BIA rather than the independent contractor had performed and adjustments that BIA had proposed, and the fact that the effort seemed to consist mainly of a reconciliation of BIA accounts with BIA-generated documents.

In response to these concerns, OTFM and reconciliation contractor officials explained the following: The accounts could not be audited due to missing records and, as a result, the reconciliation consisted of agreed-upon procedures to verify account balances to the extent practicable. While detection of fraud was not a reconciliation objective, no instances of fraud were identified by the reconciliation contractor. BIA had reconciled investment system data for several years before the reconciliation effort began and did not believe that it was cost-effective to repeat this work.
Because the American Indian Trust Fund Management Reform Act of 1994 required that BIA provide tribes with reconciled account statements as of September 30, 1995, the statements include the results of reconciliation procedures performed by BIA’s contractor for fiscal years 1973 through 1992 and the results of OTFM’s systems reconciliations for fiscal years 1993 through 1995. OTFM will consider having an independent auditor review the results of the procedures performed and adjustments proposed by BIA. Tribal authorizations for withdrawals of trust funds and Treasury receipt and disbursement documentation were reviewed during the reconciliation. Tribal representatives pointed out that the reconciliation report stated that missing documents had prevented the reconciliation of almost 33,000 general ledger transactions totaling $2.4 billion (gross) and many of the leases selected for testing. They raised concerns about the assurance provided by the reported results, including the following: The large number of unreconciled transactions may have affected the validity of the reported reconciliation results. The methodology provided no assurance that all transactions were recorded in the general ledger. Because BIA had no comprehensive database for leases and no accounts receivable system, it had no way of determining the universe of leases or the amounts of lease revenue expected to be collected during a given period. The small judgmental sample of leases tested may not be representative of the universe of receipt transactions. The fill-the-gap procedures, which attempted to trace receipts from the general ledger to the leases or other land-use agreements, were not designed to find leases that were not already known to exist. Proposed adjustments that showed amounts owed by tribes on lease receipts may have resulted from overpayments by companies, which may have been corrected in subsequent periods that were not reviewed by the reconciliation contractor. OTFM’s Reconciliation Project Manager told tribal representatives that despite time and money constraints, the government had made a good-faith effort to reconcile the tribal accounts and that BIA had identified a low error rate for the transactions that could be reconciled. The Reconciliation Project Manager and contractor officials explained the following: BIA does not know the universe of leases, and the general ledger was the starting point for both the basic transaction reconciliations and the lease receipt testing. In some instances, the reconciliation contractor was able to verify lease receipts against lease documents and trace them to the general ledger. Sample test months were judgmentally selected and tested for about 10 percent of the total leases originally identified for testing. It was possible that for lease overpayments, subsequent adjustments were made that were not reviewed by the reconciliation contractor. Another area of concern to tribes was the investment analysis. This task included certain analytical procedures and interest yield analyses for investments in Treasury securities and other investments. Tribes expressed the following concerns: Invested funds may not have earned maximum interest. The yield analyses would not reflect unearned interest on uninvested amounts due to deposit lag times—the time that elapsed between BIA’s various offices’ receipt of lease revenues and the time the funds were invested.
The actual lag times could not be determined due to missing records, and the dates used in the lag time calculations could have been several days after the actual collection date. The 30-day category included lag times of over 30 days. Unearned interest resulting from deposit lag times could be significant. OTFM’s Reconciliation Project Manager provided the following clarifications: BIA invested funds in government securities or collateralized accounts, as required. The yield analysis did not reflect undeposited amounts due to lag times. Priorities were established for determining collection dates. The zero lag time category generally represented the actual collection dates. Although the 30-day category included lag times of over 30 days, tribes could, for the most part, calculate the interest related to lag times by using the information in their reconciliation reports.
Related GAO products include the following:
Indian Trust Fund Settlement Legislation (GAO/AIMD/OGC-95-237R, September 29, 1995).
Financial Management: Indian Trust Fund Accounts Cannot Be Fully Reconciled (GAO/T-AIMD-95-94, March 8, 1995).
Financial Management: Native American Trust Fund Management Reform Legislation (GAO/T-AIMD-94-174, August 11, 1994).
BIA Reconciliation Recommendations (GAO/AIMD-94-138R, June 10, 1994).
Financial Management: Status of BIA’s Efforts to Reconcile Indian Trust Fund Accounts and Implement Management Improvements (GAO/T-AIMD-94-99, April 12, 1994).
Financial Management: BIA’s Management of the Indian Trust Funds (GAO/T-AIMD-93-4, September 27, 1993).
Financial Management: Creation of Bureau of Indian Affairs’ Trust Fund Special Projects Team (GAO/AIMD-93-74, September 21, 1993).
Financial Management: Status of BIA’s Efforts to Resolve Long-Standing Trust Fund Management Problems (GAO/T-AFMD-93-8, June 22, 1993).
BIA Appropriation Language (on Tolling the Statute of Limitations on Certain Indian Claims) (GAO/AFMD-93-84R, June 4, 1993).
Financial Management: Status of BIA’s Efforts to Resolve Long-Standing Trust Fund Management Problems (GAO/T-AFMD-92-16, August 12, 1992).
Indian Issues: GAO’s Analysis of Land Ownership at 12 Reservations (GAO/T-RCED-92-75, July 2, 1992).
Financial Management: Problems Affecting BIA Trust Fund Financial Management (GAO/T-AFMD-92-12, July 2, 1992).
Financial Management: BIA Has Made Limited Progress in Reconciling Trust Accounts and Developing a Strategic Plan (GAO/AFMD-92-38, June 18, 1992).
Financial Management: BIA Has Made Limited Progress in Reconciling Indian Trust Fund Accounts and Developing a Strategic Plan (GAO/T-AFMD-92-6, April 2, 1992).
Indian Programs: Profile of Land Ownership at 12 Reservations (GAO/RCED-92-96BR, February 10, 1992).
BIA Reconciliation Monitoring (GAO/AFMD-92-36R, January 13, 1992).
Responses to Follow-up Questions Following the May 20, 1991 Oversight Hearing on BIA’s Trust Fund Financial Management (B-243843.2, June 5, 1991).
Bureau of Indian Affairs’ Efforts to Reconcile, Audit, and Manage the Indian Trust Funds (GAO/T-AFMD-91-6, May 20, 1991).
Bureau of Indian Affairs’ Efforts to Reconcile and Audit the Indian Trust Funds (GAO/T-AFMD-91-2, April 11, 1991).
Pursuant to a congressional request, GAO reviewed the Bureau of Indian Affairs' (BIA) efforts to reconcile and certify tribal trust fund accounts. GAO found that: (1) BIA has spent more than $21 million over 5 years in its attempts to reconcile and certify Indian trust fund accounts; (2) BIA and its reconciliation and certification contractors modified their contracts and procedures numerous times because of missing records, lack of an audit trail, and cost and time constraints; (3) in January 1996, BIA provided each tribe with a report package on the results of the reconciliation procedures performed; (4) BIA did not fully disclose in the report which procedures specified in the reconciliation contract were not performed, the scope limitations involved, or the changes in methodologies for procedures that were performed; (5) BIA issued reports to 112 tribes regarding their portions of multitribe judgment awards for claims against the federal government, but may not issue reports to all tribes involved because some are no longer recognized by the federal government and it may not be able to locate the tribes or their descendants; and (6) at a February 1996 meeting with BIA, tribal representatives expressed concerns about the adequacy of the reconciliation objectives and scope, the effect of missing documents on account balance accuracy, and the thoroughness of procedures for accuracy testing.
Different federal agencies have jurisdiction to control various types of nuclear-related exports: DOE regulates the export of nuclear technology through Part 810, which implements section 57(b) of the Atomic Energy Act, and both DOE and the Department of Justice (DOJ) have roles in enforcing Part 810. Four federal agencies share jurisdiction over nuclear-related exports, with different agencies regulating different types of such exports. DOE regulates exports of commercial nuclear technology and assistance through its National Nuclear Security Administration’s (NNSA) Office of Nonproliferation and International Security. The Nuclear Regulatory Commission (NRC) regulates exports of commercial nuclear materials and equipment through its Office of International Programs (OIP). Nuclear materials include certain enriched uranium, and nuclear equipment includes certain reactor components. The Department of Commerce (Commerce) regulates dual-use items—those that can be used for both commercial and military applications—and certain military items. Nuclear dual-use items include, among other things, turbines, generators, and machine tools. The Department of State (State) regulates munitions items and technologies—those designed, developed, configured, adapted, or modified solely for military applications. Table 1 provides additional detail on the U.S. nuclear export control regime. DOE regulates exports of commercial nuclear technology and assistance under section 57(b) of the Atomic Energy Act (AEA), which governs development or production of special nuclear material outside the United States. DOE implements section 57(b) through the regulations at 10 C.F.R. Part 810. Part 810 applies to commercial activities because nuclear reactors fueled with uranium also produce plutonium, a special nuclear material. Section 57(b) of the AEA requires establishment of orderly and expeditious procedures, to include, among other things, explicit direction on the handling of requests to engage or participate in development or production of special nuclear material outside of the United States and express deadlines for soliciting and collecting the views of the other agencies (with identified officials responsible for meeting such deadlines). Activities generally authorized under Part 810 do not require prior application to or notification of the Secretary of Energy, although companies must report certain information about such activities to DOE within 30 days. Applications for specific authorization, which must be approved by the Secretary of Energy, undergo a three-stage review process, as depicted in figure 1. In the first or “initial” review stage, NNSA prepares an analysis of each application. In the second or “interagency” review stage, NNSA provides the application to State for concurrence and to the NRC, Commerce, and the Department of Defense (DOD) for consultation. DOE’s target time frames for completion of the initial and interagency review stages are 30 days each. DOE does not have a target time frame for completion of the third or “final” review stage, in which NNSA and DOE staff conduct a final review of the application and make a recommendation to the Secretary, who then makes a determination as to whether the proposed export would be inimical to the national interest. DOE does, however, have an interim target within the final review stage for providing a recommendation to the Secretary.
Specifically, NNSA’s procedures for processing, reviewing, and approving specific authorizations state that the Secretary is to be provided with a recommendation no later than 30 days following receipt of the interagency concurrence or views, or 60 days in the event of interagency disagreements. Both DOE and DOJ have a role in the enforcement of Part 810. DOE may act to correct deficiencies in applications or processes, or obtain an injunction or restraining order to prevent violation of Part 810, and may refer suspected criminal violations to DOJ for investigation and possible prosecution under the AEA. Any person convicted of violating, conspiring to violate, or attempting to violate section 57 of the AEA, or of willfully falsifying, concealing, or covering up a material fact or making false, fictitious, or fraudulent statements or representations, may be fined, imprisoned, or both. Further, under Part 810, if a violation of the AEA is committed with intent to injure the United States or to aid a foreign nation, the penalty could be up to life imprisonment and a $20,000 fine. DOE has not yet determined whether it has legal authority to apply civil penalties for violation of Part 810. DOE monitors compliance with Part 810 in part through reports that exporters are required to submit on authorized activities. The United States has pledged to adhere to a set of guidelines that include export licensing regulations, enforcement procedures, and penalties for violations. These guidelines, developed by nuclear supplier countries (the Nuclear Suppliers Group, or NSG), aim to ensure that trade in civilian nuclear technologies does not contribute to nuclear proliferation. The NSG was established in 1975, and since 1978, it has published guidelines that cover transfers of nuclear and nuclear-related dual-use equipment, material, software, and related technology. These guidelines lay out principles for the members to apply, in accordance with their national requirements. All NSG members, including the United States, have pledged to put in place legal measures to ensure the effective implementation of the NSG guidelines. From 2008 through 2013, DOE consistently missed its 30-day targets to complete the initial and interagency stages of the Part 810 review process. Specifically, during this period, DOE missed the target to complete the initial stage of review for 80 of the 89 applications processed. Similarly, interagency review times missed DOE’s target for 85 of the applications. The third stage, for which DOE has not established a target time frame, had the longest median review times. (See fig. 2.) U.S. nuclear exporters said that the lengthy and unpredictable Part 810 time frames can impose business risks. We found that DOE missed its target to complete initial review for 90 percent (80 of 89) of the applications for specific authorization approved from 2008 through 2013 (no application was denied during this period). DOE took a median of 71 days to complete the initial review stage, with DOE’s longest initial review taking 1,035 days. In this stage, NNSA prepares an analysis of each application based on a set of technical and other reviews by experts in DOE’s National Laboratories, its Office of Nuclear Energy (NE) and Office of General Counsel, and NNSA’s Office of General Counsel.
NNSA considers eight factors, including whether the United States has an agreement for nuclear cooperation with the nation or group of nations involved; whether the country is a party to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT); and whether the country involved has entered into an agreement with the International Atomic Energy Agency (IAEA) for the application of safeguards on all its peaceful nuclear activities. For the two applications that we reviewed based on their initial review times—one that met the 30-day initial review target, and one that did not—the nature of the proposed export affected the initial review times. Specifically, an application for a U.S. company to provide assistance to the United Arab Emirates’ (UAE) nuclear regulatory body had a 29-day initial review stage (meeting the 30-day target) because, according to DOE officials, the details of the application matched those of another recently submitted application. As a result, DOE’s internal review of this application could leverage the work completed for the preceding application. The application that missed the target, taking 186 days for initial review, was for the export of mixed oxide (MOX) fuel fabrication technology to the United Kingdom. According to DOE officials, MOX is a sensitive technology, which requires greater scrutiny. Interagency review times missed DOE’s 30-day target for 85 of the 89 applications approved from 2008 through 2013. The interagency review stage was the second longest in the process, with a median review time of 105 days. Ten applications took more than a year for interagency review. As noted earlier, in this stage, DOE seeks concurrence from State and consults Commerce, NRC, and DOD. These agencies have 30 days to provide comments or concurrence, including any conditions they would place on the authorization. State took the longest among the agencies to provide its comments or concurrence. State’s median review time—86 days—was nearly three times DOE’s 30-day target. According to DOE and State officials, State’s concurrence times depend on, among other things, the responsiveness of the importing country in providing assurances of peaceful use and no re-export without U.S. government consent. For example, NNSA sent an application package for interagency review in April 2009, asking for responses within 30 days, for the export of a computer program to a Chinese university for teaching and research purposes. State concurred in January 2011, about 2 weeks after receiving the foreign government assurance and almost 2 years after receiving the letter from NNSA. Agency officials attributed the 645-day interagency review period to delays in obtaining assurances from the Chinese government. State officials, who obtain assurances through embassy staff, told us they have not established a time frame for the embassies to respond, but they noted that it is rare for embassy staff not to follow up on an assurance request expeditiously. Embassy staff, who receive instructions and background documents from State headquarters, often work to make sure that the facts listed in the request for assurances are correct and that they have current contact information for the importer, a key step in the assurance process. State officials recognized the need to streamline the process for obtaining assurances in countries with growing nuclear markets, such as China and the UAE.
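To illustrate the kind of timeliness analysis described above, the following sketch computes each application’s per-stage review time and flags those that missed DOE’s 30-day targets. It is written in Python; the milestone dates, field names, and records are hypothetical assumptions for illustration, not DOE data (the 29-day and 186-day initial reviews echo the two examples discussed above):

    from datetime import date
    from statistics import median

    TARGET_DAYS = 30  # DOE's target for the initial and interagency stages

    # Hypothetical milestone dates for three applications (illustrative only)
    applications = [
        {"id": "A-1", "received": date(2009, 1, 5), "initial_done": date(2009, 2, 3),
         "interagency_done": date(2009, 5, 20)},
        {"id": "A-2", "received": date(2010, 3, 1), "initial_done": date(2010, 9, 3),
         "interagency_done": date(2011, 1, 14)},
        {"id": "A-3", "received": date(2011, 7, 11), "initial_done": date(2011, 8, 9),
         "interagency_done": date(2011, 9, 8)},
    ]

    def stage_days(app):
        # Elapsed calendar days in the initial and interagency review stages
        initial = (app["initial_done"] - app["received"]).days
        interagency = (app["interagency_done"] - app["initial_done"]).days
        return initial, interagency

    initial_times, interagency_times = zip(*(stage_days(a) for a in applications))
    print("Median initial review:", median(initial_times), "days")
    print("Median interagency review:", median(interagency_times), "days")
    for app, days in zip(applications, initial_times):
        if days > TARGET_DAYS:
            print(app["id"], "missed the initial review target by", days - TARGET_DAYS, "days")

An e-licensing system that records these milestone dates would allow this kind of analysis, and monitoring against targets, to be run continuously rather than reconstructed after the fact.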
Of the 89 applications DOE approved from 2008 through 2013, 23 were for exports to the UAE—more than any other country—largely for U.S. persons to provide expertise to the UAE’s Emirates Nuclear Energy Corporation and nuclear regulatory body. In 2010, State developed generic assurances for Part 810 authorizations to the UAE, based on an agreed-upon template, so that the language would not need separate negotiation for each application. These generic assurances confirm that the transferred technology will be used exclusively for civil nuclear power activities and not for any nuclear explosive or other military purpose and that the technology will not be retransferred outside the UAE without prior U.S. consent. State officials said they would seek to streamline the assurance process in other countries where needed, based on growth in their nuclear industries, which drives the number of requests for assurances. Foreign government assurance times are not a factor in interagency review times in the cases of deemed exports—foreign nationals who access nuclear technology subject to Part 810 in the United States—because, in these cases, DOE requires U.S. employers of the foreign nationals to obtain written nonproliferation assurances from the employees rather than from the foreign government. However, the median interagency review time for such cases—46 days—still exceeded DOE’s target of 30 days. Notably, the longest interagency review, 810 days, was for a deemed export. The 46-day median interagency review time for deemed export applications was shorter than the 126-day median for other export applications. See figure 3 for interagency review times for deemed and all other exports. In some cases, foreign policy considerations affect interagency review times. For example, a U.S. government hold on civil nuclear cooperation with Russia following its 2008 military actions in Georgia accounted for a large part of the 840-day interagency review for an application to export nuclear fuel specifications to Russia. The application, submitted in January 2008, had reached the final review stage in August 2008, when NNSA held it in abeyance because of Russia’s actions. Following the signing of the New START Treaty in April 2010 and resubmission of the U.S.-Russia nuclear cooperation agreement to Congress in May 2010, nuclear cooperation with Russia resumed, and NNSA asked interagency reviewers to promptly resubmit their views on the 2008 application. Interagency review times varied among countries and within the same country. For example, among the three countries with the most Part 810 applications (excluding deemed exports)—the UAE, China, and Russia—interagency review times for exports to the UAE ranged from 27 to 344 days; review times for exports to China ranged from 46 to 749 days; and review times for exports to Russia ranged from 35 to 840 days. Our analysis found that the final review stage, for which DOE has not established comprehensive targets, had the longest median processing time—125 days—with seven applications taking more than a year for final review and approval. In the final stage, NNSA’s and DOE’s Offices of General Counsel and DOE’s Office of Nuclear Energy review the applications, and NNSA prepares a package of materials for the Secretary’s determination.
According to DOE’s procedures, the Secretary is to be provided with a recommendation no later than 30 days following receipt of the interagency concurrence or views (or 60 days in the event of interagency disagreements). The Secretary of Energy reviews the package to determine whether the activities covered by the Part 810 application will not be inimical to the interest of the United States. Under the AEA, the Secretary may not delegate the determination. We found variability in final-stage processing times across countries. For example, for the three countries with the most Part 810 applications (excluding deemed exports)—the UAE, China, and Russia—median completion times for final review ranged from 100 days for the UAE to 127 days for China. Final review times for applications for exports to the same country also varied. For example, the shortest final review time for an export to Russia was 35 days and the longest was 194 days; final review times for exports to the UAE ranged from 31 days to 197 days (excluding deemed exports). The longest final review took 921 days, for a deemed export. A variety of factors contributed to the duration of final review. For an application to transfer controlled technology to Indian nationals working at a U.S. nuclear facility, DOE’s Office of General Counsel’s concerns about the application package contributed to a final review time of 241 days. According to NNSA officials, this application was 1 of 10 delayed for this reason. Once DOE’s General Counsel completed its revisions, NNSA sent a memorandum to the Secretary recommending approval of the application, which the Secretary granted within 2 weeks. In the final review stage, conditions that agencies imposed as part of their concurrence may also affect review times. In one case, in August 2010, DOD placed restrictions on foreign nationals’ access to information and facilities as a condition of its concurrence. Pending reconsideration of these conditions, NNSA held this application in abeyance for 10 months, starting in December 2010, contributing to a final review stage of 634 days, out of a total processing time of 824 days for the application. The following October, DOD concurred with the application without conditions after reviewing the background of the foreign nationals and the DOE staff analysis that determined that transfer of the technology would be appropriate and would not pose a risk to the facility where they would be employed. NNSA has a 30-day target (60 days in the case of interagency disagreement) within the final review stage for providing a recommendation to the Secretary, but it does not track the dates on which it provides these recommendations. For the 10 applications for which we could determine the date that NNSA provided a recommendation to the Secretary, DOE exceeded the 30-day target for 9 applications. Of these, 2 were held in abeyance as described above. Table 2 compares processing times for each stage in the process and shows the shortest and longest reviews at each stage. DOE’s targets are not comprehensive, as DOE has not established targets for the entire third stage of the Part 810 process or for overall processing time. By comparison, NRC has established targets—which are part of its performance metrics—for processing export licenses. We have identified measurable, numerical targets as key attributes of successful performance measures.
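To illustrate what a comprehensive set of targets might look like, the sketch below (in Python) defines a target for every stage and for the overall process and checks an application’s stage times against them. Because DOE has established neither a final-stage nor an overall target, the figures shown for those are purely illustrative assumptions:

    # Hypothetical comprehensive target set, in days. DOE's actual targets cover
    # only the first two stages; the "final" and "overall" figures are assumptions.
    TARGETS = {"initial": 30, "interagency": 30, "final": 30, "overall": 90}

    def check_targets(stage_times):
        # Return each stage of one application that exceeded its target,
        # as {stage: (actual_days, target_days)}.
        times = dict(stage_times)
        times["overall"] = sum(stage_times.values())
        return {stage: (times[stage], TARGETS[stage])
                for stage in TARGETS if times[stage] > TARGETS[stage]}

    # Example using the median stage times reported above (71, 105, and 125 days)
    print(check_targets({"initial": 71, "interagency": 105, "final": 125}))

Applied to the median times reported above, every stage and the overall process would exceed these illustrative targets.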
Furthermore, the rate at which DOE has missed its targets calls into question whether these targets are realistic and achievable. According to a 2007 executive order on improving government program performance, program goals should be sufficiently aggressive but realistic in light of the authority and resources assigned. Without measurable and realistic targets, DOE cannot determine whether its Part 810 process is meeting its goal of efficient regulation, which includes timeliness. Realistic targets could also further DOE’s goal of efficient regulation—another aspect of which is predictability—by giving exporters a sense of how long the application process may take. According to some nuclear exporters, the lengthy and unpredictable specific authorization process affects the competitiveness and hiring practices of their companies and universities. One company, in its comments on DOE’s proposed changes to Part 810, noted that the Part 810 process is unpredictable and that predictability is important for business planning. An industry organization representative we spoke to also emphasized the importance of predictability, stating that nuclear companies understand that nuclear matters may take a long time but that it is important to know how long things may take. DOE, in its preamble to the proposed changes to Part 810, acknowledged nuclear exporters’ concerns that the time frame for processing specific authorizations can impose business risks for companies. In comments on DOE’s proposed changes to Part 810, the Nuclear Energy Institute (NEI), an industry group that represents hundreds of nuclear companies, wrote in November 2013 that the specific authorization process was a cause of delay and uncertainty, and a distinct disadvantage, for U.S. exporters. Representatives of one U.S. nuclear exporter told us that Chinese clients had advised the company against submitting a bid if it would require a specific authorization. According to a representative from a second company, the delays in obtaining a Part 810 authorization inhibit the demonstration and deployment of reactor technology. According to industry representatives and university officers, Part 810 processing times may also delay or restrict the work or study of foreign nationals in the United States. The Ad-Hoc Utilities Group, another industry group, described “two equally unsatisfactory alternatives,” in which companies can either (1) delay hiring foreign nationals or (2) hire them but limit the scope of their work functions until the authorization is approved. For example, a representative from a nuclear company told us that an engineer from India employed at a U.S. nuclear plant was unable to carry out the full scope of duties without a specific authorization, which took 14 months to process. The engineer left the job before the authorization was granted. According to the Ad-Hoc Utilities Group, it is impractical for nuclear power operators to offer a foreign national a job that depends on a specific authorization that can take a year to obtain. The group added that Part 810 hinders utilities from hiring qualified foreign employees for positions that require access to certain nuclear-related materials. As a result, the group wrote, Part 810 can deter the hiring of workers who can safely operate nuclear power plants. In addition, a university officer whose institute offers a nuclear science and engineering program told us that Part 810 imposes a barrier for U.S.
universities in recruiting faculty and students that the universities’ foreign competitors do not face. DOE has begun efforts to reduce processing times of Part 810 applications. For example, NNSA officials said they plan to build an e-licensing system for the Part 810 process and are finalizing the details regarding the functionality of such a system. The e-licensing system would track applications as they proceed through the authorization process, allowing NNSA to monitor its performance in processing them. NNSA officials said that the e-licensing system would improve predictability by allowing applicants to track their applications throughout the process, including the interagency review. NNSA is also working to become compliant with ISO 9001, a quality management standard issued by the International Organization for Standardization (ISO), and the Part 810 process is part of this initiative. NNSA officials told us that the agency has completed the initial interview phase of the ISO certification process, as well as the Lean Six Sigma process, but NNSA’s time frame for becoming compliant with the ISO 9001 standard is unclear. Part 810 is unclear with regard to the activities it covers, among other things. DOE has not provided written guidance to help exporters interpret the scope of the regulation; instead, DOE encourages exporters to inquire with DOE officials for interpretation. DOE cannot reasonably assure that its responses to inquiries are consistent, however, because DOE officials do not routinely document these inquiries or DOE’s responses. DOE has taken steps to clarify the regulation and is planning to develop guidance. Part 810 is unclear with regard to the scope of activities covered and application requirements. For example, key definitions do not make it clear which activities are subject to the regulation. This affects, among other things, how companies conduct marketing activities related to nuclear reactors. Two executive orders identify clarity and consistency among the key principles of federal regulation. We identified the following three areas in which the regulation lacks clarity: Key definitions in Part 810 are broad. The regulation’s definition of “nuclear reactor” does not distinguish among reactor components based on their relative sensitivities. Representatives of nuclear exporters have said that the regulation’s definition of “nuclear reactor”—as “an apparatus, other than a nuclear explosive device, designed or used to sustain nuclear fission in a self-supporting chain reaction”—is overly broad and could be interpreted to encompass a wide variety of technologies unrelated to the production of special nuclear material. For example, NEI noted in its comments on DOE’s proposed revisions to Part 810 that nuclear reactors, under DOE’s definition, contain thousands of components and systems, only some of which, such as the reactor pressure vessel, relate to the production of special nuclear material. The group raised concerns that absent a clearer definition of the technologies covered within the scope of “nuclear reactor,” companies would be forced to seek time-consuming advisory opinions for each item in a nuclear power plant. By contrast, NRC’s export control regulations provide an illustrative list of covered nuclear reactor components, and representatives of exporters suggested in their public comments that DOE compile a similar list. Part 810 does not explicitly address sales or marketing.
The regulation does not contain provisions that specifically address marketing and does not clearly delineate the types of marketing information that may require a general or specific authorization. A representative of an association for nuclear companies told us that this has created confusion and that exporters determine whether sales and marketing information is covered based on whether the information is public or proprietary. However, exporters noted in their public comments on DOE’s proposed revisions to Part 810 that marketing activities may entail the transfer of general design or price information that is proprietary but not sufficiently detailed to assist in production of special nuclear material. Nonetheless, such information may fall under the jurisdiction of Part 810 because it is not “public information,” which is generally authorized for transfer (and would be exempt under the proposed rule). A company representative told us that absent greater clarity, companies are limited in marketing a design and advancing a contract because customers request detailed information—which may be proprietary—to understand how much they would be willing to pay for a product. Representatives of another company told us that it took 2 years to get the specific authorization to disclose the information needed for a marketing activity. While DOE has proposed to adjust its definitions related to public information, it has declined to specify what marketing activities may be exempt from authorization—stating that the regulation’s applicability depends on the data transferred rather than the activity—and has instead noted in its Supplemental Notice of Proposed Rulemaking that companies can seek guidance from the department on a case-by-case basis. The regulation does not clearly specify the information and documents that applicants are required to submit. DOE’s review of Part 810 applications may be prolonged when applicants do not submit all of the required information, which can occur because the regulation does not make clear what information is required. For example, a university export control officer told us that she applied for a specific authorization for a course being developed on operating a nuclear power plant. More than 5 weeks after the application was submitted, a DOE official requested additional information—specifically, résumés for the foreign nationals involved. The university officer said that, unlike with other export control regimes, DOE does not provide guidance on the application materials necessary for Part 810 and that she would have included the résumés at the beginning of the process if she had known she needed to provide them. Section 57(b) of the AEA states that, to the extent practicable, an applicant should be advised of all the information required at the beginning of the process. NNSA officials told us that Part 810 contains the application requirements; however, the regulation does not list, for example, résumés among the requirements. The university officer said she submitted the additional information to DOE, and DOE officials informed her 4 weeks later that the activity would not require specific authorization. DOE does not provide supplemental guidance to help exporters interpret the scope and requirements of Part 810. According to an OMB bulletin, agencies increasingly have relied on guidance documents to inform the public and to provide direction to their staffs as the scope and complexity of regulatory programs have grown.
According to this bulletin, guidance documents, used properly, can channel the discretion of agency employees, increase efficiency, and enhance fairness by providing the public clear notice of the line between permissible and impermissible conduct, while ensuring equal treatment of similarly situated parties. We found that the other agencies that regulate civilian nuclear exports—NRC and Commerce—do provide written guidance, such as frequently-asked-questions documents, to clarify their interpretation of the regulations for exporters. Nuclear exporters said that such guidance would be helpful for Part 810. According to one company, a more comprehensive explanation of activities that require specific authorization would afford U.S. businesses the opportunity to adequately plan for international commitments. DOE’s practice is to provide guidance on a case-by-case basis on its interpretations of various Part 810 provisions, but this guidance is provided only to the specific party and is not made public. One company wrote in its public comments on DOE’s proposed revisions to Part 810 that, rather than requiring companies to obtain advisory opinions with respect to proposed activities, DOE and the nuclear industry would benefit from DOE’s establishment of clearer boundaries for the applicability of Part 810. University export officers also said that DOE declined their request to clarify which types of university activities would require authorization, but encouraged the officers to inquire or apply so that DOE could make a case-by-case determination. These officers raised concerns that the lack of clarity in Part 810, together with the uncertainty about DOE’s decisions regarding what activities require authorization, may restrict scientific communication by creating confusion about what universities may share openly. Without established written guidance, exporters uncertain about the scope of Part 810 must inquire with DOE for interpretation. Under Part 810, potential applicants may request advice on, among other things, whether a proposed activity falls within the scope of the regulations or requires specific authorization. According to DOE’s Part 810 procedures, DOE receives numerous inquiries from U.S. persons and firms regarding activities that may fall under the scope of the Part 810 regulations. NNSA officials told us they receive approximately two inquiries in the form of letters and more than 10 inquiries by phone each week. According to DOE’s Part 810 procedures, most of these inquiries are requests for interpretation of the regulation or requests for review of proposed financial ventures with foreign entities. These inquiries require input from a wide range of experts inside DOE and elsewhere, whose views are consolidated into informal written or oral guidance or formal correspondence. DOE’s responses to these inquiries are significant because, as DOE acknowledges, the specific authorization process can be protracted, and its approval time frames can impose business risks for U.S. companies. Several representatives of nuclear exporters told us that DOE responded promptly to inquiries, but that the need to consult DOE to clarify the scope and applicability of the regulation contributed to a process that was too dependent on individual interpretation.
For example, one company representative told us that there was no way of knowing whether other companies were getting the same response—with regard to what type of authorization would be needed—for the same set of circumstances. A university export officer said that a definition provided by DOE in the course of an inquiry appeared to be “made up on the spot.” Another company representative suggested that a potential applicant could get a different answer depending on which official at DOE takes the call, based on an individual interpretation with no basis in the regulation. This representative said that DOE’s inquiry system provides companies with an incentive to proceed with the activity in question without consulting DOE. Specifically, the representative noted that an inquiry could lead to a response that the transaction could not proceed without waiting for an authorization. If the company proceeds without inquiring, however, and DOE later determines that the transaction required authorization, this representative believes that the company would be able to defend itself against any enforcement action because DOE would not be able to point to the specific regulatory language on which it based its determination. DOE officials do not consistently document inquiries or their responses and therefore cannot analyze them for consistency or use them to identify parts of the regulation that may need clarification. Part 810 does not require exporters to submit inquiries about the scope of the regulation in writing or electronically, or DOE to respond to them in those forms. However, DOE’s internal procedures state that DOE is to maintain a database that includes a listing of and files for all inquiries, and other export control agencies, such as State and Commerce, do require written or electronic submissions and responses for inquiries regarding jurisdiction. DOE officials said that they do not document all inquiries or responses because some inquiries are vague and DOE’s responses are predecisional. However, as noted earlier, DOE’s responses to these inquiries are significant because of the time frames of the specific authorization process; they determine whether an activity is subject to the regulation and whether an exporter has to engage in the time-consuming authorization process. Documenting all inquiries and responses would provide DOE with the information needed to reasonably assure that the agency’s responses are consistent under similar circumstances and to identify aspects of the regulation that may need clarification. Under the federal standards for internal control, agencies are to accurately record and appropriately document transactions. Documentation of transactions is also important because gaps can develop in an organization’s institutional knowledge and leadership as experienced employees leave. Some nuclear exporters expressed concerns in this regard, stating that, while the staff that currently implements Part 810 is competent and helpful, the system should not rely on individuals, and that a change of staff could make the process more difficult. DOE has taken steps to clarify Part 810, recognizing in its Supplemental Notice of Proposed Rulemaking that the scope of activities regulated under Part 810 could be clearer.
For example, DOE is proposing to define some key terms, such as “technical assistance,” and to refine its definitions of other terms, for example by replacing its prior definition of “public information” with definitions of “publicly available information” and “publicly available technology,” so that potential applicants would have a clearer description of the activities and technology subject to Part 810. However, DOE’s proposed rule does not clarify the scope of the regulation by refining the definitions of other broad terms, such as “nuclear reactor,” or by providing an illustrative list of reactor components, nor does it more clearly delineate the sales and marketing activities subject to Part 810. DOE officials have said that they plan to develop guidance once the proposed changes to the regulation are finalized, but the proposed changes are an ongoing effort whose time frame and eventual impact are unclear. DOE has taken limited actions to enforce its export controls for nuclear technology, assistance, and services, even though DOE must enforce Part 810 to achieve one of its goals for the regulation—effective threat reduction by mitigating the risk of proliferation. One way that DOE seeks to mitigate this risk is through conditions included in Part 810 specific authorizations; most authorizations are subject to common sets of conditions. DOE’s primary method for monitoring compliance with the conditions is for NNSA officials to read required reports from exporters and, in some cases, to conduct a more in-depth analysis of the reports. However, NNSA officials report that they typically conduct an in-depth analysis for compliance with the authorizations on less than 10 percent of the reports, and they do not have risk-based procedures for prioritizing which reports to analyze. DOE does not provide guidance for companies to self-identify and self-report violations. DOE has not determined whether it has the authority to impose civil penalties for violations of Part 810 and has not referred any potential violations to DOJ for investigation or criminal prosecution within the last 6 years, the period covered by our review. On the basis of our analysis of all 89 specific authorizations approved between 2008 and 2013, we identified two common sets of conditions—one for deemed exports, another for all other exports—that DOE imposes on specific authorizations. These conditions are enumerated in “Secretarial Determinations”—the authorization letters signed by the Secretary that state a determination that the proposed export is not inimical to the interest of the United States, as long as the conditions are met. The conditions on each authorization reflect the actions that DOE, State, Commerce, NRC, and DOD judge sufficient to mitigate the risk of proliferation in a given circumstance and to result in an export that benefits U.S. interests. The common set of conditions on specific authorizations for deemed exports—in this case, foreign nationals who access nuclear technology in the United States—includes five conditions that appear in nearly all of the 18 authorizations for such exports (see table 3).
These conditions require the company or other applicant seeking the authorization to (1) ensure that the foreign national maintains a current passport and work visa, (2) notify DOE promptly upon termination of or change in immigration status for the foreign national, (3) submit to DOE for prior approval changes in the foreign national’s work duties, (4) report annually to DOE on activities pursued by each foreign national covered by the authorization, and (5) obtain a signed nonproliferation or nondisclosure statement from the foreign national. In addition, other conditions have been imposed less frequently; for example, DOE imposed conditions on some specific authorizations involving transfers of certain technologies related to reactor operations to certain foreign nationals in the United States. These conditions state that the individuals cannot have access to sensitive nuclear technology or software programming language (see app. II). The common set of conditions for specific authorizations (other than deemed exports) includes four conditions that appear in nearly all of the 72 authorizations for these types of exports (see table 4). The first two conditions—a requirement to use the technology for peaceful (nonmilitary and non-nuclear-weapons) purposes and a requirement to obtain permission before re-exporting the technology to a country other than the United States—are the responsibility of the importer and the importing country’s government to implement, and they are known collectively as “foreign government assurances.” The other two conditions are the responsibility of the exporter. These include requirements to (1) report to DOE on the activities conducted under the authorization on a quarterly, semiannual, or annual basis and (2) submit for prior DOE approval the names of any companies or individuals, beyond those listed in the original application, to which the exporter proposes transferring the technology. Other conditions have been imposed less frequently; for example, about 20 percent of the authorizations (13 of the 72) contain a condition that requires the importer and the importing country to take all measures necessary to maintain adequate protection of the technology and, in some cases, also to ensure adequate physical protection of any items derived from it (see app. II). NNSA officials, who implement Part 810, draw on various information sources to monitor compliance with the conditions on authorizations. NNSA officials said their primary source of information is the reports submitted by exporters. These reports are required by the conditions on the specific authorizations, as described above, as well as by Part 810, which contains reporting requirements for all specifically authorized exports and certain generally authorized exports. Exporters who are required to report on generally authorized activities must do so no more than 30 days after the activities begin. According to an NNSA official, some generally authorized activities would trigger frequent reports, so NNSA negotiates a filing frequency for the exporters to report all of their generally authorized activities on a consolidated basis instead of requiring reports for each activity.
For specific authorizations, exporters must also report no more than 30 days after they initiate activities, and, depending on the conditions contained in their authorization, they are also required to submit ongoing reports that detail the activities conducted under the authorization on a quarterly, semiannual, or annual basis. NNSA officials stated that they read and categorize each report as it is received. If they decide that a particular report merits further attention, they conduct follow-up analysis, which includes checking that the activities and individuals listed are consistent with the application and the authorization. However, NNSA officials stated that they do not conduct such an analysis for every report to determine compliance or to identify trends; the officials estimate that they currently conduct follow-up analysis on less than 10 percent of reports. They also stated that they do not have procedures for determining which reports merit in-depth analysis and that their current practice is to decide on a case-by-case basis according to the type of technology and parties involved. As a result, NNSA may be missing important information that could lead to identification of violations and provide a fuller understanding of the degree of compliance with Part 810. We requested information on the number of reports NNSA received from 2008 through 2013 for generally and specifically authorized exports. For generally authorized exports, NNSA officials stated that they had a gap in their data that prevented them from providing complete information for these 6 years, but according to their data, they received at least 50 reports per year from 2009 through 2013. For specifically authorized exports, NNSA officials stated that providing information on the number of reports received would be challenging. NNSA officials said that their report analysis process is not as systematic as they would like, but noted that they do not have the staff to analyze the reports more thoroughly. According to NNSA officials, they employ two people who work full-time on Part 810 authorizations, as well as six people who work on the authorizations as part of their broader responsibilities. Officials at the national laboratories also assist with reviewing reports, based on the end user and the type of technology being transferred, according to NNSA officials. Staffing levels in the NNSA office that processes these authorizations and reviews the reports have remained constant over the last 6 years, but the number of specific authorizations granted each year has increased (see fig. 4). An NNSA official noted that the office is looking into changes that could be made to the report analysis process to facilitate monitoring for compliance, such as linking the reports to the authorizations in the proposed e-licensing system. NNSA officials said they have other sources of information for monitoring compliance with Part 810 authorizations, including the national laboratories, trade publications, and newsletters from a variety of sources, as well as the companies themselves (NNSA periodically asks companies for briefings). They stated that they also receive support from the intelligence community, including DOE’s Office of Intelligence and Counterintelligence. In addition, according to a State official, U.S. embassies play a role in monitoring the extent to which the importing country and company or other entity, as well as the exporter, are complying with the conditions associated with Part 810 authorizations.
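As noted above, NNSA lacks a procedure for determining which exporters’ reports merit in-depth analysis. One form such a risk-based selection procedure could take is sketched below in Python. The risk factors, weights, and report fields are illustrative assumptions, not NNSA criteria or data:

    # Sketch of risk-based triage for exporters' reports. The factors and weights
    # below are illustrative assumptions, not NNSA criteria.
    RISK_WEIGHTS = {
        "sensitive_technology": 5,    # e.g., enrichment- or MOX-related transfers
        "new_end_user": 3,            # parties not named in the original application
        "destination_of_concern": 4,  # based on current nonproliferation guidance
        "late_filing": 2,             # report filed after the required deadline
    }

    def risk_score(report):
        # Sum the weights of the risk factors flagged on a report
        return sum(weight for factor, weight in RISK_WEIGHTS.items()
                   if report.get(factor))

    def select_for_review(reports, capacity):
        # Pick the highest-scoring reports up to available analyst capacity
        ranked = sorted(reports, key=risk_score, reverse=True)
        return ranked[:capacity]

    # Hypothetical reports; only the flagged factors affect the score
    reports = [
        {"id": "R-101", "sensitive_technology": True, "new_end_user": True},
        {"id": "R-102", "late_filing": True},
        {"id": "R-103", "destination_of_concern": True, "late_filing": True},
    ]
    for r in select_for_review(reports, capacity=2):
        print(r["id"], "score:", risk_score(r))

A documented scoring rule of this kind would also create a record of why each report was or was not selected for in-depth analysis, supporting a consistency that a purely case-by-case approach cannot demonstrate.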
DOE has not determined whether it has legal authority to impose civil penalties for violations of Part 810 and does not provide guidance for companies to self-identify and self-report possible violations. Part 810 contains a statement about the actions that DOE can take to prevent violations under the authority of the AEA (temporary injunctions and restraining orders) and a description of penalties for criminal violations. However, Part 810 does not indicate that DOE can impose civil penalties for violations, and DOE officials told us that the issue of whether the department has the authority to impose civil penalties was “unsettled.” We have previously found that civil penalties are an important element of regulatory enforcement, allowing agencies to punish violators appropriately and serving as a deterrent to future violations. Without a clear position on whether DOE has authority to impose such penalties for violations of Part 810, DOE may not have access to a tool for enforcing its export controls. Furthermore, neither the regulation nor, according to DOE officials, any other DOE source provides exporters with guidance on enforcement of Part 810, such as a voluntary disclosure policy, internal compliance guidelines, or an enforcement manual. In contrast to DOE, other government agencies that regulate nuclear or nuclear-related exports have established procedures, as well as policies and guidelines, on enforcement of their export controls. As discussed below, NRC, State, and Commerce provide a variety of resources for companies to understand the enforcement policies for their respective export control regimes and to provide incentives for companies to recognize and address violations. These resources are publicly available on the agencies’ websites. In addition, information on civil and criminal enforcement is stated in the regulations governing their respective export control regimes. NRC has an enforcement policy and enforcement manual. NRC has a publicly available enforcement policy document that lays out the general principles governing its enforcement efforts and information on the process it uses to deal with violations. NRC also has an enforcement manual that contains specific processes and guidance for implementing the enforcement policy. The stated goals of NRC’s enforcement policy are to (1) deter noncompliance by emphasizing the importance of compliance with regulations and other NRC requirements and (2) encourage prompt identification and prompt comprehensive correction of violations. The policy clearly describes the factors that NRC takes into consideration when assessing the significance of a violation and describes how prompt self-identification of violations can decrease consequences for violators. In addition, NRC publishes on its website Notices of Violation, which can serve as examples of how violations are assessed and fines are determined. Its website also contains the Part 110 regulations, which describe, among other things, the civil penalties and the procedures through which they would be applied in the case of violations. State’s website contains compliance resources, including guidelines for comprehensive compliance programs. State’s Directorate of Defense Trade Controls (DDTC) maintains a website with a variety of compliance-related resources and documents for exporters, including a list of significant export control enforcement cases.
The site contains the International Traffic in Arms Regulations (ITAR), of which Parts 127 (Violations and Penalties) and 128 (Administrative Procedures) lay out State’s enforcement policies, including its voluntary disclosure policy, which aims to strongly encourage self-disclosure of violations by noting that such disclosures may be considered mitigating factors in determining penalties. The site also provides guidelines that exporters can use to create comprehensive operational compliance programs. The guidelines do not promote a certain type of program; instead, they list the important elements of effective programs, including organizational structure; corporate commitment and policy; identification, receipt, and tracking of controlled items and technical data; re-exports; internal monitoring; and training, among other elements. Commerce’s website provides a variety of compliance and enforcement information. Commerce’s Bureau of Industry and Security (BIS) has an Office of Export Enforcement (OEE) that works with companies to prevent export control violations and is responsible for enforcement actions in response to such violations. OEE’s website contains, among other things, information on compliance, penalties, and voluntary self-disclosures, including voluntary self-disclosure cases. The BIS website contains the Export Administration Regulations, which govern the export of dual-use items and certain military items. Part 764, “Enforcement and Protective Measures,” provides readers with information on enforcement, including voluntary self-disclosure and civil penalties, and Part 766, “Administrative Enforcement Proceedings,” describes the administrative enforcement process and includes guidance on how BIS makes penalty determinations. While DOE’s export controls and their regulatory basis may differ in some respects from those administered by NRC, State, and Commerce, these other agencies provide information to companies and individuals to help them understand how to comply with their rules and the consequences of violating those rules. Several exporters told us that other agencies provide guidance that is more comprehensive. By not establishing policies or creating guidance that encourages companies to create strong compliance programs and to self-identify and self-report violations, DOE is missing an opportunity to leverage exporters’ potential to play a greater role in monitoring their own compliance. Neither DOE nor DOJ has taken formal actions—such as revoking an authorization or prosecuting an exporter—to enforce Part 810 within the last 6 years, even though there have been violations of Part 810 within this period. Between 2008 and 2013, NNSA received at least 11 notices of voluntary disclosures of violations of the Part 810 regulations, mostly related to deemed exports to India or China. However, according to an NNSA official, any time NNSA knows of a violation of the Part 810 regulations, NNSA tries to deal with it internally, generally meeting with the company to discuss the issue. This official reported that NNSA has not identified any willful violations of Part 810 and, consequently, has not referred any potential criminal violations to DOJ for investigation or prosecution. According to DOE and NNSA officials, NNSA has never taken any formal action, such as revoking an authorization, against companies that have violated Part 810. DOE’s internal procedures for administering Part 810 contain no information on DOE enforcement of the regulation.
DOJ, which is charged with investigating and prosecuting suspected criminal violations, reported that it is not aware of any cases charged under the AEA in the last 6 years that were related to Part 810 violations. The renewed interest in nuclear power worldwide could provide increased opportunities for U.S. companies. The highly competitive global nuclear market underscores the importance of an efficient authorization process for U.S. nuclear technology exports. DOE has stated that its goals for the Part 810 process are efficient regulation (defined by an efficient, timely, transparent, and predictable process); effective nuclear trade support; and effective threat reduction by better addressing proliferation challenges. DOE and NNSA have taken steps toward a more efficient regulatory process, including developing an e-licensing system. However, DOE and NNSA’s current implementation of Part 810 raises questions as to whether the agencies are administering the process in accordance with DOE’s goals and with key principles of federal regulation, which include clarity and consistency. DOE rarely meets its existing target time frames for processing Part 810 applications, which calls into question whether these targets are realistic and achievable in light of its resources and authorities. Furthermore, DOE has not established target time frames for obtaining the Secretary’s determination in the third stage of the process, or for the overall Part 810 authorization process. Without realistic and achievable targets for the entire Part 810 process, DOE cannot provide U.S. nuclear exporters with a timely and predictable regulatory process, which could impair their competitiveness. DOE has taken steps to clarify the scope of Part 810, but DOE officials plan to continue to rely on a case-by-case inquiry process. DOE currently does not document all inquiries, contrary to agency procedures. Without a documented inquiry process, DOE does not have the information it needs to provide reasonable assurance that its responses are consistent, and DOE officials are not documenting information that could identify parts of the regulation that may need clarification. DOE must enforce Part 810 to achieve one of its goals for the regulation—effective threat reduction by mitigating the risk of proliferation. However, DOE may be missing opportunities to enforce its nuclear export controls. Civil penalties are an important element of regulatory enforcement, but DOE has not determined whether it has the legal authority to impose civil penalties for violations of Part 810. In addition, NNSA does not conduct in-depth analysis on all reports from exporters on activities authorized under Part 810 and does not have a risk-based procedure for prioritizing which reports to analyze. As a result, NNSA may be missing important information that could lead to identification of violations and allow the agency to take enforcement actions when warranted. Moreover, unlike other agencies that administer nuclear-related export controls, DOE does not have policies or guidance for exporters about self-identifying, self-reporting, and correcting possible violations. Consequently, DOE is missing an opportunity to encourage exporters to recognize and address violations. We are making six recommendations to improve the administration of 10 C.F.R. Part 810.
To better align the Part 810 process with its stated goal of efficient regulation, we recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration, take the following two actions: Review existing targets for processing Part 810 applications and determine the extent to which they align with DOE’s resources and authorities. Based on the results of this review, establish realistic and achievable targets for each stage of the Part 810 process, including the third stage, as well as the overall process. As DOE moves forward with the e-licensing system, integrate these targets into the system to monitor agency performance against them, to ensure that the targets remain realistic and achievable, and to improve predictability for exporters. To promote clarity and consistency in administering Part 810, we recommend that the Administrator of the National Nuclear Security Administration ensure that all inquiries about the scope of Part 810, together with NNSA’s responses to these inquiries, are documented, in accordance with existing DOE procedures. To facilitate enforcement of Part 810 and encourage compliance, we recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration, take the following three actions: Determine whether DOE has legal authority to impose civil penalties for violations of the regulation and develop procedures accordingly. Develop a risk-based procedure for selecting exporters’ reports on authorized activities for in-depth analysis. Assess the need to establish and articulate export compliance policies that encourage and reward exporters who self-identify, self-report, and correct violations, and provide guidance to exporters on such policies. We provided a draft of this report to DOE, NRC, State, Commerce, DOD, and DOJ for review and comment. NNSA provided written comments for DOE, which are presented in appendix III. In addition, NNSA, NRC, State, Commerce, and DOJ provided technical comments that we incorporated, as appropriate. In its written comments, NNSA agreed with all six of our recommendations and noted several actions and initiatives it is planning or undertaking to implement them. For example, NNSA stated that as part of its ongoing process improvements, the agency is working to identify gaps, overlaps, and inefficiencies in the Part 810 authorization process and will establish new, achievable targets for each stage of the Part 810 process. Among other things, NNSA also stated that it plans to consult with other regulatory agencies, such as NRC, to determine what risk-based procedures those agencies have for analyzing reports on authorized activities and whether they could be modified to work for Part 810 reports. NNSA also stated that it would consult with regulatory agencies such as NRC and Commerce to determine what export compliance policies they have for encouraging and rewarding self-disclosure and whether they could be modified for Part 810 self-reporting. NNSA also provided general comments on some of our findings. For example, NNSA stated that the draft report frequently draws comparisons between DOE’s Part 810 process and other agencies’ export control regimes. NNSA stated that, unlike the other regimes, DOE’s export authorization process involves other agencies and diplomatic engagements with foreign governments, whose responsiveness the U.S. government cannot control.
We note that our analysis considered relevant differences in the export control regimes. As noted above, NNSA concurred with our recommendations and stated that it would consider whether the processes of these agencies could be adapted for Part 810. NNSA also stated that the ability to devise “creative solutions” for unique or new situations remains an important aspect of the Part 810 authorization process, and that consistent guidance is inapplicable in light of such situations. However, as noted in the report, DOE must reasonably assure that its interpretation of Part 810 is consistent in responding to wide-ranging questions from exporters. In addition, NNSA stated that the Department clearly took seriously the recommendations from our report, Nuclear Commerce: Governmentwide Strategy Could Help Increase Commercial Benefits from U.S. Nuclear Cooperation Agreements with Other Countries (GAO-11-36), as evidenced by the current rulemaking, process improvements, and the creation of an e-810 system. We noted the actions DOE took in response to these recommendations in the current report. However, because the rulemaking and process improvements were ongoing at the time of our audit, we could not evaluate the extent to which these initiatives will address the findings and recommendations in this report. NNSA said that our draft report stated that DOE had not proposed revising its inquiry process, but noted that its initiatives will address the inquiry process and that the inquiries we referred to are exploratory and informal. We clarified the language in the report to address NNSA’s comment. However, as we say in the report, several exporters whom we interviewed expressed concern about the consistency of the responses DOE was providing to their inquiries. We could not evaluate whether DOE’s responses to inquiries were consistent because DOE does not document all inquiries. Without such documentation, DOE cannot reasonably assure that the interpretations offered in response to these inquiries are consistent. Finally, in its written comments, NNSA stated that it is true that it has not referred any suspected Part 810 violations to the Department of Justice for criminal investigation or revoked any authorizations for cause, but that it has not received reports of illicit technology transfers or seen evidence of violations of Part 810 authorization restrictions. We recommend in this report, as a step in strengthening export controls through Part 810, that DOE take a risk-based approach to selecting exporters’ reports for in-depth analysis and assess the need for guidance and incentives for exporters to self-identify, self-report, and correct possible violations. NNSA agreed with these recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Administrator of the National Nuclear Security Administration, the Secretary of State, the Chairman of the Nuclear Regulatory Commission, the Secretary of Defense, the Secretary of Commerce, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact David C. Trimble at (202) 512-3841 or trimbled@gao.gov or Thomas Melito at (202) 512-9601 or melitot@gao.gov.
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In this report, we examine (1) Part 810 processing times, compared with the Department of Energy’s (DOE) targets, over the last 6 years; (2) the extent to which Part 810 is clear and DOE can reasonably assure consistent interpretation; and (3) the extent to which DOE enforces Part 810. To examine DOE’s processing times for Part 810 applications over the last 6 years compared with its own targets, we reviewed DOE’s 10 C.F.R. Part 810 Assistance to Foreign Atomic Energy Activities Part 810 Program Elements and the National Nuclear Security Administration’s (NNSA) procedures for processing, reviewing, and approving specific authorizations to determine DOE’s internal targets. We analyzed DOE data on the processing times for the 89 specific authorizations granted from 2008 through 2013. For each authorization, the analysis included a calculation of the number of days between the date of each application for authorization and the date of the Secretary’s determination, which encompasses the entire Part 810 process. We also calculated the number of days spent in each of the three stages of the process—initial review, interagency review, and final review. We calculated the duration of the initial review stage based on the number of days between the date on the application and the date NNSA forwarded the application to the interagency. This date marked the beginning of the interagency stage, which ended when NNSA received the last interagency concurrence with the application package. The final review stage started from the date of the last interagency concurrence and ended on the date of the Secretary of Energy’s determination. To ensure the analysis was as accurate as possible, we reviewed the data, identified irregularities, and contacted NNSA officials to clarify and correct those irregularities. For example, when we noticed an application in which the Department of Commerce’s concurrence date preceded the date NNSA submitted the application to the interagency, we notified NNSA officials of this inconsistency, and they provided us with the correct date. Moreover, we interviewed the NNSA officials who collected and recorded the data about the procedures they follow to ensure the data are accurate, complete, and reliable. On the basis of our review, we concluded that the data were sufficiently reliable for purposes of analyzing trends in processing times. To identify factors affecting the processing times for specific authorizations, we selected a nonprobability sample of eight applications that represented a range of processing times. Specifically, for each stage in the Part 810 process as well as for the entire process, we selected (1) one case from among applications with short processing times, defined as processing times in the 25th percentile (that is, processing times shorter than those for 75 percent of all applications), and (2) one case representing long processing times, defined as applications in the 75th percentile (that is, processing times longer than those for 75 percent of all applications). Among applications with long processing times, we considered those with over twice the median processing time. We used the median—rather than the mean—because outliers in the data unduly impact the size of the mean, making it a less valid representation of the typical processing time.
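To make these duration calculations concrete, the following is a minimal sketch, in Python, of how the stage and overall durations could be computed from the four milestone dates and then summarized with the median and the 25th and 75th percentiles used to classify short and long processing times. The records, field names, and dates are hypothetical illustrations; this is not GAO's actual data or analysis code.

```python
# Illustrative sketch: stage durations for Part 810 authorizations,
# assuming each record carries the four milestone dates described above.
from datetime import date
from statistics import quantiles

# Hypothetical milestone dates (application, forwarded to the interagency,
# last interagency concurrence, Secretary's determination).
authorizations = [
    {"applied": date(2009, 1, 5), "to_interagency": date(2009, 3, 20),
     "last_concurrence": date(2009, 6, 1), "determination": date(2009, 9, 15)},
    {"applied": date(2010, 2, 1), "to_interagency": date(2010, 2, 25),
     "last_concurrence": date(2010, 4, 10), "determination": date(2010, 5, 30)},
    {"applied": date(2011, 7, 12), "to_interagency": date(2011, 9, 1),
     "last_concurrence": date(2012, 1, 15), "determination": date(2012, 3, 2)},
    # ... one entry per specific authorization (89 in the review)
]

def stage_durations(rec):
    """Days spent in each stage and in the overall process."""
    return {
        "initial": (rec["to_interagency"] - rec["applied"]).days,
        "interagency": (rec["last_concurrence"] - rec["to_interagency"]).days,
        "final": (rec["determination"] - rec["last_concurrence"]).days,
        "overall": (rec["determination"] - rec["applied"]).days,
    }

durations = [stage_durations(r) for r in authorizations]
for stage in ("initial", "interagency", "final", "overall"):
    values = sorted(d[stage] for d in durations)
    q1, med, q3 = quantiles(values, n=4)  # 25th, 50th, and 75th percentiles
    print(f"{stage}: median {med:.0f} days "
          f"(25th percentile {q1:.0f}, 75th percentile {q3:.0f})")
```

Cases below the 25th-percentile cut point would be candidates for the short group, and cases above the 75th-percentile cut point (or above twice the median) for the long group.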
From applications with long and short processing times, we selected cases that represented a range of countries and types of exports or assistance, such as computer codes, consulting services, and advanced reactor technologies. The small number of cases selected precluded us from generalizing the results, but the case study analysis provided examples of factors that may explain the varying processing times. To identify these factors, we reviewed application packets NNSA provided to us, including technical assessments and intra- and inter-agency correspondence. When reviewing case study documents, we noticed that some of the correspondence dates differed from the dates recorded in the spreadsheet. Because these discrepancies were small, they did not significantly affect the results of our aggregate data analysis, which measured the duration (i.e., the number of days) of each stage of the process and of the overall process. The small discrepancies also did not affect the findings of our case study analysis, which focused on the causes of long and short processing times. We also interviewed agency officials to better understand these factors. For seven of the eight applications in our case study, as well as for the three applications that NNSA provided to us as samples, we used the last internal concurrence among DOE and NNSA staff to determine the earliest date that the recommendation could have been provided to the Secretary after receipt of interagency comments, and whether the recommendation was provided to the Secretary within 30 days, the target time frame. In one case, the correspondence was not dated, and we could not determine the date of the recommendation to the Secretary. In nine other cases, we were able to determine whether the time elapsed between the receipt of interagency comments and the last internal concurrence among DOE and NNSA staff—which must precede the recommendation to the Secretary—exceeded 30 days. To examine the impacts of Part 810 processing times on U.S. nuclear exporters, we interviewed representatives of these exporters and reviewed public comments submitted in response to DOE’s proposed changes to Part 810, as well as DOE’s response to comments on the Notice of Proposed Rulemaking, as articulated in the preamble to the Supplemental Notice of Proposed Rulemaking. The representatives we interviewed included representatives of companies as well as of four associations: the Nuclear Energy Institute (NEI), the American Nuclear Society (ANS), the Nuclear Infrastructure Council (NIC), and the Association of University Export Control Officers (AUECO); we also reviewed the public comments of a fifth association, the Ad-Hoc Utilities Group. The companies were either identified through interviews with association representatives—we requested that they identify nuclear exporters with experience with the Part 810 authorization process for us to interview—or by GAO (for example, at public meetings and other forums on nuclear export issues, or through their public comments). We then interviewed five exporters, including reactor designers and manufacturers, engineering service providers, and fuel companies, and obtained written comments from a nuclear energy technology company.
We also selected for interviews, based on recommendations from industry associations and on our reviews of public comments and letters, representatives from a consulting group that exports nuclear services and from a utility company, which is the largest commercial nuclear generator in the United States, and from two law firms. The law firms were selected because of their expertise and experience in U.S. nuclear export controls. To learn more about the relevance of civilian nuclear technology regulation to nonproliferation more generally, we interviewed five nonproliferation experts. To examine the extent to which the scope of Part 810 is clear, we consulted and analyzed the Atomic Energy Act, as well as executive orders and Office of Management and Budget bulletins related to government regulation. We also reviewed the Part 810 regulation. We interviewed DOE officials and a variety of entities regulated or potentially regulated under Part 810, as well as various groups representing these entities—as described above—for their views on the clarity of the regulation. We also consulted public comments submitted in response to DOE’s proposed changes to Part 810, as well as DOE’s response to comments on the Notice of Proposed Rulemaking as articulated in the preamble to the Supplemental Notice of Proposed Rulemaking. To examine the extent to which DOE can reasonably assure that the regulation is consistently interpreted, we consulted DOE’s 10 C.F.R. Part 810 Assistance to Foreign Atomic Energy Activities Part 810 Program Elements and the federal standards for internal control, and interviewed DOE officials and a variety of entities regulated or potentially regulated under Part 810, as well as various groups representing these entities, as described above. To examine the extent to which DOE enforces its nuclear export controls, we first interviewed DOE and NNSA officials to determine the activities DOE undertakes to monitor conditions imposed through authorizations. We then conducted an analysis of the conditions imposed through the 89 Part 810 authorizations approved from 2008 through 2013. The conditions for each authorization are documented in determination letters signed by the Secretary of Energy, and we conducted a double-blind content analysis of the 89 letters to determine the range and frequency of conditions. Specifically, two analysts independently reviewed the 89 letters and recorded the range and frequency of conditions in separate documents. Then the analysts compared their assessments and resolved any differences through discussion. To describe DOE’s authorities to enforce these conditions, as well as the actions DOE has taken to enforce them, we reviewed the Atomic Energy Act and 10 C.F.R. Part 810. We also interviewed DOE, NNSA, Federal Bureau of Investigation, and Department of Justice officials and obtained information on enforcement actions. To describe the information DOE provides on its enforcement of Part 810, we reviewed DOE’s 10 C.F.R. Part 810 Assistance to Foreign Atomic Energy Activities Part 810 Program Elements and interviewed DOE and NNSA officials. We also interviewed representatives from entities regulated or potentially regulated under Part 810, as described above.
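The double-blind content analysis described above can be illustrated with a short sketch. The letter identifiers and condition labels below are hypothetical; the sketch shows only the mechanics of flagging coding disagreements for discussion and tallying condition frequencies after reconciliation, not GAO's actual instrument.

```python
# Illustrative sketch of a double-blind coding reconciliation: two analysts
# independently record the conditions they find in each determination letter,
# disagreements are flagged for discussion, and reconciled codings are tallied.
from collections import Counter

analyst_a = {
    "letter_01": {"no retransfer", "annual reporting"},
    "letter_02": {"no enrichment technology"},
}
analyst_b = {
    "letter_01": {"no retransfer", "annual reporting"},
    "letter_02": {"no enrichment technology", "annual reporting"},
}

# Letters where the two codings differ go to discussion; the symmetric
# difference (^) lists the specific conditions in dispute. (Assumes both
# analysts coded the same set of letters.)
disagreements = {
    letter: analyst_a[letter] ^ analyst_b[letter]
    for letter in analyst_a
    if analyst_a[letter] != analyst_b[letter]
}
print("To resolve by discussion:", disagreements)

# After reconciliation, tally how often each condition appears overall.
reconciled = {
    "letter_01": {"no retransfer", "annual reporting"},
    "letter_02": {"no enrichment technology"},
}
frequency = Counter(cond for conds in reconciled.values() for cond in conds)
print(frequency.most_common())
```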
To determine the information provided by other agencies that administer related export control regimes, we reviewed relevant regulations and publicly available information on the enforcement policies of NRC and the Departments of State and Commerce, including enforcement manuals and voluntary disclosure guidelines, and we interviewed NRC officials. We also interviewed two export-control compliance experts, recommended to us on the basis of their expertise, and representatives from two law firms with expertise and experience in U.S. nuclear export controls. We conducted this performance audit from August 2013 to October 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 5 and 6 contain information on the common conditions imposed on specific authorizations for exports of nuclear technology under 10 C.F.R. Part 810 granted from 2008 through 2013, as well as examples of less common conditions. In addition to the individual named above, Glen Levis (Assistant Director), Jeff Phillips (Assistant Director), Alisa Beyninson, Antoinette Capaccio, Pamela Davidson, R. Scott Fletcher, Grant Mallie, Cynthia Norris, John Rastler, and Jennifer Young made key contributions to this report.
Encouraging U.S. exports of civilian nuclear products, services, and technology while ensuring they are not used for foreign nuclear weapons programs is a fundamental goal of U.S. policy. Exports of U.S. civilian nuclear technology, assistance, and services are regulated by DOE through 10 C.F.R. Part 810. Depending on the importing country and technology, exports can be generally authorized, with no application required, or specifically authorized, in which case the exporter must submit an application to DOE. The Departments of Commerce, Defense, and State, as well as the Nuclear Regulatory Commission, also review the applications, which must ultimately be approved by the Secretary of Energy. GAO was asked to examine the Part 810 process. This report examines (1) Part 810 processing times over the last 6 years compared with DOE's targets; (2) the extent to which Part 810's scope is clear and DOE can reasonably assure consistent interpretation; and (3) the extent to which DOE enforces Part 810. GAO analyzed all 89 specific authorizations granted from 2008 through 2013 and interviewed key agency officials and U.S. nuclear industry representatives. The Department of Energy (DOE) has consistently missed its 30-day targets for the initial and interagency stages of the Part 810 review process (see table). From 2008 through 2013, DOE missed the target for the initial review stage for 80 of 89 applications processed, and interagency review times exceeded DOE's 30-day target for 85 applications. DOE has not established a target for the entire final review stage, which had the longest median review times, or for the overall process. DOE has acknowledged exporter concerns that processing times for specific authorizations can impose business risks, and DOE officials have proposed initiatives to reduce processing times. The scope of Part 810 is unclear, and DOE's inquiry process does not reasonably assure that the regulation is consistently interpreted. For example, it is unclear what marketing activities are covered by Part 810. DOE has not provided written guidance to clarify the regulation's scope, instead directing exporters to inquire with DOE officials. DOE officials said that they do not document all such inquiries or their responses. Without such documentation, DOE can neither reasonably assure that its responses are consistent nor analyze the inquiries to identify parts of the regulation that may need clarification. DOE is taking some steps to clarify Part 810 by defining or refining some key terms. However, DOE's revisions do not address all terms that exporters have identified as unclear, and the time frame of DOE's revisions is unknown. DOE has taken limited actions to enforce Part 810. DOE's primary method for monitoring compliance with Part 810 is reading reports from exporters, but according to DOE officials, they conduct in-depth analysis on less than 10 percent of reports and do not have a risk-based procedure for selecting reports to analyze. Also, because DOE does not provide guidance for companies to self-identify and self-report possible violations, DOE is missing an opportunity to leverage exporters' role in monitoring their own compliance. DOE has not yet determined whether it has the legal authority to impose civil penalties for violations of Part 810. According to DOE officials, DOE has never taken a formal action for a violation of Part 810, such as revoking an authorization or referring a potential violation to the Department of Justice (DOJ).
Furthermore, DOJ officials reported that they are not aware of any prosecutions related to Part 810 violations from 2008 through 2013, the time frame GAO reviewed. GAO recommends that the Secretary of Energy take several actions to improve the Part 810 process, such as determining whether DOE has legal authority to impose civil penalties and establishing realistic and achievable targets for each stage of the Part 810 process, as well as for the overall process. DOE agreed with the recommendations.
Hurricanes Katrina and Rita caused catastrophic destruction to the Gulf Coast region, with an estimated combined total of $160 billion in damage. Estimates indicate that Hurricanes Gustav and Ike also caused billions of dollars in damage along the Gulf Coast region. FEMA assists disaster victims in part through its Individuals and Households Program (IHP), a component of the federal disaster-response efforts established under the Robert T. Stafford Disaster Relief and Emergency Assistance Act. FEMA determines whether individuals or households meet eligibility requirements for IHP assistance after they apply for registration either online or over the telephone. Applicants must submit identification information, including name, Social Security Number (SSN), and date of birth. Applicants must also provide a legitimate address affected by the hurricane; FEMA guidelines specify that eligibility for housing assistance is predicated on the registrant being displaced from his or her primary residence. IHP assistance can include temporary housing, home repair and personal property replacement, and other necessary expenses related to a disaster. For Hurricanes Katrina and Rita, FEMA also activated expedited assistance to provide immediate cash—in the form of $2,000 payments—to eligible disaster victims to help with emergency needs for food, shelter, clothing, and personal necessities. Activating expedited assistance allowed FEMA to provide aid to disaster victims without requiring proof of property damage or other losses. FEMA did not activate expedited assistance for Hurricanes Gustav and Ike, although it did offer limited fast-track payments for individuals with critical needs as a result of Hurricane Gustav. As of March 2009, FEMA reported that it had distributed approximately $665 million in IHP assistance to victims of Hurricanes Gustav and Ike, as compared to almost $8 billion for Hurricanes Katrina and Rita. This amount includes rental assistance, lodging, repairs, replacement, and other needs assistance. Since Hurricanes Katrina and Rita, FEMA has improved its controls over identity and address verification and inspections, housing assistance in FEMA-paid-for hotels, and duplicate registrations. Improvements in these three key areas have reduced FEMA's risk of making payments based on fraudulent disaster assistance registrations. For example, for Hurricanes Ike and Gustav, FEMA conducted identity and address verification on all applications and required inspections prior to approving rental assistance. In addition, FEMA required individuals in need of housing assistance to provide valid registration numbers before checking into FEMA-paid-for hotels. FEMA has also taken steps to flag duplicate registrations submitted for the same disaster. Although these improvements are significant, our work shows that an identity thief or a persistent fraudster with basic counterfeiting skills could still obtain rental or hotel assistance by exploiting existing weaknesses in the registration and approval processes. In particular, we were able to bypass verification controls by submitting more sophisticated bogus identities and by providing FEMA with fictitious documentation to validate our registration information. For one of our registrations, these weaknesses allowed us to obtain thousands of dollars in rental assistance, approval for transitional housing, and duplicate reimbursements for fictitious hotel expenses.
We were successful on this application not only because we submitted fictitious documentation, but also because FEMA's inspector failed to properly inspect our bogus damaged address. For other applications, falsified supporting documentation allowed us to obtain approval for transitional housing, and in one case we subsequently checked into two different hotels. Finally, we found that FEMA was unable to prevent duplicate registrations submitted for more than one disaster. The following information describes (1) the control weaknesses related to identity and address verification and inspections that we identified during our work on Hurricanes Katrina and Rita, (2) the improvements we found as a result of our undercover tests during Hurricanes Gustav and Ike, and (3) flaws that still exist in the identity and address verification and inspection processes. Weaknesses in Address and Identity Verification and Inspections Identified after Hurricanes Katrina and Rita: As we reported previously, we found significant flaws in the process that FEMA used to approve individuals for disaster assistance payments after Hurricanes Katrina and Rita. For example, although FEMA subjected Internet applications to an identity-verification process, it did not use this verification process for phone applications. Specifically, for Internet applications, a FEMA contractor used credit and other information to confirm that (1) the applicant's SSN matched an SSN in public records and (2) the SSN did not belong to a deceased individual. Applicants who were rejected through the Internet were advised to apply over the phone. However, phone applications were exempt from any identity verification. In addition, prior to providing assistance payments, FEMA did not use public records or inspections to verify the physical location of damaged addresses, nor did it confirm that applicants actually occupied a damaged address at the time of the disasters. As a result of these weaknesses, we were able to receive disaster assistance by using fictitious names and nonexistent addresses. For example, for one of our Hurricane Katrina applications, we used an empty lot in Louisiana as our damaged address. Although this damaged property address was clearly bogus, FEMA notified us that an inspector had confirmed that the property was damaged and subsequently sent us thousands of dollars in rental assistance. Through data mining, we identified cases where other applicants received assistance by using SSNs belonging to deceased individuals and by using storefronts, post office boxes, cemeteries, and nonexistent apartments as damaged addresses. Other cases we identified involved applicants who claimed to live at valid damaged addresses, even though they were actually incarcerated or living in states not affected by the hurricanes. Improvements Identified during the Response to Hurricanes Gustav and Ike: FEMA made several improvements to the verification and inspection processes. For example, FEMA told us that the same identity-verification process is now automatically performed when an applicant applies through the Internet and over the phone. In addition, both Internet and phone applications are now subject to automatic address and occupancy verification. Address verification includes checks to confirm that an address is deliverable; is not a post office box or a business address; and is not a "high-risk" address, such as a tattoo parlor or a pawn shop.
Occupancy/ownership verification confirms that an applicant occupies or owns the property through a check of property records. Applicants who register over the telephone and fail any of these verification tests still receive registration numbers, but FEMA requests additional documentation prior to any payments being made. According to FEMA, applicants can verify their identities by submitting tax forms, marriage licenses, or government-issued identification. Address and occupancy can be verified by submitting documents such as drivers' licenses, utility bills, and property-tax records. An applicant can fax the supporting documentation to FEMA or wait and provide it to an inspector. FEMA also told us that even if an applicant passed both identity and address verification, an inspector must meet with the applicant to further verify occupancy and confirm that the property was damaged before the applicant can receive rental assistance. Our undercover applications for Hurricanes Gustav and Ike confirm these improvements, as described in the following examples: Five of our 10 applications initially failed identity verification. For these 5 applications, we used falsified identification information similar to what we used for Hurricanes Katrina and Rita. Specifically, for these applications, we used either completely fabricated names and SSNs, or names with valid dates of birth and SSNs but without any credit history, such as credit card or bank activity. We could not successfully register some identities by using the Internet and were instructed to apply by phone. At the end of the phone application process, FEMA call center operators provided us with registration numbers but also told us that there were "verification errors" associated with our registrations. Although the operators told us that inspectors would be contacting us to schedule an inspection of our property, we were instructed to provide additional documentation to validate our identities. All 10 of our applications initially failed address and occupancy verification. For all 10, we used fabricated address information, including street addresses that did not exist and the addresses of local municipal buildings. When we later reviewed our applications with FEMA, we found that all 10 were flagged as having errors, in part because the addresses we used were not deliverable or because the names we used did not match property records associated with the addresses. The inspection process prevented us from receiving rental assistance for 9 of our 10 applications. Specifically, the 9 addresses we selected for these applications were either not private residences or they were not actually damaged by the hurricanes. Therefore, although FEMA inspectors left messages requesting that we schedule inspections, we did not meet with them. For example, for 1 of our applications we used the address of a Texas elementary school in an area affected by Hurricane Ike. Prior to scheduling an inspection, the inspector called us from the school requesting clarification as to where we resided. We discontinued the application as a result of this call. Continued Weaknesses in Address and Identity Verification and Inspections: We were able to circumvent FEMA's initial controls by using valid identities with credit histories and by submitting fabricated identification and address information.
For one of our registrations, these weaknesses, coupled with FEMA’s failure to correctly inspect our fictitious address, allowed us to obtain rental assistance and duplicate reimbursements for fictitious hotel expenses. Six of our 10 applications passed identity-verification controls on the first try through the Internet and over the phone, in part because we simulated the actions of an identity thief by using identities with legitimate dates of birth, SSNs, and credit histories. Because some of these identities were valid, FEMA appropriately did not find any verification errors. However, FEMA also did not identify the fact that one of the identities with a credit history showed that we lived outside the areas affected by the hurricanes. For 1 of our applications, we used a name and SSN that were linked to credit records in Virginia, with no record of activity in Texas or the surrounding area. In this way, a fraudster could steal an identity from anyone in the country and use it to pass FEMA’s identity tests. Five of our 10 applications eventually passed either identity or address verification or both because FEMA accepted fabricated supporting documents we submitted as legitimate. For example, for 1 of the applications, we registered by phone using a completely fake name, date of birth, and “999-XX-XXXX” as our SSN. FEMA requested that we provide additional documentation to prove our identity, so we faxed in a bogus college transcript. When we subsequently reviewed our applications with FEMA, we found that this bogus transcript was deemed sufficient proof of identification. Similarly, we were able to submit fabricated tax forms and utility bills to prove address and occupancy. When we asked FEMA officials about the process for handling supporting documentation, they told us they do not take any steps to verify the documents. The officials said that they only check to see whether the document appears to be tampered with. If it does, FEMA case workers or contractors will verify the document by calling any phone numbers listed on the document or performing Internet research. If the document appears to be valid, then no additional checks are performed. According to FEMA, our fabricated documents did not appear to be tampered with and therefore were immediately accepted as legitimate. One of our applications received thousands of dollars in rental assistance because FEMA accepted our fabricated supporting documents and because FEMA approved the application without the inspector correctly inspecting the property or meeting with us in person. This application was also approved for a free hotel room and received duplicate payments for previously incurred hotel expenses. For this application, we used a name with a valid date of birth and SSN, but without any credit history. For our damaged address, we used a nonexistent street number on a real street in an area of Texas affected by Hurricane Ike. In response to FEMA’s request for identity verification, we submitted an IRS form 1099, which can easily be found on the Internet, claiming that we worked for a bogus landscaping company on a nonexistent street. We also submitted a fabricated utility bill to verify our occupancy. A FEMA inspector attempted to contact us to schedule a date for an inspection, but we never set up a meeting. Ultimately, we were notified that we were eligible for rental assistance and housing assistance in a FEMA-paid-for hotel. 
However, because the approved dates for obtaining a hotel room were about to expire, we subsequently asked FEMA to reimburse us for previously incurred hotel expenses. As proof of our stay in the hotel, we submitted a bogus bill we created by changing the name and address on a letterhead from a hotel in the Washington, D.C., area. In total, we received just over $6,600 in assistance from FEMA for this application, including $4,465 for rental assistance and $2,197 for hotel-expense reimbursements. The $2,197 in hotel-expense reimbursements we received included duplicate reimbursements for our hotel expenses: one check for $1,098.50 from FEMA and another check in the same amount from FEMA's hotel contractor. Figure 1 depicts one of the rental assistance checks. In reviewing this application with FEMA officials, we asked why we received rental assistance without an inspection. FEMA told us that the inspector had performed an inspection and noted that the entire street where our fictitious address was supposed to be was destroyed. Although FEMA initially blocked us from receiving assistance because we were not present during the inspection, the case worker chose to override this decision because the case worker believed that the destruction of the entire street indicated that we had an immediate need for assistance. FEMA officials emphasized that the case worker should not have taken this action and that we should not have received rental assistance. Finally, with regard to the duplicate payments we received for hotel expenses, FEMA told us that we may have received these payments because of a breakdown in the reimbursement process. Specifically, both FEMA and its lodging contractor made payments for expenses incurred at hotels by approved disaster applicants. FEMA typically sends a list of payments it has already made to the contractor. Using a manual process, the contractor reviews this list to determine what payments need to be made (an automated version of this cross-check is sketched below). With regard to the duplicate payment we received, the FEMA officials we spoke with speculated that the contractor simply missed the payment by FEMA during its review. After we brought this issue to their attention, FEMA officials told us that they were already conducting a review of the process to determine whether the duplicate payment problem was widespread. As a result of this review, FEMA found that the lodging contractor made four additional duplicate payments. FEMA has flagged these payments for recoupment. The following information describes (1) the control weaknesses related to FEMA's hotel housing program that we identified during Hurricanes Katrina and Rita, (2) the improvements we found as a result of our undercover tests during Hurricanes Gustav and Ike, and (3) flaws that still exist in the hotel-assistance approval process. Weaknesses in the Hotel-Assistance Approval Process Identified after Hurricanes Katrina and Rita: Following Hurricane Katrina, FEMA provided displaced individuals with free hotel accommodations. However, FEMA did not require the hotels to collect registration information (such as FEMA registration numbers or SSNs) on individuals staying in the free rooms. Without this information, FEMA was not able to ensure that only valid disaster victims were receiving free hotel accommodations. As a result, we found that individuals stayed in free hotel rooms even though they were not eligible to receive any type of disaster assistance because they had never lived in residences damaged by the hurricanes.
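The duplicate hotel reimbursements described above stemmed from a manual reconciliation between FEMA's list of payments and the lodging contractor's queue. As a minimal sketch, and only under an assumed record layout that does not represent FEMA's or its contractor's actual systems, the cross-check could be automated by keying each payment on the registration number and the claim it reimburses:

```python
# Illustrative sketch (hypothetical keys and records): skip any contractor
# payment whose (registration number, claim) key FEMA has already paid.
fema_paid = {
    ("REG-1001", "hotel-stay-sept"),
    ("REG-1002", "hotel-stay-sept"),
}

contractor_queue = [
    {"registration": "REG-1001", "claim": "hotel-stay-sept", "amount": 1098.50},
    {"registration": "REG-1003", "claim": "hotel-stay-sept", "amount": 850.00},
]

to_pay, duplicates = [], []
for payment in contractor_queue:
    key = (payment["registration"], payment["claim"])
    (duplicates if key in fema_paid else to_pay).append(payment)

print("Safe to pay:", to_pay)          # only REG-1003 remains
print("Would duplicate:", duplicates)  # REG-1001 was already paid by FEMA
```

A check of this kind would have flagged the second $1,098.50 reimbursement before it was issued.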
Improvements Identified during the Response to Hurricanes Gustav and Ike: According to FEMA, it strengthened controls over hotel assistance by requiring applicants seeking free lodging to (1) obtain a registration number from FEMA and (2) pass both identity and address verification. Once registrants received approval to check in to a hotel, they had to provide the hotel with a valid registration number, a picture ID, and the last four digits of an SSN so that the hotel could check this information against a database maintained by FEMA's hotel contractor. Our undercover work confirmed that these controls were effective. For example, without applying for assistance and obtaining registration numbers, our investigators tried seven times to obtain hotel rooms just by claiming that they were victims of Hurricane Ike and showing bogus Texas drivers' licenses. They were denied rooms every time. In addition, when we tried to obtain hotel rooms with FEMA registration numbers that had not passed the identity and address-verification process, we were again denied rooms. Continued Weaknesses in the Hotel-Assistance Approval Process: Despite the improvements we identified, we were still approved for hotel assistance on 4 of our 10 applications after we obtained registration numbers and passed identity and address verification using bogus supporting documentation. For one of these applications, we still received approval for transitional housing even though FEMA noted that the utility bill we submitted to prove our address was illegible. Ultimately, we checked into two different hotels using one of our bogus identities. The following information describes (1) the control weaknesses related to duplicate payments and registrations we identified during our work on Hurricanes Katrina and Rita, (2) the improvements we found as a result of our undercover tests during Hurricanes Gustav and Ike, and (3) flaws that still exist in the process FEMA uses to detect duplicate registrations. Weaknesses in Detecting Duplicate Registrations Identified after Hurricanes Katrina and Rita: FEMA did not detect duplicate registrations or prevent duplicate payments after Hurricanes Katrina and Rita. We identified instances where FEMA made more than one payment to the same household that shared the same last name and damaged and current addresses. FEMA also made millions of dollars in duplicate payments to thousands of individuals who submitted claims for damages to the same primary residences for both Hurricanes Katrina and Rita. FEMA officials explained that victims of both disasters are allowed only one set of IHP payments for the same damaged address and are therefore only entitled to payments based on a single registration. Improvements Identified during the Response to Hurricanes Gustav and Ike: Improved data checks enabled FEMA to successfully prevent us from applying twice for Hurricane Gustav using the same identity. For example, we used the same damaged and current address information for two of our applications. When we subsequently reviewed our applications with FEMA officials, we saw that one of the applications had been flagged as a duplicate and was about to be cancelled. We observed several deficiencies in the customer service FEMA provided to disaster victims. Specifically, had we been real disaster victims without Internet access, we would probably have been unable to obtain assistance in the immediate aftermath of the hurricanes. We also called actual disaster victims, many of whom told us that they experienced similar problems.
According to FEMA, these problems occurred in part because the initial call center staffing model it developed for the 2008 hurricane season was overwhelmed when members of the media and high-level government officials encouraged the public to contact FEMA. However, data we received from FEMA show that these call centers were actually staffed well below FEMA's own estimates of peak staffing needs for the hurricanes. FEMA told us that this staffing deficiency was caused, in part, by difficulties associated with one of its contractors, but FEMA also stated that it had not planned to staff call centers up to the levels necessary to handle peak call-volume needs. Despite the problems we noted with FEMA's customer service following the hurricanes, it intends to rely on the same operational plan for the 2009 hurricane season. Although we encountered little or no difficulty when applying for assistance over the Internet, we observed several problems with FEMA's customer service when we made applications by phone. The following examples describe some of the problems we encountered: Busy phone lines and long wait times. We could not immediately get through to the call centers when applying by phone. For one of our Hurricane Ike applications, an investigator had to call nine times over the course of 3 days before being able to speak to a call center staff member. During these calls, the investigator either got a recording saying "all agents are busy; try later" or was put on hold for 15 to 20 minutes before hanging up. On another Hurricane Ike application, the investigator called five times over the course of 3 days before getting through to a call center, experiencing similar busy messages and wait times. On a Hurricane Gustav application, the investigator had to call after 1:00 a.m. in order to speak with an operator. We identified similar problems when calling FEMA's help line to check on the status of our applications. For example, one investigator called the help line 13 times over the course of 8 days but never got through to an operator. Incorrect information. Call center staff did not always give us accurate information. For example, although some of our fictitious applicants were told that inspectors would call to schedule inspections even though the applicants did not know the extent of damage to their properties, one of our investigators was told he would not be scheduled for an inspection unless he provided a more precise account of his property damages. For another application, we had to fax supporting documentation in multiple times because we were initially given an incorrect fax number. Delayed notification for hotel assistance. For two of our registrations that were approved for temporary housing, FEMA did not notify us in a timely manner, which prevented us from obtaining a hotel room. In an effort to understand the experiences of actual disaster victims, we contacted registrants chosen from a database provided by FEMA. About half of the individuals we spoke with told us that they did not experience any problems with FEMA's application process; the other half confirmed that they encountered delays in getting through to FEMA operators, problems scheduling inspections, and difficulties obtaining hotel rooms once they had been approved. FEMA permits registration for assistance over the Internet, but power outages may have forced many victims to seek assistance over the telephone. Table 1 highlights 10 of our conversations with disaster victims.
FEMA cited several factors that contributed to poor customer service in the aftermath of Hurricanes Ike and Gustav: a higher-than-expected call volume, unmet staffing needs, contractor failure, and problems with its automatic call system. FEMA told us that although it intends to use a different contractor for the 2009 hurricane season, the agency will make no other changes to its call center operational plan. Higher-than-Expected Call Volume: FEMA told us that it received what officials described as an overwhelming number of calls, especially from individuals who may not have otherwise asked for assistance, because the media and high-level government officials strongly encouraged the public to contact FEMA. For example, FEMA estimated that it would receive approximately 530,291 calls requesting assistance for Hurricanes Gustav and Ike, but it actually received a total of 1,195,213 calls—125 percent more than expected. FEMA officials also stated that many individuals who called FEMA had unrealistic expectations as a result of the widespread coverage of Hurricane Katrina. In particular, many applicants called because they expected to receive an immediate $2,000 expedited assistance payment. Projected Call Center Needs Unmet: Data provided by FEMA show that FEMA fell short of its anticipated peak staffing needs. According to FEMA, call centers are typically staffed with a baseline number of personnel before a disaster takes place. To determine staffing, FEMA relies primarily on historical models and on the type and size of a disaster. If FEMA determines that additional staff are needed after a disaster occurs, it relies on an interagency agreement with the Internal Revenue Service (IRS) and on contractors. According to FEMA, its four call centers were staffed with a baseline of 684 staff before Hurricanes Gustav and Ike hit. In preparation for Hurricane Gustav, FEMA determined that peak staffing levels at the call centers could be as high as 6,300 staff by September 4, 2008, 3 days after the hurricane would make landfall. However, FEMA data show that actual staffing levels were just below 1,100. In addition, FEMA determined that peak staffing levels at the call centers could be nearly 11,000 staff by September 15 in order to handle calls for both Hurricanes Ike and Gustav. However, once Hurricane Ike made landfall on September 13, FEMA data show there were only 1,378 personnel staffed at the call centers—75 percent below staffing estimates for that day. When asked about the significant difference between staff on hand and anticipated staffing requirements, FEMA officials stated that staffing to meet short-term peaks is inefficient because it would require substantial resources to hire and train staff to peak levels, only to release them shortly thereafter due to decreased call volume. Contractor Failures: FEMA said that one contractor was not able to supply a sufficient number of staff in a short period of time, resulting in a lack of staff available at call centers. Specifically, FEMA told us that it entered into a temporary service contract awarded through the General Services Administration (GSA) to augment its call center staff. This contract limited the proposals to only those companies on the GSA schedule that were small businesses—businesses that FEMA believes were not equipped to handle its staffing issues. FEMA said that by the time it learned that only small businesses were under consideration, it could not afford to consider alternative routes.
In addition, FEMA said that one of the small businesses it chose to work with indicated that it intended to team up with a large national staffing services company with greater resources, which initially gave FEMA confidence that the contractor could meet its staffing needs. However, FEMA said that it took over 2 weeks for the contractor to supply the number of temporary workers required to address the large call volume. In addition, as a change after Hurricane Katrina, call center operators had to undergo security screening prior to being able to work at the call centers. Before Katrina, operators could start work while the security check was in progress. FEMA said that this heightened security check prevented the contractor from providing additional staff in a timely fashion. FEMA officials told us they will not be using the same contractor for the upcoming hurricane season. Automatic Call System Issues: With regard to the issues we identified related to obtaining timely hotel approval, FEMA officials said that they received a large number of requests for free lodging. As a result, they established (1) a separate fax line to accept verification documentation and (2) an auto-dial system to inform people they were approved to check into a hotel. However, according to FEMA, there were problems with the auto-dial system, and therefore some individuals were not promptly informed that they were eligible for housing assistance. This investigation shows that FEMA has made significant progress in addressing the challenge of providing urgent disaster relief to individuals and communities in need of assistance, while simultaneously safeguarding its programs from fraud and abuse. By improving controls over IHP, FEMA has taken steps to provide reasonable assurance that fraud and abuse in this program are minimized. Given that the current hurricane season has begun, FEMA should incorporate lessons learned from our investigation to continue to improve its fraud-prevention program and address all of the customer-service issues we identified. We recommend that the Secretary of Homeland Security direct the Administrator of FEMA to take the following two actions: Establish random checks to assess the validity of supporting documentation submitted by applicants to verify identity and address. Assess the customer-service findings from this investigation and make improvements for future hurricane seasons in areas such as contractor readiness. In written comments on a draft of this report, the Department of Homeland Security concurred with and agreed to implement both of our recommendations. We are sending copies of this report to the Secretary of Homeland Security, the FEMA Administrator, and interested committees. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO's previous work on Hurricanes Katrina and Rita identified fraud, waste, and abuse resulting from a lack of fraud-prevention controls within the Federal Emergency Management Agency's (FEMA) assistance programs. For example, FEMA did not verify the identities or addresses of individuals applying for aid under its Individuals and Households Program (IHP). FEMA also did not verify the eligibility of individuals seeking shelter in FEMA-paid-for hotels and made duplicate payments to individuals who applied multiple times. GAO made numerous recommendations designed to improve these controls. To follow up on this work, GAO conducted undercover tests of the IHP process during the response to Hurricanes Gustav and Ike. This report discusses (1) whether FEMA's controls have improved since Katrina and Rita and (2) issues GAO identified related to the customer service that FEMA provided. GAO submitted bogus applications for disaster assistance, met with FEMA officials, and contacted actual disaster victims to determine their experiences applying for aid. FEMA has significantly improved its fraud prevention controls over disaster assistance. For example, FEMA now conducts identity and address verification on all applications and requires inspections prior to approving rental assistance. In addition, FEMA requires individuals in need of housing assistance to provide valid registration numbers before checking into FEMA-paid-for hotels. FEMA has also taken steps to flag and cancel duplicate registrations for the same disaster. These improvements made it more difficult for GAO to penetrate IHP controls for Hurricanes Gustav and Ike--only 1 of 10 fraudulent applications submitted by GAO received cash payments. However, GAO found flaws in FEMA's controls that still leave the government vulnerable to fraud, waste, and abuse. GAO's undercover tests show that a persistent fraudster can bypass many of these controls by submitting fabricated documents to prove identity or address and, as a result, obtain housing assistance. GAO also received duplicate payments for bogus hotel expenses. In addition, FEMA failed to properly inspect a bogus address GAO used to apply for assistance, ultimately sending GAO multiple checks for thousands of dollars in rental assistance. GAO observed several problems with FEMA's customer service, which made it difficult for many real victims to apply for assistance or obtain shelter in a timely fashion. For example, one of GAO's investigators called nine times over the course of 3 days--several times being put on hold for 20 minutes--before being connected to an operator. Other investigators received incorrect information about the application process. Actual disaster victims confirmed these problems. One applicant reported having to call FEMA at 4 a.m. in order to reach an operator. FEMA cited several factors that contributed to this poor service, including a higher-than-expected call volume and an inability to meet projected call center staffing needs because a contractor failed to provide adequate staffing. Despite these issues, FEMA told GAO that it has made few changes in preparation for the 2009 hurricane season.
The United States has a long history of military research and development. To help conduct and manage this research, DOD has a diverse network of 80 in-house laboratories and 26 test centers. Their missions range from basic scientific research to direct technical support to operational commands. The management, operations, and funding for these disparate laboratories and test centers also vary among the services. Over the past decade, several organizations, panels, and commissions have identified significant personnel and resource problems facing the laboratories and test centers. For example, several studies found that the laboratories needed more flexibility in personnel rules governing the scientific workforce in order to attract and retain staff. Similarly, several recent studies identified problems with declines in investment and infrastructure, resulting in outdated facilities and technical equipment. To help the laboratories and test centers with these problems, the Congress enacted legislation in fiscal years 1999 and 2000 establishing pilot programs for laboratories and test centers to propose innovative partnerships, business-like practices, and human capital initiatives. The 1999 pilot program focused on partnerships and business-like practices, while the 2000 program focused more on human capital initiatives. Together, the two pilot programs authorized the Secretary of Defense to provide one laboratory and one test center in each service the authority to explore innovative methods for partnering with universities and private sector entities to conduct defense research and development; attract a workforce with an appropriate balance of permanent and temporary personnel and appropriate skill and experience levels; develop or expand innovative methods of operation that provide more defense research for the dollar; and waive any restrictions on these methods that are not required by law. A total of 10 laboratories and test centers from all 3 services participated in the pilot programs. They are listed in appendix I. Both programs were authorized for 3 years. The 1999 pilot expired in March 2002; the 2000 pilot, in March 2003. For both programs, DOD was required to submit preliminary and final reports to the Congress on program activities. The preliminary report for the 1999 program was submitted in July 1999. However, as of the date of this report, the three other reports have not been submitted. In fiscal year 2003, the Congress authorized another 3-year pilot program and extended the 1999 and 2000 pilot programs until 2005. Under the new 2003 pilot program, the Secretary of Defense is to provide one laboratory and one test center in each service the authority to use innovative personnel management methods to ensure that the participants can employ and retain an appropriately balanced workforce and effectively shape the workforce to fulfill the organization's mission; develop or expand innovative methods of using cooperative agreements with private sector and educational organizations to promote the technological industrial base for critical defense technologies and facilitate the training of a future scientific and technical workforce; and waive any restrictions not required by law. As of May 2003, DOD had not identified any participants for the 2003 pilot program.
The 2003 legislation also requires DOD to issue three reports, including a January 2003 report on its experience with the 1999 and 2000 pilot programs, barriers to implementation of these programs, and proposed solutions to overcome these barriers. According to DOD officials, this report has been drafted, but as of May 2003, it had not been submitted to the Congress. Since the inception of the pilot programs in 1999, 178 initiatives have been proposed, but only 4—or 2 percent—have been implemented under the pilot programs. Participating laboratories and test centers proposed initiatives covering a variety of areas, including business-like practices, partnerships with industry and academia, and human capital innovations. We found that laboratories focused many of their proposals on human capital innovations, while test centers tended to concentrate on business-like practices and partnerships. Over the course of the 1999 and 2000 pilot programs, the laboratories and test centers proposed 178 human capital, business, and partnership initiatives. As shown in table 1, slightly over half of the initiatives dealt with human capital, and the remainder dealt with business-like practices and partnerships. Overall, the laboratories proposed substantially more initiatives than did the test centers. Furthermore, the laboratories and test centers focused on different types of initiatives. The laboratories more often proposed human capital initiatives, while the test centers overwhelmingly focused on business and partnership initiatives. Laboratory officials told us that they are especially concerned about attracting top-quality scientists to replace a retiring workforce. Test center officials told us that they are focused on modernizing their infrastructure and developing new methods of sharing the cost of operations. Proposals for business-like practices included many initiatives to streamline or improve local operations. Some initiatives focused on expanding the use of innovative techniques such as other transactions or cooperative agreements. Several other proposals sought the authority to reinvest fees or revenues into facilities revitalization. For example, one Navy laboratory proposed imposing a surcharge for its services and using that revenue to fund capital investments, and an Air Force laboratory proposed using facility construction as a valid in-kind contribution under cooperative agreements. Partnership proposals included initiatives such as a collaborative research agreement between Arnold Engineering Development Center and the University of Tennessee Space Institute to create a formal business bond to pursue research in laser-induced surface improvement technology and university flight research. The Army's Aberdeen Test Center proposed a limited liability company. Under this concept, industry, academia, and government would form a profit-making company to conduct research and testing at the installation. The test center proposed using its share of the profits to reinvest in the infrastructure at Aberdeen. Several human capital initiatives focused on recruiting and retention flexibilities as well as additional voluntary separation incentives. These proposals included initiatives to streamline hiring of experts and consultants; accelerate promotions for scientists and engineers; provide retention bonuses for key scientists; and hire students directly after graduation.
Several participants submitted proposals for direct hire authority to allow faster hiring of scientists, and several submitted proposals for voluntary retirement incentives as a mechanism for reshaping the workforce. Almost none of the 178 proposed initiatives were approved and implemented using the pilot programs' authorities. As figure 1 shows, only 2 percent—or 4 proposals—were implemented under the pilot programs. In contrast, 74 percent were blocked or dropped during the review process or remain on hold awaiting resolution. The four implemented initiatives were donating laboratory equipment directly to local schools; waiving top-level certification of certain service agreements; streamlining cooperative agreements to facilitate collaborative work with outside activities; and granting temporary relief from some mandatory personnel placement reviews. Officials at the laboratories that proposed these initiatives told us that they were considered minor changes with little impact on the larger problems facing the laboratories. Twelve times as many initiatives—24 percent—were implemented using authorities other than those of the pilot programs. For example, several laboratories requested the authority to appoint retired military members to civilian positions without having to wait the required 180 days. This requirement was waived using a different authority than the pilot programs. Another human capital initiative—to appoint senior scientists from private industry—was authorized by subsequent legislation. In the business/partnership category, the 46th Test Group at Holloman Air Force Base used other authorities to negotiate a complex leasing arrangement with industry to install a radar test facility at White Sands Missile Range. This effort took several years and overcame many contractual and regulatory barriers. In addition, a Navy laboratory streamlined foreign license applications using another authority. The low level of implementation of the proposed initiatives occurred for two primary reasons. First, DOD did not develop an effective process for implementing the pilot programs. Second, DOD determined that proposed human capital initiatives—for example, requests for the authority to hire directly or offer voluntary retirement incentives—were in conflict with statutory provisions. DOD did not provide standardized guidance on proposal requirements or feedback for improving proposals; coordinate or prioritize proposals; or clarify decision-making authority for proposal review and approval. DOD also did not designate a strong focal point to coordinate the pilot programs, advocate process improvements, and provide assistance and advice to participants. The lack of a strong focal point exacerbated other process gaps. According to officials at DOD laboratories, test centers, and headquarters, DOD did not provide standardized guidance on proposal requirements or feedback for improving proposals (or, in many cases, information on the status of proposals submitted for approval). Proposals often lacked specificity and detail. Many were broadly conceptual or generic in nature and lacked a detailed business case that linked their contribution to overall objectives for the pilot programs. For example, a proposal to permit scientists to serve in a leadership role in professional societies failed to include details of the problems encountered or of the potential to improve operations.
Similarly, several proposals for direct hire authority failed to include a business case to explain what specific needs this authority would address or how it would address them. Lack of specificity and business case detail led to the failure of many initiatives to win approval. DOD attorneys told us that many proposals were so vague that it was impossible to determine whether the proposed initiatives could meet legal requirements. At the department level, DOD also did not coordinate or prioritize proposals, thereby precluding decisions on how best to pursue common interests and issues such as direct hiring authority or forming partnerships with universities. Instead, each participant submitted proposals individually, and thus multiple independent proposals were often submitted for the same or similar issues. DOD attorneys pointed out that it would have been more effective to group proposals by common theme and prioritize them. They believed a unified approach and prioritized proposals with clearly written, specific plans for solving well-defined problems would have enabled them to more effectively assist participants with resolving legal issues. DOD did not clarify decision-making authority for proposal review and approval. Many organizations and individuals were stakeholders in proposal review and approval, and they often had differing management structures, concerns, and interests. Stakeholders included military and civilian leaders, attorneys, and human capital and personnel staff at several levels: the local installation where participating laboratories and test centers were housed; the individual service; and the Office of the Secretary of Defense (OSD). The roles and decision-making authority of the various stakeholders were never negotiated and clarified. As a result, many players at multiple organizational levels had—and took—an opportunity to say “no” to a particular proposal, but it remained unclear who had the authority to say “yes.” For example, some participants believed that the pilot program legislation gave the director of a participating laboratory or test center the authority to approve a proposed initiative. OSD officials, however, believed that the proposed initiatives had to be approved at higher levels. The role of the services was also unclear. Some laboratory and test center directors initially sent proposals directly to OSD’s Directorate of Defense Research and Engineering (DDR&E), bypassing their service headquarters. Others sent proposals to their service headquarters for approval before submitting the proposals to DDR&E. Eventually, however, each of the service headquarters decided to become more heavily involved in the approval process and provide service-level responses to proposals. These service-level responses often came into play after proposals had been sent directly to DDR&E for approval, further complicating the approval process. Within OSD, both DDR&E and Personnel and Readiness (P&R) had substantial stakes in the human capital proposals—DDR&E because it is charged with oversight and management of defense laboratories and P&R because it has the authority within DOD for human capital issues. However, DDR&E and P&R never agreed on a process for approving proposals. In addition, for the past year P&R’s attention has been focused primarily on developing DOD’s proposed new civilian human capital management system, the National Security Personnel System (NSPS), which the Secretary of Defense recently submitted to the Congress.
DOD officials believe that, if enacted, NSPS will provide flexibility to make necessary human capital changes. The Undersecretary of Defense for Personnel and Readiness directed that implementation of new personnel initiatives be placed on hold during the development of NSPS so that the existing system could be studied to identify needs and best practices. Consequently, P&R officials believed it would be premature for DOD to implement new personnel initiatives during this time. DOD did not designate a strong focal point to coordinate the pilot programs, advocate process improvements, and provide assistance and advice to participants. This exacerbated the other process gaps. Without such a focal point, participants found their own individual ways to develop proposals and get them reviewed. Several officials agreed that a strong focal point would be helpful. For example, DOD attorneys stated that the laboratories or someone acting as their focal point needed to define the issues they wanted to resolve. The attorneys noted that a focal point could have more successfully drawn upon their expertise and experience with addressing legal challenges in other innovative programs (e.g., demonstration projects). Some pilot program participants also agreed that a strong focal point was needed, but they had some concerns regarding the amount of influence and authority he or she should have. According to officials at DOD laboratories, test centers, and headquarters, human capital initiatives were generally in conflict with title 5 of the United States Code. Title 5 provides the framework for standard and equitable personnel practices across the federal government and is the current foundation for management of the DOD civilian workforce. Over time, the Office of Personnel Management has added implementing rules and regulations to the framework. Proposed human capital initiatives often sought relief from these provisions, for example, requests for the authority to hire directly or offer voluntary retirement incentives. However, after reviewing the legislation, the DOD Office of General Counsel advised that the 1999 and 2000 legislation did not provide the authority to waive personnel rules based on title 5 provisions. Rather, the office advised that the pilot programs' authorities allow only for changes that could already be accomplished under existing DOD regulations. In other words, the pilot programs did not provide any new or additional authority to waive existing personnel rules and regulations grounded in title 5. Consequently, absent statutory authority beyond that provided by the pilot programs, human capital proposals in conflict with title 5 and its implementing rules and regulations could not be implemented. Many initiatives fell into this category. The 2003 pilot program faces several implementation challenges. First, as of May 2003, DOD had not addressed implementation problems. Thus, proposals made via the 2003 pilot program will face the same obstacles as previous proposals. Second, human capital initiatives will continue to face title 5 challenges. Like the earlier legislation, the 2003 legislation does not provide DOD any new authority. Hence, initiatives proposed under the 2003 pilot program will encounter the same statutory restrictions as previous initiatives. P&R officials believe that, if implemented, NSPS will provide the flexibility to make necessary human capital changes, thereby eliminating the need for the pilot programs in this area.
However, NSPS has not yet been enacted, and if enacted, it will still require an implementation process. Finally, laboratories and test centers may be reluctant to participate in the new pilot program. Many participants in the earlier pilots told us they were discouraged by their experience and consequently unwilling to repeat it. Some expressed frustration with the lack of guidance and feedback on their proposals; others questioned whether management was really committed to the pilot program. Even the few participants whose proposals were approved were wary of expending additional resources on another pilot program. While DOD appears to recognize a need to address human capital and business operations issues specific to laboratories and test centers, it has not effectively managed the pilot programs. If DOD intends to use the pilot programs to address laboratory and test center issues, it will have to address the factors—both process and statutory—that blunted previous proposals made through the pilot programs. The small volume of approved proposals, coupled with DOD's not providing status reports required by the Congress, has left the Congress uninformed about what objectives DOD would like to achieve with the laboratories and test centers, how it plans to achieve those objectives, and what vehicles it plans to use. This information will be important to the success of any future actions. We recommend that by March 31, 2004, the Secretary of Defense inform the Congress of DOD's objectives regarding human capital and business operations in the laboratories and test centers, how it plans to meet these objectives, and what vehicles it will use to meet them. We also recommend that by March 31, 2004, the Secretary of Defense develop a process for proposing, evaluating, and implementing human capital, business, and partnership initiatives for the laboratories and test centers, regardless of whether they are pursued under the pilot authority or some other vehicle. Such a process should include instructions on proposal requirements, such as linking proposals to overall goals and measurable objectives and providing a business case, and should specify procedures for submitting and reviewing proposals and for providing feedback on proposal quality and scope. Finally, we recommend that the Secretary of Defense designate a strong focal point to receive, evaluate, and prioritize all proposals and to work with laboratory and test center directors, legal counsel, and personnel and other specialists to develop sound business cases and strategies to obtain needed changes. In written comments on a draft of this report, DOD states that it does not concur with our recommendations because it has already taken actions that in effect implement them. While the actions DOD cites are important to implementing our recommendations, they are not sufficiently specific to address the problems identified in our report. DOD's written comments are contained in appendix II. Regarding our first recommendation—that DOD inform the Congress of its human capital and business objectives for the laboratories and test centers and the strategies it will employ to meet them—DOD did not concur. DOD discusses various high-level, agencywide initiatives it has taken to address human capital and business issues in general and states that the Congress has been made aware of these initiatives, obviating the need for additional reporting. We continue to believe that additional reporting is necessary.
We recognize that the general initiatives DOD discusses may provide ways of helping the laboratories and test centers; however, to be effective, they must be made specific, that is, developed into targeted strategies and plans that address the particular problems the laboratories and test centers face. DOD has not provided the Congress with sufficient details on how the general initiatives will be used to address laboratories' and test centers' objectives and problems. Regarding our second recommendation—that DOD develop a process for proposing, evaluating, and implementing human capital and business-like practices initiatives for the laboratories and test centers—DOD did not concur. DOD states that it has already introduced new agencywide management processes—the Business Initiative Council and the submission of the NSPS proposal to the Congress—to address human capital and business issues in general. However, DOD has not detailed how these general initiatives will apply to the laboratories and test centers or address our process concerns. For example, while the Business Initiative Council may have an effective process for proposing, evaluating, and implementing laboratory and test center business-like practices initiatives, DOD has not provided sufficient information for us to make such a determination. We also recognize that NSPS may address some of the human capital problems faced by the laboratories and test centers, but this system is still under consideration by the Congress. Until it becomes law, we believe it is premature to cite it as an effective management tool. With regard to our third recommendation—that DOD designate a strong focal point to work with the laboratories and test centers to develop, evaluate, prioritize, and coordinate proposed initiatives—DOD did not concur. DOD states that the recently created position of Undersecretary for Laboratories and Basic Sciences has oversight responsibility for all laboratory initiatives and that it is establishing a new Defense Test Resources Management Center that will oversee the test centers. DOD asserts that these two organizations will perform as focal points. However, DOD has not detailed how these organizations will fulfill this role and work with the laboratories and test centers to overcome the many barriers noted in our report. During our review, we met with officials from the following organizations in the Office of the Secretary of Defense: the Director, Defense Research and Engineering; the Director, Operational Test and Evaluation; the General Counsel; and the Deputy Undersecretary of Defense for Personnel and Readiness. We also met with officials from the Army Research Laboratory, Aberdeen Test Center, Army Medical Research and Materiel Command, Naval Research Laboratory, Naval Undersea Warfare Center, Air Force Research Laboratory, Air Force Research Laboratory's Space Vehicles Directorate, and 46th Test Wing. In addition, we discussed pilot program issues with each participating laboratory or center. To determine the initiatives proposed to date and their status, we obtained records from OSD and service officials. From these records and from discussions with each participant, we compiled a listing of initiatives proposed by each participating laboratory and test center. We verified the listing and the current status of each initiative with cognizant service officials.
To determine what obstacles inhibited DOD's implementation of the pilot programs, we obtained documentation and data from pilot program participants as well as from OSD officials. We also discussed statutory obstacles with officials from DOD's Office of General Counsel and the Office of the Undersecretary of Defense for Personnel and Readiness. We discussed management and procedural obstacles with officials from the offices of the Director, Operational Test and Evaluation, and the Director, Defense Research and Engineering. In addition, we discussed all obstacles with the participating laboratories and test centers. The problems facing the laboratories and test centers have been documented by many organizations, panels, and commissions. We did not independently verify these problems or the findings and conclusions of these entities. We conducted our review from July 2002 to April 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Major contributors to this report were Catherine Baltzell, Arthur Cobb, Christopher Durbin, Rae Ann Sapp, Sylvia Schatz, and Katrina Taylor. If you have any questions regarding this report, please call me at (202) 512-4841.
In fiscal years 1999, 2000, and 2003, the Congress authorized pilot programs to help the Department of Defense (DOD) laboratories and test centers explore innovative business partnerships and human capital strategies. Congressional concerns about DOD's implementation of the pilot programs have been growing. The Congress mandated that GAO review pilot program implementation. GAO (1) identified the pilot initiatives proposed and their current status, (2) examined factors that affected implementation, and (3) assessed implementation challenges the 2003 pilot program faces. The 1999 and 2000 pilot programs have not worked as intended. Since their inception, 178 initiatives have been proposed by the participating laboratories and test centers but only 4--or 2 percent--were implemented under the pilot programs. Participants proposed initiatives covering a variety of areas, including business-like practices, partnerships, and human capital innovations. The pilot programs were not effective because DOD lacked an effective implementation process and proposed human capital initiatives were not consistent with statutory provisions. First, DOD did not provide standardized guidance on proposal requirements, coordinate proposals, or clarify decision-making authority for proposal review and approval. Furthermore, DOD did not designate a strong focal point to provide assistance and advice to participants and advocate process improvements. The lack of a strong focal point exacerbated other process gaps. Second, DOD attorneys advised that the pilot programs did not provide authority to make most of the proposed human capital changes. Implementation of the new 2003 pilot program faces several challenges. First, DOD has not addressed implementation problems. For example, clear guidance is still lacking and decision-making authority is still unclear. Second, the 2003 pilot program provides no change in authority concerning human capital initiatives. Finally, laboratories and test centers may be reluctant to participate. Many participants in the earlier pilots told us they were discouraged by their experience and consequently unwilling to repeat it.
Credit cards are widely used in the United States. Seventy-eight percent of consumers had a credit card in 2008. As of 2009, credit cardholders had more than $800 billion in outstanding debt on roughly 600 million credit cards, according to Federal Reserve estimates. More than 6,000 depository institutions issued credit cards as of 2009. However, as seen in table 1, the great majority of credit cards are concentrated among nine issuers. These issuers accounted for approximately 85 percent of outstanding general purpose credit card balances nationwide in 2010. As of 2010, each of these nine issuers offered debt protection products. Debt protection products suspend or cancel all or part of a consumer's obligation to repay an outstanding credit card balance when a qualifying event occurs. These events may vary across products but generally include disability or death of the cardholder and may include events such as unemployment. Depending on the product's terms and conditions, a qualifying event may trigger cancellation of the total balance or the minimum monthly payment, or it may simply suspend the minimum monthly payment for a period of time. Debt protection products are banking products that are sold directly by credit card issuers to consumers who hold a credit card with them. The issuer charges fees for the debt protection product, typically on a monthly basis. Consumers may buy the product when they apply for a new credit card or can add it to an existing credit card account. New credit card applications often contain a box that consumers can initial or check if they want debt protection, and existing account holders can typically purchase the product by telephone, mail, or through the issuer's Web site. Because most major credit card issuers are structured as depository institutions, federal banking regulators oversee their activities, including those related to debt protection products. As the national bank regulator, OCC oversees seven of the nine largest issuers offering debt protection products—Citibank (South Dakota), N.A.; Bank of America; Chase Bank USA, N.A.; Capital One; HSBC; Wells Fargo Bank, N.A.; and U.S. Bancorp. FDIC oversees Discover, which operates as a state-chartered bank. American Express has two bank subsidiaries that offer debt protection products to consumers—American Express Centurion Bank, which is a state-chartered bank and is therefore regulated by FDIC, and American Express Bank, FSB, which is a federal savings association and is therefore regulated by OTS. Public information about the debt protection product industry is relatively scarce. Credit card issuers are not required to report information about these products in Call Reports and Thrift Financial Reports, which serve as the primary publicly available sources of financial information regarding the status of the U.S. banking system. Credit insurance is insurance coverage sold in connection with a loan, credit agreement, or credit card account. Credit insurance products typically bundle together several individual forms of credit insurance, such as credit life, credit disability, and credit involuntary unemployment insurance. Unlike debt protection products, which are two-party arrangements between a credit card issuer and a consumer, credit insurance is a three-party arrangement involving an insurance company, a credit card issuer, and a consumer.
An insurance company generally sells credit insurance as a group policy to the credit card company, which in turn offers the product to its cardholders. A cardholder who enrolls in credit insurance typically receives a certificate of insurance, which provides evidence of coverage, rather than an insurance policy. The consumer typically pays monthly premiums to the insurance company, and if a covered event occurs, the insurance company takes over the consumer's credit card payments for a specific period of time, or if the cardholder dies, pays part or all of the outstanding credit card balance. Like other forms of insurance, credit insurance is primarily overseen by state insurance regulators, and regulations governing it may differ across states. In recent years, debt protection products sold in conjunction with credit cards have largely displaced credit card credit insurance. The two products tend to offer consumers the same benefits, however: canceling or taking over credit card payments during qualifying events such as disability. Ten years ago, the largest credit card issuers rarely offered debt protection products and instead offered credit insurance, but today most issuers sell primarily debt protection products and rarely offer credit insurance to new customers. In 2009, cardholders paid approximately $2.4 billion in fees for debt protection products, according to data from the nine largest credit card issuers. The products were associated with approximately 24 million credit card accounts with an estimated $42 billion in outstanding debt. Overall, about 7 percent of the nine issuers' credit card accounts were covered by debt protection products. In 2009, consumers bought approximately 6 million new debt protection products, 73 percent of them for existing credit card accounts and 27 percent for newly opened accounts. The three insurance companies that provided us with data on credit insurance represented about 30 percent of the open-end credit insurance market and maintained approximately 2.7 million credit card credit insurance packages in 2009. These three companies reported to us that they sold 44,114 new packages in 2009—about 1 percent of the total. All other packages were originally sold in earlier years. The three insurers told us that their earned premiums for credit insurance had declined from $757 million to $186 million, or by 75 percent, between 2001 and 2009. Credit card issuers have shifted from credit insurance to debt protection products largely as a result of differences in the way the two products are regulated. Federal regulations for debt protection products apply nationwide, while state laws governing credit insurance can differ across states. According to representatives of credit card issuers, the credit insurance industry, some consumer organizations, and two government regulatory agencies, federal regulation allows for the following: Uniform regulation and marketing efficiency. Federal regulations for debt protection products apply nationwide, while credit insurance, like other insurance products, can be subject to different state regulatory regimes. As a result, one debt protection product can be offered nationwide, which issuers' representatives told us allows the issuers to offer uniform pricing, terms, and conditions. In addition, the representatives told us that issuers can offer their products to consumers through multiple marketing channels more efficiently than they could for credit insurance. Flexibility.
Issuers can generally structure debt protection products more easily, consistently, and quickly than they can state-regulated credit insurance, and can offer a broader array of products. Issuer representatives cited the desire for more flexible products that meet cardholder needs as a reason for their decision to shift from credit insurance to debt protection products. Potentially higher earnings. Representatives of regulators and one consumer group noted that debt protection products offer more potential for earnings than credit insurance. This may be due in part to the absence of the price controls that states generally impose on credit insurance rates and to the nonuniformity of state regulation. In addition, because credit card issuers sell their debt protection products directly to cardholders, they do not have to share product earnings with an insurance company and can retain more of the fees. Debt protection products cancel or suspend some or all of a cardholder's debt after the occurrence of certain qualifying events (see fig. 1). All of the nine largest issuers' primary debt protection products include a cancellation benefit, and four of these products also include a suspension benefit: Cancellation benefits forgive some or all of a cardholder's debt. These benefits may cancel the total credit card balance if the cardholder dies or may cancel the minimum monthly payment for a specific time during a period of unemployment, for example. As a result, debt cancellation benefits reduce the cardholder's account balance by the amount of debt being canceled. Suspension benefits allow a cardholder to skip the minimum monthly payment without penalty and without accruing interest for a specified time period. Debt suspension does not reduce the cardholder's account balance. These cancellation and suspension benefits are triggered by certain events. Benefits vary among products, with most debt protection products covering loss of life, disability, involuntary unemployment, and leave of absence from employment. Some products also cover other events, such as the birth or adoption of a child, marriage, relocation, divorce, hospitalization, call to active U.S. military duty, retirement, loss of a spouse or child, or natural disaster. At least one issuer also includes an emergency payment benefit, which cancels the minimum monthly payment once per year for any reason. Another issuer includes a benefit that allows cardholders to suspend one monthly payment per year in months that include specific federal holidays. Each product offered by the nine issuers covers a different number of events, ranging from 4 to 21. Some products allow benefits to be triggered by events affecting individuals other than the primary cardholder, such as the cardholder's spouse or domestic partner, other authorized users of the card, or the highest wage earner in the cardholder's household. For example, benefits could be triggered by the involuntary unemployment of the cardholder's spouse. Debt protection product fees are generally charged monthly and are based on the cardholder's outstanding balance. Fees for the nine largest credit card issuers' debt protection products range from $0.85 to $1.35 per month for every $100 of the outstanding balance. For example, if the product fee was $0.90 per $100 of the outstanding balance, a cardholder with an outstanding balance of $500 in a given month would pay $4.50 ($0.90 x $500/$100) for the debt protection product that month.
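To make the fee formula concrete, the following minimal sketch, written in Python for illustration, computes one month's charge from a balance and a per-$100 rate. The function name and the sample balances are hypothetical choices of ours; this is not any issuer's actual billing code, and only the $0.90 example and the $0.85 to $1.35 rate range come from the issuer data described above.

```python
def monthly_debt_protection_fee(balance, rate_per_100):
    """Return one billing cycle's fee: the rate applied per $100 of balance."""
    if balance <= 0:
        return 0.0  # accounts with a zero balance are not charged a fee
    return round(rate_per_100 * balance / 100, 2)

# Reproduces the report's example: $0.90 per $100 on a $500 balance is $4.50.
print(monthly_debt_protection_fee(500, 0.90))  # 4.5

# Hypothetical balances at the top reported rate of $1.35 per $100,
# showing how the fee tracks the balance from month to month.
for balance in (0, 250, 1000, 2500):
    print(balance, monthly_debt_protection_fee(balance, 1.35))
```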
Because the fee depends on the card balance, the fee for the product can vary from month to month (see fig. 2). The debt protection product fee is charged whether or not the cardholder pays the card balance in full, but accounts with a zero balance are not charged a fee. As seen in figure 3, debt protection product fees appear as itemized charges on monthly credit card statements and are added to the new balance due each month. Debt protection product fees are typically identified in the account statement using the issuer's branded product name in a transaction line item listed in the section labeled "fees." The amount of the fee is one component of the "fees/interest charges" category that appears in a credit card statement. Cardholders who experience a triggering event can request benefits by informing the issuer and submitting any necessary information or documentation. For example, cardholders experiencing involuntary unemployment may be required to submit evidence that they are registered for state unemployment benefits. According to data from the nine largest issuers, approximately 70 percent of all benefit requests were approved in 2009, while about 24 percent were denied. More than half of these denials occurred because the cardholder did not provide adequate documentation of the triggering event. The remaining requests were still pending at the end of 2009. Issuers sometimes contract with a third-party administrator to manage their debt protection programs, and in these cases the administrators interact with cardholders to approve and process benefits. The terms and conditions of debt protection products include various eligibility requirements and may include certain exclusions or restrictions, which may differ based on the triggering event. For example, some products restrict hospitalization or disability benefits for customers with preexisting health conditions. They may also exclude from unemployment coverage cardholders who are employed part time or seasonally. None of the debt protection products that we reviewed had maximum age limits. A few debt protection products require a general waiting period, such as 30 days after enrollment, before customers can request any type of benefit. Some triggering events may also have specific waiting periods—for example, a cardholder may need to be unemployed for 30 days before applying for an unemployment benefit, although the benefit may be applied retroactively. Debt protection products typically allow one benefit per billing period, may limit the number of triggering events per year, and may impose waiting times between benefits for similar events. Some debt protection products that we reviewed placed a cap on the total dollar amount cardholders can receive per benefit—from $500 to $25,000. For example, three of the nine largest issuers limited their loss-of-life balance cancellation benefit to $10,000 and three limited it to $25,000; the remaining three had no limit. The products may also place caps on the duration of the benefit. For example, one issuer's product suspends payments for up to 24 billing periods for involuntary unemployment and temporary disability and for up to 1 billing period for other events, such as marriage. Three issuers restricted how much cardholders could charge and the type of transactions they could make during benefit periods, but the other six large issuers did not.
For example, one issuer limited available credit to $1,500 and prohibited cash advances and balance transfers when cardholders were receiving debt suspension benefits. Credit card issuers market debt protection products to individuals applying for new credit cards, as well as to existing cardholders. According to representatives of the largest credit card issuers and our analysis of their marketing materials, issuers generally do not target specific demographic groups when marketing these products but advertise them broadly to all new and existing cardholders. Issuers indicated that they sometimes focus marketing efforts on cardholders with certain characteristics that might make them more likely to enroll in the product, such as consumers who routinely carry a balance. The characteristics of cardholders who enrolled in debt protection products in 2009 were similar to those of cardholders in general, according to issuer representatives. Credit card issuers promote debt protection products in a variety of ways, according to issuer representatives and aggregate data provided to us by the nine largest issuers. Customer service representatives responding to inquiries or requests via issuers' toll-free telephone numbers often also promote ancillary credit card products, and such calls accounted for nearly half of the nine largest issuers' debt protection product sales in 2009. Most issuers also market debt protection products at bank branches, through telemarketing, and via direct mail, methods that collectively accounted for more than 40 percent of product enrollments that year. Telemarketing calls can be conducted by the issuers themselves or by third-party contractors. Mail marketing can include mailings aimed solely at marketing debt protection products or promotional inserts included with cardholders' statements or new credit cards. Internet marketing accounted for another 4 percent of product sales in 2009, according to issuer data. Our review of marketing materials found that they typically highlighted the products' potential to protect a cardholder's credit rating and provide relief during life-changing events. Some issuers also offered a gift card or cash-back certificate to customers as an incentive for enrolling in debt protection products. Purchasers receive a packet of product information, known as a welcome or fulfillment kit, which usually includes a letter to the consumer, the product's terms and conditions, instructions on how to request benefits, and cancellation information stating that cardholders can cancel the debt protection product at any time. New enrollees have at least 30 days to review the product information mailed to them and cancel for a full refund. Credit card credit insurance and debt protection products are largely similar from the perspective of the consumer, although, as discussed later in this report, the two products are regulated differently. Both products cover similar events, offer similar benefits, and assess fees in a similar manner (monthly, based on the account balance). The three insurance companies that provided us with data reported that the majority of the credit insurance packages in 2009 included credit life, disability, and involuntary unemployment insurance coverage (94, 91, and 95 percent, respectively), and 17 percent also covered credit leave of absence. With credit insurance, the insurance company makes the cardholder's monthly payments or pays off the entire balance.
Premiums for credit insurance, like fees for debt protection products, are assessed monthly based on the outstanding balance and appear as a separate line item on cardholders' monthly statements. Consumers covered by either debt protection products or credit insurance must meet certain eligibility requirements and may be excluded from coverage under specific conditions. As with debt protection products, credit insurance products typically provide a review period that allows the cardholder to cancel with a full refund, and cardholders can cancel at any time. Further, the processes for making a claim for credit insurance and for requesting a benefit from a debt protection product are generally similar. Credit insurance and debt protection products do differ in some respects. First, debt protection products are nationwide products with uniform pricing, terms, and conditions, while credit insurance products vary across states because of differences in insurance regulation among states. Second, credit insurance does not cover certain events that debt protection products may cover, such as marriage, relocation, or birth of a child. Third, credit insurance products include only debt cancellation, whereas some debt protection products include both debt cancellation and suspension. Fourth, credit insurers may restrict coverage for cardholders who are over a certain age, in accordance with state regulations, whereas few, if any, debt protection products have age limitations. Finally, the disclosures for the two types of products differ as a result of differing regulatory requirements. Although no federal law governs debt protection products specifically, they are subject to federal regulation and are primarily overseen by the federal banking regulators. In contrast, credit insurance, like most insurance products, is generally regulated at the state level and is primarily overseen by state regulators. The generally applicable federal law that pertains to debt protection products is the Truth in Lending Act (TILA), which covers the extension of consumer credit. The Federal Reserve, under TILA, is responsible for prescribing regulations relating to the disclosure of terms and conditions of consumer credit, including those applicable to credit cards and ancillary credit card products such as debt protection products. The regulation that implements TILA's requirements is the Federal Reserve's Regulation Z, several provisions of which apply to debt protection products. Regulation Z includes disclosure requirements for several types of loan products, including credit card debt protection products, and all creditors must comply with these requirements. The five federal banking regulators assess the institutions they supervise for compliance with Regulation Z's disclosure requirements. According to Federal Reserve staff, Regulation Z focuses on ensuring that debt protection product disclosures are clear and understandable to consumers. For voluntary debt protection products, creditors must disclose in writing that the protection is optional; disclose in writing the fee for the initial term of coverage, and thereafter on the periodic statement; explain, if the product includes debt suspension benefits, that interest will continue to accrue during the suspension period; and obtain the consumer's initials or signature on a written affirmative request for the product after providing the required disclosures.
The regulation does not require that fees for voluntary debt protection products be included with the credit card application or account-opening documents, although some issuers do include fee information within these documents. The regulation permits telephone sales of credit card debt protection products. Oral disclosures are permitted for telephone purchases, but a written disclosure must be mailed within 3 business days after the product is purchased. For telephone sales, the creditor must maintain evidence that the consumer affirmatively elected to purchase the product after the disclosures were provided orally, so credit card issuers typically record telephone purchases. In September 2010, the Federal Reserve proposed several revisions to Regulation Z that it said were intended to improve disclosures and help consumers decide whether they can afford a debt protection product. In February 2011, the Federal Reserve announced that it does not expect to finalize the pending rule prior to the transfer of authority for these rulemakings to the new Bureau of Consumer Financial Protection. Under the proposed rule, all disclosures would have to be in 10-point or larger font, grouped together under appropriate headings, and, in some cases, presented in question-and-answer format. The proposed rule also includes model forms and samples for the new disclosure requirements. Additionally, creditors offering voluntary products would be required to determine that consumers met any applicable age or employment criteria before enrolling them in the products. For example, a creditor would not be permitted to enroll a jobless consumer for protection that requires employment as a condition of coverage. Creditors offering voluntary products would also have to disclose the maximum fees charged for the product, although the proposed regulation would not otherwise regulate such fees. For example, according to the model forms, the creditor would have to state as follows: "This product will cost up to (maximum amount per month) if you borrow the entire credit limit. The cost depends on your balance and interest rate." OCC has a rule specific to debt protection products that applies to all national banks. The rule establishes standards governing debt protection products and seeks to ensure appropriate consumer protections. It includes disclosure requirements that supplement Regulation Z's requirements. These include mandatory "short-form" disclosures—which may be provided orally at the time of solicitation, including telephone sales—and "long-form" disclosures, which are generally provided in writing before the consumer completes the purchase. The short-form disclosures must state that consumers will receive additional information before they pay for the product and that certain eligibility requirements, conditions, and exclusions may apply. The long-form disclosures must provide further information regarding these requirements, conditions, and exclusions and state, among other things, that consumers have the right to cancel the product. OCC's rule includes sample short- and long-form disclosures. OCC's rule also includes a ban on misleading advertisements or practices. In addition, it prohibits credit card issuers from modifying product features unilaterally unless the modification benefits the consumer without additional charge, or the consumer is allowed to cancel the product without a penalty.
The rule also prohibits national banks from tying the approval of an extension of credit to the consumer's purchase of a debt protection product. That is, a national bank cannot make its approval of a credit card application contingent upon a consumer's purchase of a debt protection product. Other federal laws and regulations may apply to debt protection products. The Federal Trade Commission Act prohibits unfair or deceptive acts or practices—for example, engaging in deceptive marketing practices. The act applies to financial institutions, and federal banking regulators have the authority to issue and enforce additional rules of their own. Under title X of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), the primary rulemaking authority and some enforcement authority will shift from the federal banking regulators to the newly created Bureau of Consumer Financial Protection. Some of these authorities are newly created by the Dodd-Frank Act, while others are to be transferred from other federal regulators to the new bureau. For example, the Dodd-Frank Act transfers to the bureau the rulemaking authority for TILA. Additionally, the new bureau will be the primary rulemaker, supervisor, and enforcer of consumer protection laws and regulations for depository institutions with more than $10 billion in assets. As a result, the Bureau of Consumer Financial Protection will have a role in overseeing credit cards and their ancillary products, including debt protection products. The date for transferring consumer protection functions to the new bureau is July 21, 2011. Federal banking regulators told us that their examinations could include, as necessary, a review of an institution's debt protection products. The regulators said that such a review could be triggered by, among other things, consumer complaints. In addition, OCC's examination procedures note that OCC examiners should review debt protection products if the volume of such products is significant or has grown substantially. According to the federal banking regulators, between 2006 and 2010, 24 bank examinations included specific reviews of institutions' credit card debt protection products—23 by OCC and 1 by the Federal Reserve. OCC is the only federal banking regulator that has supplemental examination procedures specific to debt protection products, and these procedures focus on compliance with OCC's rule about these products. The other federal banking regulators' examinations include procedures for a review of the products under Regulation Z and the Federal Trade Commission Act. The primary focus of federal bank examiners' reviews of debt protection products is ensuring that the products comply with disclosure requirements and that no unfair or deceptive acts or practices are being used to offer or market them. OCC's supplemental examination procedures direct examiners to also review the products' features and terms and conditions and the accuracy of the issuers' marketing materials. For example, OCC examinations may review telemarketing scripts to determine whether they are fair, objective, and free of undue pressure. The examinations also seek to determine whether the institution may be engaging in prohibited practices, such as requiring consumers to purchase the products. Further, OCC examiners review the adequacy of issuers' internal policies and processes for offering and administering the debt protection products to consumers.
Examiners may sample canceled accounts to ensure that banks correctly follow their own internal policies in refunding fees to consumers. In addition, examiners look at any potential impact of the products on institutional safety and soundness, including whether issuers maintain adequate reserves to cover potential losses associated with benefit payouts. The examiners assess the accounting and profitability of debt protection products relative to the banks' total income to evaluate the products' income sustainability in view of program volume, number of benefit requests, and cancellation rates. Federal regulators' reviews have generally not addressed the reasonableness of the pricing of debt protection products, although we did identify two cases in which regulators commented on the price. In one case, the regulator noted that debt protection product fees appeared high and recommended that the bank continue reviewing the appropriateness of the fees it charged. Because no regulatory guidance existed on the appropriateness of prices, no formal violations could be cited. In the second case, the regulator noted that the debt protection products' payout rate to consumers was low compared with the fees collected, but no formal violation was cited. Banking regulators noted to us that no laws or regulations set the price of debt protection products or govern the costs relative to the benefits for these products. The regulators said that for this reason their examinations of these products focused on compliance with applicable laws and regulations, such as those related to disclosure requirements, and did not address the costs and benefits of the products from a consumer's perspective. Our review of 24 completed examinations of debt protection products confirmed that the products' price was generally considered in relation to safety and soundness issues. The Dodd-Frank Act requires that the new Bureau of Consumer Financial Protection's disclosure rules contemplate consumer awareness and understanding of, and the risks, costs, and benefits of, financial products and services. Also under the Dodd-Frank Act, the bureau may find a practice to be unfair under the conditions set forth in the act. The increased popularity of debt protection products raises the importance of effective regulatory oversight of these products. As an insurance product, credit insurance that is offered in connection with credit cards is largely regulated under state insurance law, as shown in table 2. As with other types of insurance, state insurance regulators generally approve credit insurance products and premium rates and examine insurance companies' financial solvency and market conduct. Because state laws and regulations governing credit insurance differ, the products vary across states. Most states include chapters in their insurance codes devoted specifically to credit insurance. Many states have adopted the Consumer Credit Insurance Model Act and the Consumer Credit Insurance Model Regulation, which were initially adopted by NAIC in 1958 and 1973, respectively. Additionally, according to NAIC, state laws about disclosure requirements and laws prohibiting unfair or deceptive acts or practices concerning insurance apply to credit insurance. For example, many states have adopted some version of an "unfair trade practices act" that addresses marketing abuses involved in the sale of insurance products, including credit insurance. 
State insurance regulators may carry out examinations to investigate complaints or review insurance company practices, including credit insurance practices. As with other insurance products, attorneys general may take action in cases where insurance companies violate state laws and regulations regarding credit insurance. Although credit insurance is primarily regulated at the state level, federal laws and regulations also can apply. Creditors offering credit insurance must comply with applicable federal regulations, such as Regulation Z. In addition, the Federal Trade Commission Act's prohibition of unfair or deceptive acts or practices also can apply to credit insurance. Further, as part of their examination and oversight activities, federal banking regulators can review credit insurance products that an institution may offer in connection with credit cards. In contrast to requirements for debt protection products, state insurance regulations require a reasonable relationship between the premiums that consumers pay and the benefits they receive and govern the design and structure of the products. For instance, states set, by law or regulation, the premium rates that insurance companies can charge for credit insurance. Additionally, states can establish limits on components of premium rates, such as compensation that insurers may pay third parties, including credit card issuers, in exchange for services related to credit insurance. Further, some states establish a minimum loss ratio—that is, the ratio of benefits paid out to premiums collected. NAIC's Consumer Credit Insurance Model Regulation specifies that benefits provided must be reasonable in relation to the premiums charged and notes that the requirement is met if the loss ratio is 60 percent or more. Because states set rates and price components differently and establish different loss ratios, the premiums consumers pay for credit insurance vary depending on their state of residence. Additionally, NAIC's model regulation states that companies offering credit insurance are required to submit "experience reports" documenting written and earned premiums. In contrast, federal agencies do not routinely require credit card issuers to report detailed information about debt protection products. In regulating credit insurance, some states take into account the potential for a concept that has been referred to as "reverse competition." With credit insurance, the credit card issuer, rather than the consumer, selects the insurance company providing the insurance. The credit card company receives a commission from the insurance company that may be based in part on the premiums that consumers pay. According to representatives from NAIC, the New York State Insurance Department, and three consumer organizations, credit card issuers may therefore have an incentive to select insurance companies that charge consumers higher prices for credit insurance in order to earn larger commissions. Representatives of the credit insurance industry told us that they believe that the concept of "reverse competition" is speculative and is not a factor in a credit card issuer's selection of a carrier for credit insurance. Debt protection products and credit insurance may provide several advantages, including protection of cardholders' credit ratings and peace of mind. Few complaints have been reported about these products, although federal regulators have identified some areas of concern. 
However, fees for the products can be substantial in relation to the aggregate financial benefits consumers receive, and consumers may have trouble evaluating different products and deciding whether a product is best for them. Debt protection products and credit insurance may offer several advantages for cardholders seeking to manage the risk associated with credit card debt, according to credit card issuers, insurance companies, and some government agencies. The potential advantages of these products include the following: Credit rating protection. Missing credit card payments, making payments late, or otherwise becoming delinquent on credit card debt can damage consumers' credit ratings. Because debt protection products and credit insurance may cover payments that consumers might not otherwise make, these products can help cardholders avoid a negative impact on their credit ratings. Peace of mind. The product may provide cardholders with a sense of security and comfort because they know that the product can protect them or their next of kin in the event of certain hardships, disability, or death. Even if cardholders never experience a protected event, they may value the security and peace of mind the product can provide. Ease of purchase. Debt protection products are easy for consumers to purchase. For example, consumers can typically purchase them when applying for a new credit card, with no separate application process. Existing cardholders can readily enroll by telephone or via the issuer's Web site. In contrast, purchasing a traditional term life or disability insurance policy entails a more detailed application process, often including a medical examination for large amounts of coverage. Availability to most cardholders. Debt protection products are generally available to all consumers holding credit cards, according to industry representatives. Credit card issuers generally do not exclude consumers from purchasing these products based on their credit history, age, health, or other criteria. Coverage of events not available in other products. Many credit card-related debt protection and some credit insurance products cover events for which coverage is not available through traditional insurance products—for example, involuntary unemployment, hospitalization, military duty, and life events such as marriage, divorce, or birth or adoption of a child. A single debt protection or credit insurance product offers benefits for several events, while some traditional insurance products, such as life and disability, protect only against one type of event. Coverage for small amounts. Debt protection products and credit insurance cover a cardholder's credit card balance, no matter how small. Cardholder balances in 2009 averaged about $2,500, and many were much smaller. In contrast, term life insurance is often not available for coverage of less than $25,000. In addition, the amount a consumer pays in fees for debt protection products or premiums for credit insurance corresponds directly to the outstanding balance on the credit card account and adjusts with that balance. Representatives of a few credit card issuers provided us with the results of consumer feedback surveys, which the representatives said indicated that consumers appeared to be satisfied with these products. For example, one issuer told us that customer feedback surveys indicated a satisfaction rate for these products of more than 80 percent. This rate climbed to 90 percent for cardholders who had received a benefit. 
Another issuer said that, in commenting on their satisfaction with these products, consumers often cited the credit rating protection and peace of mind the products can provide. Federal agencies have received relatively few complaints related to debt protection products. As shown in table 3, FDIC, Federal Reserve, FTC, OCC, and OTS collectively received 245 consumer complaints related to credit card debt protection products in 2009. This figure represents approximately 1 complaint for every 100,000 of these products that consumers held and approximately 0.3 percent of the complaints about credit cards in general that the agencies received that year. Most of the complaints asserted either that the consumer had not knowingly enrolled in the product or that requests for benefits had been denied. Credit card issuers, which track consumer complaints they receive, also reported receiving relatively few complaints about debt protection products. According to the aggregated data we received from the nine largest issuers, in 2009 the issuers received 2,045 complaints about debt protection products out of the roughly 24 million accounts with these products. About 40 percent of these complaints were from customers who claimed they had not knowingly enrolled in the product and 29 percent related to denial of benefits; the remaining complaints related to a variety of other issues. The three insurance companies that provided us with data reported 361 complaints related to credit insurance for credit cards out of 2.7 million accounts with this type of insurance. Thirty-four percent of these complaints were classified as "affordability/does not want to pay fee," 14 percent as "claim unapproved," 7 percent as "customer stated/claimed they were unaware of product terms/conditions," and the remaining 45 percent of complaints were related to other issues. While consumer complaint data can be a useful tool for assessing the extent of problems, these data also have limitations because consumers may not always know how to report complaints, complaints may not always be properly recorded, and some complaints may not be valid. Federal banking regulators identified relatively few violations related to debt protection products in recent years, none of which resulted in a formal enforcement action. Among the 24 bank examinations conducted by federal banking regulators between 2006 and 2010 that included reviews of debt protection products, three formal violations involving two banks were reported. Two violations were related to inadequate disclosures. One involved a violation of the Federal Trade Commission Act's prohibition of unfair or deceptive acts or practices. In that case, rather than automatically refunding fees to consumers who canceled the product within their 30-day trial period, the issuer required the customer to request the refund—a practice that contradicted the process set forth in the long-form disclosures and that was deemed to be deceptive. In each of these three cases, the bank was required to take action to remediate these violations. Regulators also have taken some informal enforcement actions after identifying areas of concern related to credit card debt protection products. For example, a federal banking regulator noted that consumers complained that they were unaware they had purchased a debt protection product, and the bank's files did not always properly document consumers' authorization to purchase the product. 
The bank took action by changing its telemarketing scripts and training materials to ensure that consumers authorized the purchases. Our review of the 24 bank examinations that had addressed debt protection products did not find evidence that issuers engaged in predatory practices with regard to these products. While there is no universally accepted definition, the term "predatory" typically describes a range of practices, including deception, fraud, or manipulation. Predatory practices also often involve targeting particularly vulnerable demographic groups. As noted earlier, issuers market these products to all cardholders and did not target specific demographic groups. The fees that major credit card issuers charge for debt protection products can be substantial. Fees for the primary debt protection product of the nine largest issuers ranged from $0.85 to $1.35 per month for every $100 in outstanding balance, with a median fee of $0.89. With this median fee, a cardholder would pay, on an annual basis, more than 10 percent of his or her average monthly balance in fees for the product. According to the aggregated data we received from these issuers, the average monthly fee paid for a debt protection product in 2009 among cardholders with a nonzero balance was $16.49, and the median fee was $9.27. This translates into average fees of about $200 annually in 2009 for debt protection products. In general, credit unions charge significantly lower fees than banks for these products. CUNA Mutual, which administers debt protection products for many credit unions, provided us with aggregate data for 179 credit unions, which represent approximately 51 percent of the credit unions offering these products. These data show that the credit unions charged fees of between $0.30 and $0.67 per month per $100 in outstanding balance in 2010, with a median fee of $0.45—about half the median fee of $0.89 charged by the nine banks we reviewed. The credit union debt protection products were very similar, although not identical, to the products offered by banks, typically including coverage for loss of life and disability, often including coverage for involuntary unemployment, and sometimes including coverage for leave of absence. Several credit union industry representatives we spoke with said that because credit unions are nonprofit entities, their prices are not set at levels intended to maximize profits. Representatives of the large banks told us that credit unions' debt protection product prices were lower because credit unions have different business models, tax obligations, and customer bases than banks. In addition, they noted that banks' debt protection products may cover more events—such as marriage and hospitalization—and that the terms and conditions of the products might vary. We did not identify comprehensive data on the price of credit card credit insurance. One actuarial expert estimated that credit card credit insurance premiums averaged roughly $0.65 to $0.75 per $100 of the monthly outstanding balance for products that covered loss of life, disability, and involuntary unemployment, which is somewhat lower than the cost of debt protection products. However, this expert told us that comparing the costs of the two products could be problematic because the products were not fully comparable. For instance, the benefits offered could vary, with debt protection products typically covering a wider range of events than credit insurance. 
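Setting comparability aside, the unit fees themselves can be annualized with simple arithmetic. The short Python sketch below is purely illustrative and is not part of our audit methodology; the $0.70 credit insurance figure is simply our assumed midpoint of the actuarial expert's $0.65 to $0.75 range, and the function and variable names are ours.

    # Illustrative only: annualize a fee quoted in dollars per $100 of
    # monthly outstanding balance, assuming a constant balance all year.
    def annual_cost(unit_fee_per_100, balance):
        return unit_fee_per_100 * (balance / 100) * 12

    balance = 2500  # roughly the 2009 average cardholder balance cited earlier

    for label, fee in [("bank median (debt protection)", 0.89),
                       ("credit union median (debt protection)", 0.45),
                       ("credit insurance (assumed midpoint)", 0.70)]:
        cost = annual_cost(fee, balance)
        print(f"{label}: ${cost:,.2f} per year, "
              f"or {cost / balance:.1%} of the balance")

    # bank median (debt protection): $267.00 per year, or 10.7% of the balance
    # credit union median (debt protection): $135.00 per year, or 5.4% of the balance
    # credit insurance (assumed midpoint): $210.00 per year, or 8.4% of the balance

The 10.7 percent result for the bank median is consistent with the observation above that annual fees can exceed 10 percent of a cardholder's average monthly balance.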
In addition, "average" premium rates for credit card credit insurance can be misleading because prices can vary significantly state by state. In the aggregate, a relatively small proportion of the fees consumers pay for debt protection products is returned to them in tangible financial benefits. As seen in figure 4, in 2009 the nine largest issuers reported that they collected $2.4 billion in fees for debt protection products and provided back to consumers $518 million in monetary benefits. Thus, consumers received 21 cents in tangible financial benefits for every dollar paid in fees—that is, a payout ratio of 21 percent. These issuers reported that the administrative costs and reserves associated with these products were $574 million, accounting for 24 cents for every dollar in fees collected. The issuers reported that pretax earnings in 2009 for debt protection products totaled $1.3 billion, or 55 cents of every dollar in fees paid. An estimated 5.3 percent of cardholders with a debt protection product and a nonzero balance received a benefit in 2009. The average direct financial value of this benefit was $607. (These figures are restated in the illustrative calculation at the end of this discussion.) The direct monetary value to a cardholder who does receive a debt protection product benefit can be modest, for a number of reasons: Cancellation of minimum monthly payment. A credit card's minimum monthly payment is typically between 1 and 2 percent of the outstanding credit card balance. As a result, canceling the minimum payment on a $2,500 balance would save the cardholder only between $25 and $50. Further, the cardholder's remaining card balance continues to accrue interest during the benefit period. Suspension of minimum monthly payment. Allowing cardholders to skip payments can serve to protect their credit ratings and alleviate a cash flow crisis, but may have limited direct monetary value because it does not pay down any of the cardholder's balance. Suspension benefits waive accrual of interest during the benefit period, and the monetary value of the benefit in this case varies depending on the cardholder's balance and interest rate. Duration of benefits. Most benefits have time limitations. For example, benefits triggered by involuntary unemployment are usually limited to between 6 and 24 months, and benefits triggered by life events such as the birth or adoption of a child, or marriage, typically allow suspension or cancellation of between one and four minimum monthly payments. Two issuers' debt protection products cap the amount of debt that is canceled but do not cap the fee accordingly. For example, one product caps loss-of-life coverage at $10,000, but the fees charged for the product ($0.85 per $100 in outstanding monthly balance) are not similarly capped. As a result, a cardholder with a balance of $20,000 would pay the fee based on that amount even though only $10,000 would be canceled in the event of the cardholder's death. Four other issuers cap their fees according to the maximum amount of debt that is canceled by the product. Two issuers do not cap benefits or fees, and one issuer did not provide us with information on whether it caps fees. Further, the "bundling" that is characteristic of debt protection products—wrapping together in one product coverage for multiple events—can result in cardholders purchasing coverage that is not always applicable or valuable to them. For example, when a cardholder dies and leaves no net assets, the cardholder's heirs do not automatically become personally liable for any outstanding credit card debt. 
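The aggregate figures and the cap example above reduce to simple arithmetic. The Python sketch below is ours and purely illustrative; the dollar amounts are the rounded 2009 aggregates reported by the nine largest issuers, so the computed shares differ slightly from the whole-percentage figures in the text.

    # 2009 aggregates reported by the nine largest issuers (in billions,
    # rounded); shares therefore differ slightly from the rounded figures
    # cited in the text.
    fees = 2.4
    benefits = 0.518
    admin_and_reserves = 0.574
    pretax_earnings = 1.3

    for label, amount in [("benefits returned", benefits),
                          ("administration and reserves", admin_and_reserves),
                          ("pretax earnings", pretax_earnings)]:
        print(f"{label}: {amount / fees:.1%} of fees collected")
    # benefits returned: 21.6% of fees collected
    # administration and reserves: 23.9% of fees collected
    # pretax earnings: 54.2% of fees collected

    # The cap mismatch: one product charges $0.85 per $100 of monthly balance
    # but cancels at most $10,000 of debt on the cardholder's death.
    balance = 20_000
    monthly_fee = 0.85 * balance / 100  # fee is based on the full balance
    covered = min(balance, 10_000)      # only $10,000 of the balance is covered
    print(f"monthly fee on ${balance:,} balance: ${monthly_fee:.2f}; "
          f"debt canceled at death: ${covered:,}")
    # monthly fee on $20,000 balance: $170.00; debt canceled at death: $10,000

On these figures, a cardholder carrying the full $20,000 balance would pay $170 a month for coverage that cancels at most half of that balance.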
For a cardholder who leaves no net assets, then, the loss-of-life benefit of a debt protection product may be of limited value. Similarly, cardholders with certain disabilities would not benefit from the disability coverage bundled into a product because of the exclusions in most products' terms and conditions. Industry representatives told us they believe consumers can benefit from bundled products because they cover some events not covered by other products, are offered to a wider range of customers, and can be priced lower than unbundled products because of economies of scale and reduced administrative expenses. Credit card credit insurance typically has lower loss ratios—that is, the benefits paid out to consumers divided by the premiums collected—than more traditional forms of insurance, such as group life or individual disability insurance. In 2009, the aggregate loss ratios for credit card credit life and credit disability insurance were 61 percent and 24 percent, respectively, for the three insurance companies that provided us with data. In contrast, the 2009 aggregate loss ratios for group life insurance and individual disability insurance among U.S. insurance companies overall were 83 percent and 51 percent, according to SNL Financial and NAIC, respectively. However, there can be significant limitations to making such comparisons. First, credit insurance and other forms of insurance are not fully comparable products because their benefit amounts and coverage terms may differ significantly. Second, the cost of administering these products may vary, and may be proportionally higher for credit insurance, which typically covers relatively small loan amounts. Finally, loss ratios do not incorporate the nonquantifiable benefits of an insurance product, such as peace of mind. Representatives from consumer organizations and some government agencies have advised consumers to consider alternatives to purchasing debt protection products or credit insurance. They note that consumers considering purchasing these products might be better off using the amount they would pay in monthly fees toward paying down their credit card balance, particularly if they are accruing significant interest. Another alternative to paying a debt protection product fee can be to accumulate personal savings that could be used to make credit card payments in the event of job loss or other unforeseen circumstances. Some consumer and insurance experts also advise that, in general, insurance is intended to provide broad financial protection, while these products cover only a single credit card debt. NAIC representatives told us that term life insurance was a more cost-effective way to protect one's heirs because the cost per unit of coverage for term life insurance is generally much lower than the cost per unit of coverage for debt protection products or credit insurance. Moreover, a consumer can comparison shop among traditional insurance products to seek the best price. In contrast, a consumer holding a specific credit card can purchase a debt protection product only through the issuer of that credit card. Financial markets function best when consumers have information sufficient to understand and assess financial services and products. Yet consumer testing conducted by the Federal Reserve suggests that at least some consumers may be confused about some aspects of debt protection products. 
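Before turning to that testing, the loss-ratio comparison above can be restated as a short calculation. The Python sketch below is ours and purely illustrative; it applies the loss-ratio definition to the 2009 figures cited earlier and checks the credit insurance ratios against the 60 percent benchmark in NAIC's model regulation. That benchmark applies to credit insurance only; the comparison figures for traditional insurance and for debt protection products (which are banking rather than insurance products) are shown for context.

    # Loss ratio: benefits paid to consumers divided by premiums collected.
    NAIC_BENCHMARK = 0.60  # NAIC model regulation threshold for credit insurance

    ratios_2009 = {
        "credit life (3 insurers)": 0.61,
        "credit disability (3 insurers)": 0.24,
    }
    for product, ratio in ratios_2009.items():
        status = "meets" if ratio >= NAIC_BENCHMARK else "falls below"
        print(f"{product}: {ratio:.0%} ({status} the 60% benchmark)")
    # credit life (3 insurers): 61% (meets the 60% benchmark)
    # credit disability (3 insurers): 24% (falls below the 60% benchmark)

    # Comparison figures cited above (benchmark not applicable):
    # group life (all U.S. insurers): 83%
    # individual disability (top 125 insurers): 51%
    # debt protection payout ratio (nine largest issuers): ~21%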
In connection with proposed changes to Regulation Z in 2009, the Federal Reserve commissioned a private firm to conduct consumer testing of disclosures for credit insurance and debt protection products offered with home equity lines of credit. Consumers in these testing sessions could not correctly calculate the total monthly fee for the product when given the unit cost per $100 of monthly outstanding balance. In addition, participants were surprised to learn that, in some cases, they might not receive certain benefits because of eligibility requirements and exclusions. Federal Reserve officials told us that although this research was focused on home equity lines of credit, the findings were applicable to credit card-related debt protection and credit insurance products. However, industry representatives have expressed concerns with the small number of consumers polled and the applicability of the research to credit card products. Our analysis of bank examinations that included a review of debt protection products found that regulators did not identify widespread problems related to marketing and disclosure materials. However, we found two cases related to confusing or incomplete disclosures. In one case, the debt protection product marketing materials contained language consumers could wrongly interpret to mean that no fee would be charged when the previous month's balance was paid in full. In the second case, the bank's welcome kit information was not sufficiently understandable and the product terms and conditions did not include complete eligibility information. Further, the full terms and conditions of a debt protection product may be difficult for consumers to obtain and review prior to purchasing the product. We called customer service representatives of the nine largest issuers and requested that the issuers mail us copies of the full terms and conditions of their credit card debt protection products. The customer service representatives of seven of the nine issuers told us they would not provide the terms and conditions unless we enrolled in the product. Federal regulations do not require full terms and conditions to be provided prior to purchase in every type of sale. For instance, short-form (oral) disclosures for telephone sales must include the product's fee and the fact that the product is optional, with additional written disclosures to be mailed within 3 business days of purchase. Representatives of the credit card companies provided a variety of reasons for declining to provide the full terms and conditions to consumers until after consumers purchased the product. One issuer stated that providing the full terms and conditions was impractical for certain marketing channels, such as telephone calls, and another stated that it could be confusing to provide the information in advance because consumers might believe they had already purchased the product. Several issuers also noted to us that consumers could obtain the information on their Web sites. We reviewed the nine largest issuers' Web sites and found that seven included the full terms and conditions for their debt protection products, while the remaining two did not. In general, government agencies have a wide variety of consumer information that addresses credit insurance—which, as we have seen, is no longer widely offered with credit cards—but do not have such materials specifically for debt protection products. 
At least 10 state insurance regulators and NAIC have taken steps to educate consumers about credit insurance, through consumer alerts, press releases, reports, or Web sites. FTC and the federal banking regulators do not have consumer education materials specific to debt protection products, although the Federal Reserve stated in its proposed revisions to Regulation Z that it planned to dedicate a Web site for consumers about debt protection products. OCC staff told us that their consumer education efforts have not focused on debt protection products because these products have not been a source of significant complaints. The new Bureau of Consumer Financial Protection will have the authority to improve consumer financial literacy through its Office of Financial Education, which is charged with developing and implementing initiatives to educate and empower consumers to make better informed decisions about financial products. Without good information about debt protection products, it may be difficult for consumers to assess these products and determine whether they represent a good value. Debt protection products sold in conjunction with credit cards can provide consumers with certain advantages, most notably by potentially helping to protect a cardholder's credit rating and providing peace of mind. Regulators have reported relatively few consumer complaints and have cited few formal violations related to these products as a result of bank examinations. But, as we have seen, the fees associated with these products can be substantial, with the annual cost often exceeding 10 percent of the cardholder's average monthly balance. Moreover, among the nine largest issuers in 2009, consumers received 21 cents in tangible financial benefits for every dollar they paid in fees for these products. In recent years, the debt protection products sold in conjunction with credit cards have largely displaced credit insurance. In contrast to state regulation of credit insurance, which seeks to establish a reasonable relationship between the tangible financial costs and benefits of the product, federal regulation of debt protection products generally has not addressed the costs and benefits to consumers. The Dodd-Frank Act, however, transfers supervisory and enforcement authority for credit card debt protection products—among other consumer financial products and services—from the federal banking regulators to the new Bureau of Consumer Financial Protection. The bureau is specifically charged with considering consumer awareness and understanding of a product's or service's benefits and costs when making rules concerning disclosure. Applying these considerations to credit card debt protection products would be consistent with the bureau's mission and would help ensure that the products represent a fair value to consumers. Credit card debt protection products can be difficult for consumers to assess. Federal agencies offer relatively little consumer information specific to debt protection products, in part because they have received few complaints about them and as a result have not focused on these products in their educational efforts. The new Bureau of Consumer Financial Protection will also include an Office of Financial Education that is charged with improving consumers' financial literacy and providing them with information that will help them evaluate credit products. 
Consumers would benefit from information from the bureau to help them assess whether or not credit card debt protection products represented a good choice for them. We recommend that the Bureau of Consumer Financial Protection take the following two actions: factor into its oversight and regulation of credit card debt protection products, including its rulemaking and examination processes, a consideration of the financial benefits and costs to consumers, and incorporate in its consumer financial education efforts ways to improve consumers’ understanding of credit card debt protection products and their ability to assess whether or not the products represent a good choice for them. We provided a draft of this report to the Bureau of Consumer Financial Protection, FDIC, Federal Reserve, FTC, NAIC, NCUA, OCC, and OTS for comment and we incorporated technical comments received from these agencies as appropriate. In addition, the Bureau of Consumer Financial Protection provided a written response, which is reprinted in appendix II. The bureau said that it agreed with our recommendations and intended to implement them. The bureau noted the new authorities granted to it under the Dodd-Frank Act to oversee credit card debt protection products and educate and empower consumers. NCUA also provided a written response, which is reprinted in appendix III. NCUA said it believed our conclusions were reasonable and consistent with our findings. It noted that it was pleased that our report found that the few credit unions electing to offer credit card debt protection products generally did so at rates comparatively favorable to consumers. Finally, NAIC provided a written response, reprinted in appendix IV, in which it noted that state insurance regulators monitor the relationship of benefits provided and premiums charged to evaluate the suitability of credit insurance. We are sending copies of this report to the appropriate congressional committees, the five federal banking regulators, Bureau of Consumer Financial Protection, FTC, NAIC, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If your offices have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our reporting objectives were to review (1) the market for and key characteristics of debt protection products and credit insurance for credit cards, (2) federal and state regulation of these products, and (3) the advantages and disadvantages of these products for consumers. The focus of our report was on debt protection products and credit insurance for credit cards, although such products may be offered with other types of loans, including mortgages, auto loans, and home equity lines of credit. The report addresses only general purpose credit cards and not small business or private label cards used at specific retail stores. To address our first objective, we obtained data from and interviewed representatives of the nine largest credit card issuers as of December 31, 2010, as measured by outstanding balances on general purpose credit cards. These issuers, which represented about 85 percent of the general purpose credit card market, were American Express; Bank of America; Capital One; Citibank (South Dakota), N.A.; Discover; Chase Bank USA, N.A.; HSBC; U.S. 
Bancorp; and Wells Fargo Bank, N.A. We also obtained data from and interviewed representatives of three major insurance companies that offer credit card credit insurance—Aegon USA, Assurant Solutions, and Central States Indemnity—which were estimated to represent about 30 percent of the open-end credit insurance market in 2009, according to data from the National Association of Insurance Commissioners (NAIC). We also interviewed representatives of the firms representing the Debt Cancellation Coalition, a coalition of credit card issuers and insurance companies that offer and administer debt protection products. The Debt Cancellation Coalition engaged Argus Information and Advisory Services, LLC, a third-party analytics firm, to collect and aggregate proprietary data that we requested related to debt protection and credit insurance products from the nine credit card issuers and three credit insurance companies noted above. We developed separate questionnaires for credit card issuers that provide debt protection products and for insurance companies that provide credit insurance products. The questionnaires collected information on product sales, fees, financial benefits, administrative expenses, earnings, complaints, cancellations, incentives and commissions, the marketing of these products, and characteristics of consumers who purchase them. We received comments and technical corrections on drafts of the questionnaires from the companies that would be completing them, as well as representatives of the Debt Cancellation Coalition, Argus, and an actuarial firm, and incorporated changes as appropriate. The third-party provider, Argus, distributed the questionnaires we developed and asked the companies to submit their responses within approximately 3 weeks. We discussed with Argus and with representatives of the companies steps that were being taken to ensure that the data were accurate and complete, and Argus provided us with documentation of these steps. However, we did not have access to the issuers' or insurance companies' systems to fully assess the reliability of the data they provided or the systems themselves, which house the data. Therefore, we present these data in our report only as representations made to us by these companies. Additionally, we gathered information on the characteristics of the issuers' primary debt protection products by having three analysts independently review the products' terms and conditions. Any discrepancies among the three analysts about the products' features, terms, or conditions were identified, discussed, and resolved by referring to the source documents provided by the nine issuers. In some instances, we contacted issuers to confirm or clarify certain aspects of the products. In coordination with the Debt Cancellation Coalition, the nine issuers and three insurance companies also provided us with sample marketing materials, including telephone scripts used by their representatives to sell the products, product brochures, promotional e-mail messages, screen shots of product Web sites, direct mail materials sent to consumers, and new card applications that include the option to purchase the products. To address our second objective, we reviewed applicable federal laws and regulations related to debt protection products, including Regulation Z, which implements the Truth in Lending Act; Section 5 of the Federal Trade Commission Act; and a rule from the Office of the Comptroller of the Currency (OCC) that specifically addresses debt protection products. 
We reviewed the compliance examination handbooks and procedures of the five federal banking regulators—Federal Deposit Insurance Corporation (FDIC), Board of Governors of the Federal Reserve System (Federal Reserve), National Credit Union Administration (NCUA), OCC, and Office of Thrift Supervision (OTS)—and identified procedures and activities specific to debt protection products. We also obtained and reviewed the 24 compliance examination reports (representing 13 unique institutions) completed by the Federal Reserve and OCC between 2006 and 2010 that included a review of a supervised institution's debt protection products. In addition, we conducted interviews with the federal banking regulators and the Federal Trade Commission (FTC) on their roles in overseeing debt protection products. We also reviewed model laws and regulations developed by NAIC related to credit insurance, as well as summaries of credit insurance case law in various states. Additionally, we interviewed representatives of NAIC, the credit insurance companies, and two consumer organizations for their perspectives on state regulatory oversight of credit insurance. In addition, we obtained more detailed information on credit insurance regulation in three states—California, Maine, and New York. We selected these states because they represented a range of market sizes for open-end credit insurance, used different regulatory models, and had taken a proactive regulatory oversight approach to credit insurance, according to insurance experts and consumer advocates. For each of these states, we reviewed the state laws and regulations related to credit insurance and obtained information from representatives of the state's insurance department. To address our third objective, we reviewed reports and studies by consumer organizations and trade groups that addressed the advantages and disadvantages of debt protection products and credit insurance. We also addressed these issues in interviews with representatives of individual credit card and credit insurance companies, as well as with the American Bankers Association, Consumer Credit Industry Association, Debt Cancellation Coalition, Center for Economic Justice, and Consumer Federation of America. We also interviewed staff at and received materials from CreditRe and Hause Actuarial Solutions, Inc., independent actuarial firms with expertise in credit insurance and debt protection products, and spoke with representatives of the American Academy of Actuaries. In addition, we evaluated the terms and conditions of selected debt protection and credit insurance products and analyzed aggregated data that we received from the nine largest issuers and three credit insurance companies. For comparative purposes, we also gathered data on the debt protection products offered by credit unions. We obtained pricing data from CUNA Mutual, a provider of financial products and insurance to credit unions, for 179 credit unions offering debt protection products in 2009. According to CUNA Mutual, these 179 credit unions represented roughly 51 percent of the credit union debt protection market for credit cards (as measured by number of credit unions). We analyzed aggregated loss ratio data from three credit insurance companies and, for comparative purposes, reviewed comparable ratios for group life insurance and individual disability insurance in 2009. 
We obtained average group life insurance loss ratios from SNL Financial, a data source that collects, standardizes, and disseminates corporate, financial, and market data. The average group life insurance loss ratio data covered all companies offering group life insurance in the United States in 2009. We obtained average disability insurance loss ratios from NAIC, which covered the top 125 insurance companies that offered accident and health insurance in the United States in 2009. We determined that these data from SNL Financial and NAIC were sufficiently reliable for the purposes of our study. In addition, we collected consumer complaint data for calendar years 2005 through 2009 from FDIC’s Specialized Tracking and Reporting System, the Federal Reserve’s CAESAR consumer complaint database, FTC’s Consumer Sentinel database, OCC’s Remedy consumer complaint database, and OTS’s consumer complaint database. To assess the reliability of data from the regulators’ databases, we reviewed documentation about these databases and interviewed agency staff who managed them. We determined that these data were sufficiently reliable for use in our report. We also reviewed data on consumer complaints obtained, in aggregated form, from our questionnaires to the nine issuers and three insurance companies. We obtained information from each of the federal banking regulators on violations and enforcement actions related to debt protection products that resulted from examinations conducted between 2006 and 2010. We also gathered information on consumer education resources related to debt protection and credit insurance products by reviewing the Web sites of the 50 state insurance departments, five federal banking regulators, FTC, and nine credit card issuers. We conducted this performance audit from January 2010 through March 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Alicia Puente Cackley, (202) 512-8678 or cackleya@gao.gov. In addition to the contact named above, Jason Bromberg (Assistant Director), Emily Chalmers, Beth Ann Faraguna, Catherine Gelb, Jamila Jones Kennedy, Michelle Liberatore, Yesook Merrill, Marc Molino, Susan Offutt, Andrew Stavisky, and Paul Thompson made key contributions to this report.
Debt protection and credit insurance products can cancel or suspend part or all of a credit card debt under specific circumstances, such as loss of life, disability, or involuntary unemployment. In response to a mandate in the Credit Card Accountability Responsibility and Disclosure Act of 2009, this report reviews these products' market share and characteristics, federal and state oversight, and advantages and disadvantages to consumers. For this report, GAO analyzed data it had requested on these products from three major credit insurers and the nine largest credit card issuers. These nine issuers represented 85 percent of the credit card market. GAO also reviewed the products' terms and conditions, related marketing materials, and applicable federal and state regulations. In 2009, consumers paid about $2.4 billion on 24 million accounts for debt protection products, according to data from the nine largest credit card issuers. Debt protection products have largely displaced credit insurance in the credit card market, although the two products are similar from a consumer's perspective. Issuers market debt protection products when consumers call their customer service lines; by direct mail, e-mail, and telemarketing; and with new credit card applications, and they market the products broadly rather than to specific subpopulations. Debt protection products are banking products that are largely federally regulated, while credit insurance is an insurance product regulated by the states. Unlike state oversight of credit insurance, federal banking oversight of debt protection products does not directly address the relative financial benefits and costs of the products to consumers; instead, it focuses on compliance with disclosure requirements and prohibitions of unfair or deceptive acts or practices. The new Bureau of Consumer Financial Protection will soon assume supervisory and enforcement authority for financial products, including credit card debt protection products. Ensuring that these products represent a fair value to consumers would be consistent with the new agency's mission. Debt protection products and credit insurance can offer consumers several advantages. The products can protect a cardholder's credit rating in times of financial distress, can provide peace of mind, and are widely available and easy to purchase. Regulators have reported relatively few consumer complaints and have cited few formal violations related to debt protection products. However, fees for these products can be substantial, with the annual cost often exceeding 10 percent of the cardholder's average monthly balance. In the aggregate, cardholders received 21 cents in tangible financial benefits for every dollar paid in debt protection product fees among the nine largest issuers in 2009. These products can be difficult for consumers to understand, but federal agencies offer few educational resources to aid consumers in assessing them. GAO recommends that the Bureau of Consumer Financial Protection (1) factor into its oversight of credit card debt protection products, including its rulemaking and examination process, a consideration of the financial benefits and costs to consumers, and (2) incorporate into its financial education efforts ways to improve consumers' ability to understand and assess these products. The bureau agreed with GAO's recommendations.
EPA is organized into 13 major headquarters offices (including OECA) that are located in Washington, D.C. (App. II shows EPA's organizational structure.) These offices receive administrative, investigative, and laboratory support from numerous headquarters field offices located throughout the country. EPA also maintains 10 regional offices to implement federal environmental statutes and to provide oversight of related state activities. EPA's staff, or full-time equivalents (FTEs), grew by about 18 percent from fiscal year 1990 through fiscal year 1999 (see fig. 1). Over this period, the staff in EPA's headquarters—including headquarters field offices located outside of Washington, D.C.—and its regions grew at about the same pace, with less than half of the agency's total staff located in the regions. Data from the Office of Personnel Management (OPM) indicate that, as of the end of fiscal year 2000, about 35 percent of EPA's permanent staff were located in its Washington headquarters, 17 percent in headquarters field offices, and 48 percent in its regional offices. EPA uses contractors to perform much of its work. The agency estimates that it would need an additional 11,000 to 15,000 employees if it did not receive appropriations to fund contractors. Therefore, EPA's workforce must be adept both at delivering services directly and at effectively managing the cost and quality of services delivered by third parties on the government's behalf. OECA, with more than 3,500 FTEs nationwide, is responsible for developing policies to ensure that industries and other entities that are regulated under environmental statutes comply with the requirements of the law. Over 2,600 of these FTEs are allocated to implement enforcement policies across EPA's 10 regions (see fig. 2). The regional enforcement staff are specifically responsible for (1) inspecting and monitoring certain industrial and other facilities that are regulated under federal environmental statutes; (2) taking enforcement actions against those who have violated environmental statutes and regulations; (3) helping industries comply with environmental regulations; and (4) overseeing enforcement activities that EPA has delegated to states. (App. II discusses OECA's enforcement process and activities in greater detail.) As reflected in GAO's human capital checklist, our past work has demonstrated that effective performance-based management depends on senior managers' willingness and ability to strategically manage all of the agency's resources—including its human capital—to achieve its missions and goals. Specifically, it requires aligning strategic and program planning systems with explicit strategies for identifying (through workforce planning) the needed mission-critical competencies, and for recruiting, hiring, and training leaders and staff to fill identified competency gaps. These are critical components of effective human capital management and among the building blocks to achieving an organization's mission and strategic goals (see app. I). EPA's human capital strategy is a promising initial effort to develop a framework for managing the agency's workforce. Nevertheless, it does not include all of the key elements that we have identified as essential components of an effective human capital strategy. 
In this regard, EPA's strategy does not (1) fully integrate its human capital objectives with its strategic environmental goals; (2) identify the specific activities, milestones, and resources needed to implement the strategy; and (3) establish results-oriented performance measures to track the strategy's implementation and success. By including these elements in its strategy, EPA could better ensure that its workforce is deployed to effectively meet its strategic goals. (App. III summarizes the current status of EPA's human capital management practices and its efforts to implement its human capital strategy). EPA and federal agencies in general have not given adequate attention to human capital management in the past. However, EPA is among the agencies that have become more acutely aware of challenges facing the government in the human capital area and have taken steps to improve their approaches to building and managing their workforces. EPA's strategy recognizes the importance of better managing the agency's human capital. As we noted in our human capital checklist, an agency should develop strategies to enhance the value of its employees and focus its efforts on the agency's shared vision—its mission, vision for the future, core values, and goals and objectives. Overall, EPA's strategy is detailed and addresses most of the issues that we identified in our checklist. For example, the strategy clearly identifies the agency's vision for its people, its core values, and six major human capital goals for the next 2 years, such as attracting and retaining a highly skilled workforce and improving teamwork and collaboration among its employees. In addition, the strategy (1) discusses implementation plans, including actions for achieving each of its six human capital goals; (2) identifies the units within the agency that are responsible for developing and carrying out the implementation plans; and (3) for the most part, directly links the implementation plans to each human capital objective (see table 1). Although the strategy is a positive step towards addressing the agency's key human capital issues, it falls short in several areas. First, it does not fully integrate its human capital objectives with the agency's 10 strategic goals for protecting human health and the environment. These goals, which are identified in EPA's strategic plan prepared under the Government Performance and Results Act (GPRA), are shown in table 2. While EPA acknowledges the importance of effectively managing the agency's staff to meet its strategic goals, it does not describe how various human capital activities will help the agency to achieve these goals. EPA officials told us that, in updating the agency's strategic plan, they will integrate EPA's human capital objectives and strategies with specific strategic environmental goals. Under GPRA, the strategic plan must be updated by September 30, 2003, and may be updated earlier at EPA's discretion. EPA officials told us that they have not yet made a decision on whether to update the plan before the date required under GPRA. Second, EPA has not identified the specific activities or time required to implement its strategy. EPA's strategy identifies 18 implementation actions and related tasks for achieving its six human capital objectives. For example, to achieve the human capital objective of attracting and retaining a diverse and highly skilled workforce, EPA's strategy contains an action to develop and to implement a workforce planning system. 
One of the general tasks for developing this system is to establish standardized workforce planning requirements and a methodology to be used throughout the agency. During 2001, EPA plans to undertake tasks related to 11 of the 18 implementation actions. However, EPA's strategy does not identify specific milestones for completing any of the implementation actions or their related tasks. Finally, like many other federal agencies, EPA has found it difficult to establish results-oriented performance measures to track the implementation of the strategy and its success in meeting human capital objectives. EPA's fiscal year 2002 annual performance plan and budget justification identifies a number of performance measures for its workforce improvement activities under the "Effective Management" strategic goal. These measures include, among others, the number of (1) interns hired, (2) candidates in the Senior Executive Service (SES) Candidate Program, and (3) competencies addressed through training and development activities. While these measures are useful for tracking EPA's progress, they do not reflect the programmatic outcomes that the agency would like to achieve as a result of investing in human capital improvements for the strategic goal. As we, the Office of Personnel Management, and others have found, federal agencies in general have experienced difficulties in defining practical, meaningful measures that assess the effectiveness of human capital management. Yet, such measures are crucial to effectively managing for results and holding managers accountable. EPA officials told us that they plan to develop specific outcome measures, although they have yet to establish time frames for doing so. EPA has begun to recognize the importance of strategic human capital management to mission accomplishment and has taken steps to align the agency’s human capital with its mission. However, EPA, like many other agencies, still faces serious challenges that will require the sustained attention and commitment of its leaders. As EPA takes steps to implement its human capital strategy, it will face a number of challenges throughout the agency with regard to assessing workforce requirements, ensuring continuity of leadership, and hiring and training skilled staff. Specifically, EPA has not determined the number of employees it needs to accomplish its strategic goals, the competencies and technical skills they should possess, and the deployment of its current and future workforce among strategic goals, across program areas, and in various areas of the country. In addition, EPA has not prepared for the anticipated losses in leadership, institutional knowledge, and expertise that will likely occur as potentially large numbers of its senior executives retire in the near future. Nor has the agency fully addressed the need to maintain and develop mission-critical skills in areas such as environmental protection, environmental engineering, toxicology, and ecology. High-performing organizations identify their current and future human capital needs—including the appropriate number of employees, the key competencies for mission accomplishment, and the appropriate deployment of staff across the organization—and then create strategies for identifying and filling the gaps. To better plan for meeting the agency's future human capital needs, in June 1998, EPA initiated a study to identify the competencies needed to meet the agency's current and future missions. 
While a positive step, the study (completed in May 1999) identified only general competencies for all EPA employees, such as effective communication and collaboration. Moreover, the study did not determine the number of employees with the identified competencies needed either agencywide or in individual organizational or geographical units. Since completing its study, EPA has made little progress in determining the right size, skill needs, or deployment of its workforce to achieve its strategic goals. As a result, it lacks the detailed information needed to make informed workforce deployment decisions, including information on (1) the relationship between its budget requests for full-time equivalents (FTEs) and its ability to meet individual strategic goals and (2) any excesses or gaps in needed competencies within the agency's various headquarters and field components. As part of EPA's recent human capital strategy, the agency plans to develop and to implement a workforce planning system. The strategy calls for (1) linking workforce planning to the agency's strategic planning efforts, (2) securing essential competencies by recruiting and developing staff and providing incentives to retain highly competent employees, (3) continually monitoring and assessing the workforce, and (4) evaluating the effectiveness of actions taken. EPA officials told us that the agency received 20 percent less funding than requested for workforce planning in fiscal year 2001. With these funds, EPA has benchmarked other federal agencies' workforce planning activities and is investigating the possibility of partnering with another agency to develop a model for workforce planning. The importance of taking such actions is emphasized in the Office of Management and Budget's (OMB) May 8, 2001, bulletin on "Workforce Planning & Restructuring." As a first step toward restructuring federal workforces to streamline federal organizations, OMB asked agencies to identify, by June 29, 2001, supervisors and managers by occupational title, grade level, location, and the number of people that they oversee; evaluate the skills of the workforce; and provide demographics of the workforce by age, grade, retirement eligibility, and expected retirements over the next 5 years. However, because EPA has not yet performed a comprehensive workforce assessment, EPA human resource managers told us that they relied on past work, such as the agency's workforce assessment project completed in 1999, and information provided by its headquarters and regional offices to meet OMB's June 2001 deadline. According to these managers, while this analysis provides a valuable "snapshot" of EPA's workforce and serves as a starting point for a detailed workforce assessment, it is not as comprehensive as the workforce planning effort the agency plans to conduct under its human capital strategy. Because EPA submitted its analysis to OMB as this report was being processed, we were unable to obtain and review it in time to include an evaluation of it in this report. While such information provides a general overview of the structure of EPA's workforce, EPA cannot ensure the accuracy of this information or the reliability of the information systems it uses for its human capital management. The agency has no reliable means to determine how its employees spend their time—information that is critical to assessing an agency's workforce requirements.
In March 2000, we reported that EPA needs to more accurately determine how employees spend their time in order to ensure that staff resources are being used for designated purposes. We pointed out that EPA officials had yet to assess the accuracy of the data collected under the agency's cost accounting system, which it used to determine the number of FTEs that the agency devotes to each of its strategic goals and objectives. Furthermore, in November 2000, EPA's Inspector General noted that EPA needed to follow through on improving its cost accounting systems and that resources that EPA headquarters budgeted for environmental programs should be controlled and accounted for—including better tracking of how employees spend their time—to ensure that they are being used for designated purposes. Without accurate workforce data, EPA cannot determine (1) the appropriate number of people and competencies needed to effectively accomplish its mission or (2) the costs of carrying out its strategic goals and objectives. Agencies need to aggressively pursue comprehensive succession planning and executive development actions to address the potential loss of leadership continuity, institutional knowledge, and expertise in the SES ranks. These actions include (1) developing a formal succession plan based on a review of the agency's current and emerging leadership needs in light of its strategic and program planning, and (2) identifying sources of executive talent both within and outside the agency. However, EPA does not currently have in place a succession plan to ensure continuity in the agency's leadership and to prepare for the management losses that will likely occur as potentially large numbers of its senior executives retire in the near future. Fiscal year 2000 data on EPA's workforce indicate that 57 percent of the agency's 255 senior executives are eligible to retire before fiscal year 2006. As shown in figure 3, potential retirements may create particularly severe shortages in some EPA units and regions, such as Region 8 (Denver), in which up to 83 percent of executives are eligible to retire over the next 5 years. EPA human resource managers believe that the agency is adequately prepared for a potentially large number of retirements in the near future. These managers told us that, in general, EPA has 7 to 10 qualified and experienced candidates within the agency for each SES position advertised, as well as a pool of qualified external candidates. Historically, according to these managers, SES recruitment efforts draw from 30 to 50 applicants for each vacancy, many of whom are internal candidates. Nevertheless, EPA currently has no formal succession plan based on a comprehensive workforce assessment, which could provide it greater assurance of leadership continuity. EPA has initiated a number of activities aimed at ensuring the continuity of its leadership, such as establishing an SES mentoring program and beginning a review of executive succession needs. In addition, under its human capital strategy, EPA plans to reinstitute an SES candidate program and develop a leadership succession-planning program. In these endeavors, as in many of the other positive efforts under EPA's human capital strategy, the agency has made limited progress, and it is too early to determine whether its initiatives will be successful.
While EPA acknowledges that it faces significant challenges in maintaining a workforce with the highly specialized skills and knowledge required to accomplish the agency's work, it has yet to fully address the need to hire and develop staff with mission-critical skills in key technical areas. In order to function as a high-performing organization, an agency needs to hire and retain a dynamic, results-oriented workforce with the talents, multidisciplinary knowledge, and up-to-date skills to ensure that it is equipped to achieve its mission. Similarly, it is crucial that agencies invest in training and development to build mission-critical skills. However, EPA currently has neither a recruiting and hiring strategy that is targeted to fill identified gaps in skills, nor a training and employee development strategy that explicitly links the agency's curricula with the specific technical skills needed to achieve the agency's mission. Moreover, as discussed above, EPA has not yet completed the crucial first step in developing these strategies: identifying the agencywide critical skills needed for mission accomplishment, the number of staff needed with these skills, and their appropriate geographical and organizational locations. According to EPA officials, once this effort is completed, it will serve as the basis for targeted recruitment and training strategies to fill the identified gaps. However, EPA's human resource managers do not know when the workforce assessment will be completed. Although the agency has not completed its assessment of skills, it has identified a number of "critical occupations" that are needed to achieve its mission. These include, among others, environmental protection specialists, general biological scientists, ecologists, toxicologists, environmental engineers, general physical scientists, and health physicists. The scientists in these seven job categories accounted for 45 percent of EPA's total staff of almost 18,000 employees at the end of fiscal year 2000. About 20 percent of these scientists will be eligible for retirement before fiscal year 2006. The National Research Council recently reported on EPA's difficulty in managing its scientific workforce. The Council pointed out that EPA's scientific performance has been criticized many times in reports released by the Council, EPA's Science Advisory Board, GAO, and other organizations and "in countless criticisms and lawsuits from stakeholders with interests in particular EPA regulatory decisions." While noting EPA's significant improvements during the past decade in some of its scientific practices, the Council expressed concerns about EPA's science capabilities, including its ability to attract first-rate talent. For example, it concluded that hiring freezes within the agency and intense job market competition from the private sector and academic institutions have made it "extremely difficult" to recruit or even retain the talent needed to sustain and enhance its research workforce. The shortage in mission-critical staff could worsen as scientists reach retirement age and consider leaving the agency. Over the next 5 years, for example, EPA faces the potential loss of much of the technical expertise that it needs to achieve its strategic goals, as potentially large numbers of the agency's scientists in some key technical areas become eligible to retire. Figure 4 shows the percentage of EPA staff in each critical occupation who will be eligible to retire by fiscal year 2006.
Furthermore, some EPA organizational units may be more severely affected than others by the impending retirements of staff with critical scientific and technical skills. For example, figure 5 shows the effects of potential retirements of biological scientists on EPA's organizational units. EPA can fill the gaps in scientific and technical skills that may arise from these pending retirements through (1) targeted recruiting efforts to hire outside expertise and (2) training to ensure that current staff develop the needed technical skills. EPA has continued its recruitment efforts in recent years, placing emphasis on achieving diversity goals. Furthermore, while EPA acknowledges the need to invest in its employees through training, it has yet to develop an employee development strategy to meet specific scientific and technical skill gaps. EPA's Office of Inspector General emphasized the need for such a strategy in November 2000, when it identified EPA's training and employee development as a fiscal year 2000 management control weakness. To address these concerns, EPA proposes to directly link employee development to mission needs by, among other actions, developing and testing a rotational assignment program and implementing a workforce development strategy. However, EPA received no funding in fiscal year 2001 for the rotational assignment program, and its workforce development strategy aims to enhance general competencies, such as communication and collaboration, rather than specific mission-critical technical skills. Managing EPA's enforcement workforce is particularly challenging because enforcement activities pervade the agency's programs and regions. Enforcement responsibilities are centralized within OECA, which is responsible for monitoring the compliance of facilities regulated by federal environmental laws and ensuring that violations are reported and that actions are taken against violators when necessary. OECA provides overall direction on enforcement policies to the regions, which carry out enforcement actions and oversee the enforcement activities of states that EPA has authorized to enforce federal environmental regulations. While OECA recognizes that the regions need to maintain an appropriate level of consistency in enforcing requirements and overseeing state enforcement programs, it acknowledges that some regional variation in environmental enforcement activities is to be expected for a number of reasons. For example, differences exist in (1) the opinions of enforcement staff about the best way to achieve compliance with environmental regulations and (2) state laws and enforcement authorities and the manner in which individual regions respond to such differences. In addition, OECA's decisions on how to deploy its enforcement staff to the regions can affect its ability to ensure the consistent enforcement of federal environmental requirements throughout the country. In this regard, we found that OECA's deployment decisions are hampered by two interrelated problems:
- Workforce deployment decisions do not fully consider workload changes that are known to have occurred over the past decade, such as changes in the number of regulated facilities in individual regions that are subject to environmental inspections.
- Information is not collected and analyzed for key regional workload factors, such as the extent to which specific enforcement-related functions are performed and the time required to perform them.
Without such information, OECA cannot determine the appropriate size, skills mix, and location of the regional enforcement staff needed to ensure that regulated industries receive consistent, fair, and equitable treatment throughout the nation. OECA also cannot ensure effective oversight of state programs, which share with EPA responsibility for enforcing federal environmental requirements. Furthermore, without this information, OECA has no basis for systematically determining where staffing increases or reductions—such as the 8-percent reduction proposed for fiscal year 2002—should be made. OECA deploys its enforcement workforce largely on the basis of outdated workload models that were developed over a decade ago and have not been updated since 1989. In general, the workload models were based on the number of regulated facilities in each region and the type and amount of enforcement activities required for a particular program. While the workload models may have been an appropriate tool for allocating enforcement personnel during the 1980s, many critical changes affecting the enforcement workload have occurred over the past decade. Since the workload models were developed, (1) the number of environmental laws, regulations, and programs has increased; (2) the focus and requirements of several environmental programs have shifted; (3) states have assumed a greater role in environmental enforcement; and (4) technological advances have affected the skills and expertise needed to conduct enforcement actions. OECA officials told us that they are currently examining how OECA's headquarters resources can best be deployed to meet the agency's strategic goals and are working to develop a more comprehensive plan for deploying enforcement resources in the regions. EPA regions currently vary in the extent to which they enforce environmental requirements and oversee state enforcement activities. For example, as figure 6 indicates, the percentage of facilities subject to EPA inspection under the Clean Air Act that were inspected in fiscal year 2000 varied from a high of 80 percent in Region 3 to a low of 27 percent in Regions 1 and 2. Furthermore, the number of regional enforcement staff available to oversee state programs varies significantly among the 10 regions, raising questions about some regions' ability to provide consistent levels of oversight. As figure 7 indicates, differences exist in the number of state inspections performed per OECA staff member assigned to monitoring activities, which include overseeing state activities. While federal and state enforcement officials agree that basic enforcement activities should be largely consistent, some variation among regions is to be expected and, under certain circumstances, encouraged. According to EPA, for example, differences are appropriate in how each region targets its resources to address the most significant compliance issues in the region. However, OECA has not determined whether and to what extent variations in enforcement activities across regions represent (1) an exercise of flexibility in adapting national program goals to local circumstances or (2) a deployment problem that needs to be analyzed and remedied. OECA cannot fully determine the causes and appropriateness of the variations in regional enforcement activities because it does not have complete and reliable workforce planning information on these activities.
Specifically, OECA does not have accurate information on (1) the universe of entities subject to regulation under federal environmental laws and (2) the time required to perform enforcement-related activities, such as assisting facilities to comply with environmental regulations. Determining the size of the universes regulated under various environmental statutes is a difficult process that relies heavily on the accuracy of EPA's data systems. However, the reliability of these systems has been questioned by sources both inside and outside the agency. The universes regulated under various EPA statutes are based on state-provided information that is subject to change as companies are created, go out of business, reach thresholds for chemical emissions that bring them under EPA's regulatory authority, or reduce their emissions of certain chemicals to levels that are not subject to regulation. Furthermore, many state enforcement programs maintain their own databases to manage their programs and do not use EPA's national databases. Consequently, keeping the information in the EPA databases current has been a low priority for the states in an environment of limited resources. In March 2001, recognizing the seriousness of reporting inconsistent information on the regulated universes and their sizes, OECA initiated efforts to improve the data. OECA also recognized that determining the universe of regulated entities under individual statutes will be difficult because of the complexities of environmental regulations and the number of entities involved—approximately 41 million entities ranging from community drinking water systems to pesticide users to major industrial facilities. Once it completes its initial efforts, OECA plans to periodically review its data to keep the universes as current as possible. In addition, OECA headquarters and regional managers agree that to develop an accurate workforce planning system, key fact-based information is essential to enable managers to account for the time of their enforcement staff. The data most needed include the amount of time spent performing inspections, providing oversight of state inspections, assisting states and industrial facilities to comply with environmental requirements, and taking various legal actions when necessary to require compliance. Such managerial accounting information is generally not available to OECA's managers. The lack of such workforce planning information limits OECA's ability to determine whether regions and states are consistently meeting the requirements of EPA's enforcement program and whether significant variations from these requirements exist and should be corrected. Limitations in OECA's data on its regional activities also hamper its ability to assess the number of staff it needs; the knowledge, skills, and abilities they should possess; and where they should be deployed. With such information, OECA could ensure that the right number and types of people are being hired during times of growth and that they are systematically allocated among programs and locations according to need. The information is also needed when operations are being downsized to ensure that staff reductions can be absorbed with minimal impacts on the effectiveness of operations. The administration's fiscal year 2002 budget request for enforcement activities illustrates the importance of having accurate enforcement information that can be used to inform workforce decisions.
The administration proposes a new grant program under which it intends to redirect $25 million of funding for enforcement activities. Rather than using these funds to perform its enforcement activities, EPA would provide the funds to states and tribes for their enforcement efforts. An April 2001 internal OECA memorandum indicated that EPA did not expect that all states would receive grants. According to this memorandum, the agency believed that grants should be awarded based on the quality of state proposals, and estimated that approximately 15 to 25 states would receive none of this additional funding. Subsequently, however, OECA received comments on the proposal from states and tribes, and in July 2001 OECA officials told us that they were reconsidering their initial approach to awarding the grants. The agency is currently developing guidance that will address how the grants will be awarded. As part of the administration's proposal, EPA would reduce its enforcement staff by 270 people, or about 8 percent. EPA officials told us that staffing for OECA's headquarters and the regions will be reduced by about 51 FTEs and 219 FTEs, respectively. The staff reductions within the regions will likely be proportional to the number of staff currently assigned to them (that is, a region employing 10 percent of EPA's total regional enforcement staff would absorb 10 percent of the regional reductions, or about 22 FTEs). However, as we have noted, EPA allocates its regional enforcement staff on the basis of outdated information. EPA contends that it can absorb the staff reductions without jeopardizing its ability to effectively perform enforcement activities and to oversee the state programs to ensure that they consistently and fairly enforce environmental laws and regulations across the nation. However, without accurate workforce planning information on factors such as the amount of time required to perform inspections and oversight functions, EPA cannot demonstrate that the staff reductions will be absorbed without impairing its effectiveness. Furthermore, in some states, particularly those that may not receive additional grant funds, the level of enforcement activity may actually decline as a result of the grant program. EPA, like most federal agencies, has not consistently made strategic human capital management an integral part of its strategic and programmatic approaches to accomplishing its mission. Nonetheless, to its credit, EPA recently has recognized the importance of strategic human capital management and is now in a good position to move forward during the next few years toward implementing the human capital practices that are associated with high-performing organizations. Although EPA has recently made substantial progress in developing a strategy to more effectively manage its workforce, significant issues remain and must be addressed to increase the likelihood that the strategy will produce tangible programmatic results. One such issue involves integrating human capital objectives with EPA's strategic environmental goals to ensure that implementing these objectives will bear directly on the fulfillment of the strategic goals. Other issues that need to be addressed include determining when and at what cost the human capital strategy will be implemented and how its success will be measured.
EPA's human capital strategy recognizes the need to deal with the major human capital management areas, such as workforce planning and employee development, that pose substantial challenges to its success. Previous initiatives to confront some of these challenges, such as obtaining accurate workforce planning data and attracting top-level scientists for the agency's research programs, have met with only limited success. Effectively implementing a strategy to overcome such challenges in a large and complex organization like EPA is not something that can be done quickly or easily. EPA will need to formulate appropriate remedies, and senior managers will need to provide sustained attention and commit sufficient priority and resources to carry out the corrective actions. EPA's enforcement activities, carried out by OECA, have changed greatly during the past decade as new environmental laws were enacted, the focus of existing environmental programs shifted, and the states assumed a greater role in enforcing federal environmental regulations. The impact of these changes on the enforcement workload cannot be determined because OECA does not have complete and reliable data on the specific enforcement functions performed by regional staff and the time required to perform them. Without such data, it is not possible for OECA to strategically deploy its staff to ensure that enforcement activities are performed more consistently throughout the nation. (Similarly, other EPA entities might benefit from such data for their respective activities.) The need for complete and reliable data on the agency's regional enforcement workload, functions, and capabilities is highlighted by the administration's proposal to use $25 million of EPA's fiscal year 2002 budget for a new enforcement grant program and to eliminate 270 of EPA's enforcement staff positions. EPA currently cannot tailor such staff reductions in a manner that minimizes potential adverse impacts on its enforcement program because it has no basic workforce-planning information on the number of enforcement staff it needs; the knowledge, skills, and abilities they should possess; and where they should be deployed. To ensure that EPA's human capital policies and practices are most effectively directed toward achieving the agency's mission, we recommend that the Administrator, EPA, build upon the agency's substantial progress in more effectively managing its workforce by revising the agency's human capital strategy to (1) link the strategy's action steps with the fulfillment of EPA's strategic goals, (2) identify the milestones and needed resources to implement the strategy, and (3) establish results-oriented performance measures to determine progress toward meeting the strategy's objectives.
Furthermore, as EPA implements its human capital strategy over the next few years, we recommend that the Administrator better align the strategy with those of high-performing organizations by
- working toward developing a system for workforce allocation and deployment that is explicitly linked to the agency's strategic and program planning efforts and that is based on systematic efforts of each major program office to accurately identify the size of its workforce, the deployment of staff geographically and organizationally, and the skills needed to support its strategic goals;
- designing succession plans to maintain a sustained commitment and continuity of leadership within the agency based on (1) a review of current and emerging leadership needs and (2) identified sources of executive talent within and outside the agency;
- targeting recruitment and hiring practices to fill the agency's short- and long-term human capital needs and, specifically, to fill gaps identified through EPA's workforce planning system; and
- implementing training practices that include (1) education, training, and other developmental opportunities to help the agency's employees build the competencies that are needed to achieve EPA's shared vision and (2) an explicit link between the training curricula and the competencies needed for mission accomplishment.
In addition, to ensure that OECA deploys its resources most effectively and efficiently to achieve the agency's strategic goals for enforcement, we recommend that the Administrator, EPA, establish, within the context of the agency's human capital strategy, a systematic method for deploying resources to address the agency's enforcement workload in the regions. An effective methodology should take into account the workforce-planning information needed to analyze the enforcement workload and the workforce capabilities of its 10 regions. Specifically, this would include information on (1) the level of resources (FTEs) that are currently being allocated to specific enforcement activities; (2) the factors that determine the enforcement workload in each region, including, among others, the size of the regulated universe and the extent to which states conduct enforcement/compliance activities that would otherwise be EPA's responsibility; and (3) the specific skills that are needed to address each region's enforcement workload and the number of employees in each region who currently possess such skills. To develop such a methodology, OECA needs to establish mechanisms for obtaining more complete and reliable data on these factors. Furthermore, this methodology would be most effective if it were linked to agencywide recruiting, hiring, and training policies and practices in order to fill identified gaps in the skills needed to perform effective enforcement actions. Finally, in redirecting enforcement resources to states and tribes, we recommend that the Administrator, EPA, before reducing the enforcement staff by 270 positions, collect and review more complete and reliable workforce-planning information than is currently available on the enforcement workload and the workforce capabilities of EPA's 10 regional offices. We provided EPA with a draft of this report for review and comment. EPA officials, including the Acting Deputy Director, Office of Human Resources and Organizational Services, and the Director, Administration and Resources Management Support Staff, Office of Enforcement and Compliance Assurance, provided comments on the draft.
These officials generally agreed with our findings and recommendations and offered a number of detailed clarifications, which we have incorporated where appropriate. As arranged with your office, we plan no further distribution of this report for 10 days from the date of this letter unless you publicly announce its contents earlier. At that time, we will send copies to the Chairman, Subcommittee on VA, HUD, and Independent Agencies, Senate Committee on Appropriations; the Chairmen and Ranking Minority Members of the Senate Committee on Environment and Public Works and the House Committee on Energy and Commerce; other interested Members of Congress; the Administrator, EPA; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. The letter will also be available on GAO's home page at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841 or Edward Kratzer at (202) 512-6553. Key contributors to this report were Vincent P. Price, Bernice Dawson, Alyssa Hundrup, Ken McDowell, Ellen Rubin, and Gregory Wilmoth. Our objectives for this review were to determine (1) the extent to which EPA's strategy to improve its human capital management includes the key elements associated with successful human capital strategies, (2) the major human capital challenges facing EPA in successfully implementing its strategy, and (3) the extent to which EPA's deployment of its enforcement workforce ensures that federal environmental requirements are consistently enforced across regions. To address the first two objectives, we reviewed the EPA publication "Investing in Our People: EPA's Strategy for Human Capital, 2001 through 2003," and analyzed information on the nature and status of EPA's key human capital management initiatives. In this regard, in July and August 2000, we asked agency officials in EPA's Office of Human Resources and Organizational Services, Office of Enforcement and Compliance Assurance, and each of its 10 regions to respond to questions based on GAO's human capital self-assessment "checklist" and used the checklist as a structure for organizing and evaluating their responses. The checklist is an assessment tool that identifies the key human capital elements and underlying values that are common to high-performing organizations (see table 3). We also obtained information from EPA's Office of Human Resources and Organizational Services on the status of the agency's efforts to implement the strategy. Finally, to determine the extent to which EPA's deployment of its enforcement workforce ensures that federal environmental requirements are consistently enforced across regions, we obtained and analyzed data from EPA's 10 regions on their enforcement workforce and workload. In this regard, from August through November 2000, we worked with enforcement officials and staff in OECA's headquarters office and EPA's Region 4 to develop a comprehensive instrument to obtain information by major program on, among other factors, (1) the number and type of each region's regulated facilities, (2) the nature and number of individual enforcement activities (such as inspections and oversight activities) conducted in each region by EPA and the states, and (3) the number of staff conducting these activities. We received and began analyzing information from most regions in January and February 2001.
Our analysis showed that, overall, the data were incomplete and inconsistent across regions and programs because of differences in definitions, reporting requirements, and states' willingness to provide data voluntarily. For these reasons, we were generally unable to rely on these data for drawing conclusions relating to EPA's enforcement workforce and workload. As an alternative, for our analysis on workload variations and imbalances, we relied primarily on information from OECA headquarters on enforcement FTEs in each region and standard Program Review Status reports produced annually by OECA. Our comparative analysis of these data with the information from the regions further confirmed the inconsistency of EPA's enforcement data. Our work for this review was conducted between June 2000 and July 2001 in accordance with generally accepted government auditing standards. EPA is organized into 13 major headquarters offices (including OECA), located in Washington, D.C. (see fig. 8). These offices receive administrative, investigative, and laboratory support from numerous headquarters field entities located throughout the country. EPA also maintains 10 regional offices to implement federal environmental statutes and to provide oversight of related state activities. EPA has been responsible for enforcing the nation's environmental laws since it was created in 1970. This responsibility has traditionally involved monitoring the compliance of those in the regulated community (such as factories or small businesses that release pollutants into the environment or use hazardous chemicals), ensuring that violations are properly identified and reported, and ensuring that timely and appropriate enforcement actions are taken against violators when necessary. Under many major federal environmental statutes, EPA gives states that meet specified requirements the authority to implement key programs and to enforce their requirements. In such cases, EPA establishes by regulation the minimum components of state enforcement authority, such as the authority to seek injunctive relief and civil and criminal penalties. EPA also outlines by policy and guidance its views on the elements of an acceptable state enforcement program, such as necessary legislative authorities and the type and timing of the action for various violations, and tracks how well states comply. EPA may also take appropriate enforcement action against violators. EPA administers its environmental enforcement responsibilities through its headquarters Office of Enforcement and Compliance Assurance (OECA). While OECA provides overall direction on enforcement policies and sometimes takes direct enforcement action, it carries out many of its enforcement responsibilities through its 10 regional offices. These offices are responsible for taking direct enforcement action and for overseeing the enforcement programs of state agencies in those instances in which EPA has approved a state program. Although EPA acknowledges that some variation in environmental enforcement is necessary to take into account local conditions and local concerns, core enforcement requirements must nonetheless be consistently implemented. EPA also maintains that to ensure fairness and equitable treatment, like violations in different regions of the country should be met with comparable enforcement responses. Many major federal environmental statutes allow EPA to authorize states to administer environmental programs.
One of the key conditions for authorizing state programs is that the state acquire and maintain adequate authority to enforce the federal law. For example, to obtain EPA approval to administer the Clean Air Act's title V permitting program for major air pollution sources, states must have, among other things, adequate authority to ensure compliance with title V permitting requirements and to enforce permits, including authority to recover civil penalties and provide appropriate criminal penalties. Similarly, the Clean Water Act allows EPA to approve state water pollution programs under the National Pollutant Discharge Elimination System if the state programs contain, among other things, adequate authority to issue permits that ensure compliance with applicable requirements of the act and to abate violations through civil and criminal penalties and other means of enforcement. EPA develops enforcement policies for these programs. The enforcement policies outline EPA's traditional regulatory approach to enforcement, including what constitutes a violation, especially the significant violations that are likely to require an enforcement action. When a violation is discovered, the policies generally require an escalating series of enforcement actions, depending on the seriousness of the violation and the facility's level of cooperation in correcting it. Actions might start with a verbal warning or a warning letter and escalate to administrative orders requiring a change in the facility's practices. These enforcement policies also define timely and appropriate enforcement actions for various types of violations. In the most serious cases, EPA or the states can assess penalties or refer the case to the U.S. Department of Justice or a state's Office of Attorney General for prosecution. The monetary penalties that EPA assesses comprise two amounts: one based on the seriousness of the violation and the other designed to remove any financial advantage the violator obtained over its competitors through noncompliance. EPA may also pursue criminal enforcement action if the situation warrants. Whether EPA or state personnel take the lead on enforcement actions depends on whether the state has been authorized to administer the program. If EPA retains the program, the cognizant EPA regional office generally takes the lead in monitoring compliance and taking enforcement actions, often with support and guidance from EPA headquarters program offices, OECA, and the Office of General Counsel. EPA's policies provide guidance to the states that have been authorized to administer the enforcement program. Moreover, EPA's regions and the states work together each year to establish enforcement expectations and lay out their respective roles. EPA also provides grant funds to states to assist in the implementation of the federal programs and, under certain circumstances, conditions receipt of grant funds on compliance with EPA guidance. EPA oversees the states' enforcement in a variety of ways, including reviewing inspection reports and enforcement actions and accompanying state inspectors. EPA also requires states to report information on various aspects of their enforcement efforts, such as the number and type of inspections the state has conducted, the results of those inspections, and any enforcement actions resulting from discovered violations. EPA's enforcement policy under the Clean Air Act and Clean Water Act concentrates primarily on large facilities and large sources of pollution.
States have more autonomy in determining how they will enforce the law at smaller sources and smaller facilities. EPA officials use a number of methods to oversee regional and state enforcement programs. An important first step is the biennial Memorandum of Agreement between EPA headquarters and the regions, which contains the core program requirements and national priorities that both headquarters and the regions agree must be addressed. In addition to the national priorities, the agreements with each individual region contain region-specific priorities that OECA reviews and approves. The regions share this agreement with their states so that all key parties understand the regions' goals and commitments with headquarters. Senior OECA managers visit the regions during the year to review regional progress in meeting the agreed-upon enforcement goals and commitments in the memorandum and to make mid-year corrections. OECA also sponsors national meetings, schedules routine conference calls between headquarters and regional media program staff, and conducts periodic evaluations of regional enforcement programs. EPA regional enforcement program staff frequently communicate with state enforcement staff through routinely scheduled telephone conferences. In addition, a number of regions have implemented protocols for overseeing state performance. In July 2000, we submitted a set of questions based on our human capital self-assessment "checklist" to officials in EPA's Office of Human Resources and Organizational Services. We asked these officials to provide us with information on the extent to which EPA's human capital policies and practices exhibited the principles that we had identified as being associated with high-performing organizations. The following table provides a summary of their responses, organized by the key elements as defined in our checklist. The table also provides information, as of June 2001, on the status of EPA's efforts to implement its human capital strategy as they relate to each checklist element.
During the last decade, as most federal agencies downsized, the Environmental Protection Agency's (EPA) workforce grew by about 18 percent. Much of this growth occurred in EPA's 10 regional offices, which carry out most of the agency's efforts to encourage industry compliance with environmental regulations. Currently, EPA's workforce of 17,000 individuals includes scientists, engineers, lawyers, environmental protection specialists, and mission-support staff. Some Members of Congress have questioned whether EPA is giving enough attention to managing this large and diverse workforce. The workforce management practices of EPA's Office of Enforcement and Compliance Assurance (OECA)--which takes direct action against violators of environmental statutes and oversees the environmental enforcement activities of states--have come under particular scrutiny because its enforcement activities span all of EPA's programs and regions. Although EPA has begun several initiatives during the last decade to better organize and manage its workforce, these initiatives have not received the resources and senior-level management attention needed to realize them. This report reviews (1) the extent to which EPA's strategy includes the key elements associated with successful human capital strategies, (2) the major human capital challenges EPA faces in the successful implementation of its strategy, and (3) how OECA deploys the enforcement workforce among EPA's 10 regions to ensure that federal environmental requirements are consistently enforced across regions either by OECA or by states with enforcement programs that OECA oversees. GAO found that EPA's November 2000 human capital strategy is a promising first step toward improving the agency's management of its workforce, but it lacks some of the key elements that are commonly found in the human capital strategies of high-performing organizations. EPA's major challenges in human capital management involve assessing the work requirements for its employees, ensuring continuity of leadership within the agency, and hiring and developing skilled staff. OECA does not systematically deploy its workforce to ensure the consistent enforcement of federal regulations throughout all EPA regions and bases deployment decisions on outdated and incomplete information on key regional workload factors.
Both the federal government and the states share responsibility for administering the Medicaid program. At the federal level, the Centers for Medicare & Medicaid Services (CMS) is responsible for overseeing states' design and operation of their Medicaid programs, and ensuring that federal funds are appropriately spent. The federal government sets broad federal requirements for Medicaid—such as requiring that state Medicaid programs cover certain populations and benefits—while states administer their respective Medicaid programs' day-to-day operations under their state plans. State responsibilities include, among other things, determining eligibility, enrolling beneficiaries, and adjudicating claims. Medicaid is funded jointly by the federal government and states. The federal government's share of most Medicaid expenditures is based on a statutory formula—the federal medical assistance percentage (FMAP). Under the FMAP, the federal government pays a share of Medicaid expenditures based on each state's per capita income relative to the national average. The formula is designed such that the federal government pays a larger portion of Medicaid costs in states with lower per capita incomes (PCI) relative to the national average. Regular FMAP rates have a statutory minimum of 50 percent and a statutory maximum of 83 percent; we refer to FMAPs that are calculated using this formula as regular FMAP rates. For fiscal year 2014, regular FMAP rates ranged from 50.00 percent to 73.05 percent. Under the Patient Protection and Affordable Care Act (PPACA), state Medicaid expenditures for certain Medicaid enrollees are subject to higher federal matching percentages. The 2014 Medicaid enrollees consist of:
1. Traditionally eligible enrollees—individuals who are eligible under historic eligibility standards; states receive their regular FMAP for expenditures related to this population.
2. PPACA-expansion enrollees—individuals who would not have been eligible under the rules in effect on December 1, 2009, and whose coverage began after their state opted to expand Medicaid as authorized by PPACA.
3. State-expansion enrollees—individuals who were not traditionally eligible, but were covered by Medicaid under a state-funded program or pre-existing state demonstration as of December 1, 2009, in states that subsequently opted to expand Medicaid as authorized under PPACA.
In states that choose to expand their Medicaid programs as authorized by PPACA, the federal government will provide an FMAP of 100 percent beginning in 2014 to cover expenditures for the PPACA-expansion enrollees. The increased FMAP will gradually diminish to 90 percent by 2020. States will also receive an FMAP above the state's regular match for their Medicaid expenditures for the state-expansion enrollees, ranging from 75 to 92 percent in 2014. This FMAP will gradually increase and will eventually equal the FMAP for the PPACA-expansion enrollees beginning in 2019. (See table 1.) Consequently, a state that chooses to expand its Medicaid program could potentially receive three different FMAPs for its different types of Medicaid enrollees. States are primarily responsible for verifying eligibility and enrolling Medicaid beneficiaries. These responsibilities include verifying and validating individuals' eligibility at the time of application and periodically thereafter, and promptly disenrolling individuals who are not eligible.
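To make the regular FMAP formula described above concrete, the following minimal sketch computes a regular FMAP rate. It assumes the statutory formula in section 1905(b) of the Social Security Act, under which the regular FMAP equals 100 percent minus 45 percent multiplied by the square of the ratio of state to national per capita income, subject to the 50 percent floor and 83 percent ceiling; the per capita income figures shown are purely illustrative, not actual state data.

```python
def regular_fmap(state_pci: float, national_pci: float) -> float:
    """Regular FMAP: 1 - 0.45 * (state PCI / national PCI)^2,
    bounded by the statutory 50 percent floor and 83 percent ceiling."""
    share = 1.0 - 0.45 * (state_pci / national_pci) ** 2
    return min(max(share, 0.50), 0.83)

# Illustrative per capita incomes (not actual state data):
print(regular_fmap(30_000, 40_000))  # lower-income state -> 0.746875 (about 75 percent)
print(regular_fmap(45_000, 40_000))  # higher-income state -> 0.50 (the statutory floor)
```

Because the state share is capped in this way, no state's regular FMAP falls below 50 percent regardless of how high its per capita income is, and only the lowest-income states approach the higher matching rates.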
Although states have the flexibility to use different sources of information and processes to verify eligibility factors, CMS guidelines call upon states to maximize automation and real-time adjudication of Medicaid applications through the use of electronic verification policies and multiple application channels, including health insurance exchanges—whether federally facilitated exchanges (FFE) or state-based exchanges (SBE)—to implement PPACA's coordinated eligibility determination process. Under this process, individuals can apply for health coverage through their state's Medicaid agency or its health insurance exchange, whether an FFE or an SBE, and regardless of which route they choose, their eligibility for coverage under the appropriate program will be determined. Consequently, FFEs and SBEs are designed to make assessments of Medicaid eligibility. As of November 6, 2014, 17 states had SBEs and 34 states had FFEs. Of these 34 FFE states, 10 had delegated authority to the FFEs to make Medicaid eligibility determinations for individuals applying through the exchanges. In the remaining states, an FFE's assessment that an applicant may be eligible for Medicaid is subject to a final eligibility determination by the state Medicaid agency, which is also the process followed in the SBE states. Moreover, PPACA required states to use third-party sources of data to verify eligibility to the extent practicable. Consequently, states have had to make changes to their eligibility systems, including implementing electronic systems for eligibility determination and coordinating systems to share information. In addition, states have had to make changes to reflect new sources of documentation and income used for verification. Federal regulations require states to develop and submit their Medicaid eligibility verification plans to CMS for approval. As part of its oversight role, CMS oversees state enrollment of beneficiaries and reporting of expenditures. In addition to reviewing state verification plans for assessing Medicaid eligibility, CMS requires states to conduct certain reviews to assess the accuracy of states' Medicaid eligibility determination processes through its Medicaid Eligibility Quality Control (MEQC) and Payment Error Rate Measurement (PERM) programs. MEQC is overseen by CMS and requires states to report to CMS every 6 months on the accuracy of their Medicaid eligibility determination processes. States can choose to participate in traditional MEQC or MEQC pilots, with the majority of states choosing to participate in the MEQC pilots. While traditional MEQC requires states to report error rates for 6-month periods, MEQC pilots can last a year, and states conducting annual pilots are required to report by August 1 of each year. Pilots that are less than a year have 60 days from the end of the pilot to report findings. CMS implemented the PERM to measure improper payments in Medicaid—including payments made for treatments or services that were not covered by program rules, that were not medically necessary, or that were billed for but never provided—in response to the requirements of the Improper Payments Information Act of 2002, as amended. Under the PERM, CMS measures and reports to Congress improper payment rates in three component areas: (1) fee-for-service claims, (2) managed care, and (3) eligibility.
To assess improper payments attributable to erroneous eligibility determinations, the PERM includes state-conducted eligibility reviews that are reported to CMS. Under the MEQC and PERM, state Medicaid staff were required to review all the documentation for a sample of both positive and negative eligibility cases—that is, both individuals who were determined to be eligible and those determined to be ineligible and thus denied enrollment—and identify any improper payments for services. In light of the changes to Medicaid eligibility standards and state eligibility systems necessitated by PPACA, CMS announced that it had suspended the MEQC program and the eligibility portion of the PERM until fiscal year 2018. During this period, according to CMS, PERM managed care and fee-for-service payment reviews will continue uninterrupted, and CMS will continue to report Medicaid improper payment rates based on that data. In addition, CMS will report an estimated improper payment rate for the eligibility component based on historical data. As a temporary replacement for the MEQC and PERM eligibility reviews, CMS implemented a pilot eligibility review to assess states' determination of eligibility and eligibility type for fiscal year 2014 through fiscal year 2017. States develop their own approaches to testing their eligibility determinations under the pilot eligibility review, but must submit descriptions of their proposed methodology to CMS for review and approval. According to CMS's instructions for the pilot eligibility reviews, at a minimum, states must draw a sample of at least 200 eligibility determinations, including both positive and negative determinations. For these sample cases, states must review all caseworker action taken from initial application to the final eligibility determination. Among other factors, for each case reviewed, states must assess the correctness of decisions relating to program eligibility and eligibility group (i.e., whether an enrollee was correctly identified as a traditionally eligible enrollee, a PPACA-expansion enrollee, or a state-expansion enrollee). For each error identified, states are required to develop a corrective action plan to avoid similar errors in the future. States were required to have one round of the pilot eligibility reviews completed by the end of June 2014, a second round completed by the end of December 2014, and subsequent reviews completed in 2015, 2016, and 2017. As part of its oversight responsibilities, CMS also conducts CMS-64 expenditure reviews. As we have previously reported, the agency collects and reviews aggregate quarterly expenditure information from the states through its CMS-64 form, which is used to reimburse states for their Medicaid expenditures. The CMS-64 data set contains program-benefit costs and administrative expenses at a state aggregate level—such as a state's total expenditures for such categories as inpatient hospital services and prescription drugs—and these reported expenditures are not linked to individual enrollees. State Medicaid agencies typically submit this information to CMS 30 days after a quarter has ended. CMS regional office staff review expenditures submitted through the CMS-64 for reasonableness and to determine whether reported expenditures are allowable in accordance with Medicaid rules, and use the data to compute the federal share for each state's Medicaid program expenditures.
If, during the CMS-64 expenditure review, CMS is uncertain whether a particular state expenditure is allowable, CMS regional offices may recommend that CMS defer the expenditure pending further review. PPACA- and state-expansion enrollees comprised about 14 percent of Medicaid enrollees at the end of the last quarter in calendar year 2014. Additionally, these enrollees comprised about 10 percent of total Medicaid expenditures for 2014 enrollees. As of June 2, 2015, approximately 69.8 million individuals were recorded as enrolled in Medicaid at the end of the last quarter of calendar year 2014. Most of these individuals—about 60.1 million, or 86 percent of total enrollees—were traditionally eligible enrollees. About 9.7 million of the 2014 enrollees—approximately 14 percent—were PPACA-expansion or state-expansion enrollees, with 7.5 million (11 percent of all Medicaid enrollees) as PPACA-expansion enrollees and 2.3 million (3 percent of all Medicaid enrollees) as state-expansion enrollees. (See figure 1 for information on Medicaid enrollment in the last quarter of calendar year 2014 and appendix III for information comparing enrollment for all four quarters in 2014.) As of June 2, 2015, states had reported $481.77 billion in Medicaid expenditures for services in calendar year 2014. Of this total, $435.91 billion (about 90 percent of total expenditures) was for traditionally eligible enrollees, $35.28 billion (7 percent) was for PPACA-expansion enrollees, and $10.58 billion (2 percent) was for state-expansion enrollees. (See figure 2 and appendix IV for more information on 2014 Medicaid expenditures.) Overall, the federal share of Medicaid expenditures was approximately 61 percent of spending for Medicaid services in 2014. For traditionally eligible enrollees, the federal share was 58 percent of total Medicaid expenditures for this population; for PPACA-expansion enrollees, the overall proportion of federal spending was 100 percent; and for state-expansion enrollees, it was 74 percent. (An illustrative computation showing how these three rates combine into the overall 61 percent figure appears below.) CMS has implemented reviews that (1) assess the accuracy of eligibility determinations and (2) examine states' expenditures to ensure they are attributed to the correct eligibility group. However, both reviews contain gaps that limit CMS's ability to ensure that expenditures for the different eligibility groups are appropriately matched with federal funds. CMS has implemented interim efforts to assess states' Medicaid eligibility determinations by requiring states to conduct pilot eligibility reviews. States conduct these reviews to assess the correctness of their decisions related to program eligibility and eligibility group, which defines the amount of federal matching funds for eligible individuals. To implement the changes required by PPACA to streamline and automate the Medicaid enrollment process, states had to make significant changes to their systems and develop new policies and procedures. In recognition of the states' need to redesign their Medicaid business operations and systems, CMS designed these pilot eligibility reviews to provide more timely feedback on the accuracy of states' eligibility determinations than under previous assessments, and allow for quicker corrective action.
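The overall 61 percent federal share reported above is consistent with an expenditure-weighted average of the three matching rates. The short sketch below reproduces that arithmetic using the calendar year 2014 totals cited in this section; it is offered only as an illustrative check, not as CMS's computation method.

```python
# Calendar year 2014 Medicaid expenditures (in billions of dollars)
# and federal shares, as reported above.
groups = {
    "traditionally eligible": (435.91, 0.58),
    "PPACA-expansion":        (35.28, 1.00),
    "state-expansion":        (10.58, 0.74),
}

total_spending = sum(spend for spend, _ in groups.values())  # ~481.77
federal_spending = sum(spend * share for spend, share in groups.values())
print(f"Overall federal share: {federal_spending / total_spending:.1%}")
# Prints 61.4%, consistent with the approximately 61 percent reported.
```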
According to CMS, the pilot eligibility reviews (1) provide state-by-state programmatic assessments of the performance of new processes and systems in adjudicating eligibility; (2) identify strengths and weaknesses in operations and systems leading to errors; and (3) test the effectiveness of corrections and improvements in reducing or eliminating those errors. States have completed the initial round of pilot eligibility reviews, which showed wide variation in both design and results among the states, reflecting, in part, the latitude states were given in designing their review methodology. Although the results varied, pilot eligibility reviews for eight of the nine states we examined identified eligibility determination errors and associated improper payments, and described the states' plans for corrective action to prevent similar errors. For subsequent rounds, CMS revised its guidance. For example, CMS updated instructions for the second round to include standard definitions for errors and deficiencies and to require the inclusion of eligibility redeterminations in the review, and the agency plans to further refine the instructions for future rounds. With these updated instructions, future rounds of pilot eligibility reviews may yield more comparable information.

However, the pilot eligibility reviews do not include a review of the accuracy of federal eligibility determinations in certain states that delegated authority to the federal government to make Medicaid eligibility determinations through the FFE. Officials from the National Association of Medicaid Directors told us that states had earlier raised concerns that federal determinations were incorrect, citing challenges related to transferring information between federal exchanges and state systems. Additionally, we recently reported that states using FFEs experienced challenges transferring applications and transmitting information between state and federal data sources, which contributed to enrollment delays. CMS has established another mechanism—termed the eligibility support contractor pilot program—to assist in developing new methodologies for assessing eligibility determinations; however, the eligibility support contractor program generally does not assess federal determinations for accuracy. Therefore, for the states in which the federal government performs eligibility determinations, there is a gap in assuring that the determinations are accurate.

According to CMS officials, the purpose of the eligibility support contractor program, along with the pilot eligibility reviews, is to inform revisions to the eligibility component of the PERM, which will be resumed in 2018. In the interim, CMS uses the eligibility support contractor to assist in developing a methodology for the future PERM eligibility review, including a methodology for assessing federal eligibility determinations. The contractor will make recommendations to CMS on necessary changes to the methodology used to test eligibility determinations for the MEQC and PERM. As a result, under the current process, CMS will not be able to assess the accuracy of federal eligibility determinations until 2018, creating a risk of improper payments in the states that have delegated authority to the federal government to make eligibility determinations through the FFEs. Federal internal control standards require that federal agencies identify and assess risks associated with achieving agency objectives.
One method for identifying the risk of inaccurate eligibility determinations could include consideration of findings from audits and other assessments. However, neither of the interim measures implemented by CMS—the pilot eligibility reviews and the eligibility support contractor program—will identify risks of improper payments due to erroneous federal determinations. According to CMS officials, the agency excluded federal determinations from the pilot eligibility reviews states must conduct because these states do not have the resources to fully review the federal determinations. Moreover, CMS officials noted that a review of federal determinations—which are independent of a state's own process—would not assist states in correcting their own eligibility determination processes. However, a review of federal eligibility determinations would help CMS assess whether the FFEs are appropriately determining an applicant's eligibility for Medicaid.

CMS modified its standard quarterly review of CMS-64 expenditures to examine expenditures for both categories of the expansion population. As part of this modified review, CMS staff must select a sample of different types of enrollees—including at least 25 PPACA-expansion eligible enrollees, 10 state-expansion eligible enrollees (where applicable), and 5 traditionally eligible enrollees—and examine their expenditures to ensure that they were reported as expenditures for the correct eligibility type. According to CMS officials, the expenditure review is primarily intended to ensure that states are correctly grouping expenditures for the different eligibility groups as initially determined, not whether the determination is correct. For example, the review assesses whether the expenditures for someone the state has determined to be a PPACA-expansion enrollee are submitted for the PPACA-expansion eligibility group.

In our review of the pilot eligibility reviews, we found that eight of the nine states we reviewed reported errors that reflected both incorrect eligibility determinations and errors in the eligibility determination process that did not result in an incorrect determination. For example, eight of the nine states reported errors that resulted in incorrect eligibility determinations, including enrollment of individuals with insurance or with incomes exceeding Medicaid standards; total improper payment amounts among these states ranged from $20 to approximately $48,000 across their samples of approximately 200 to 300 eligibility determinations. In addition, one of the eight states reported as an error its failure to send out notification letters to some enrollees within the correct time frame, although this error did not affect the accuracy of the eligibility determination. We found that errors were often related to income verification, inadequately trained staff, or challenges transmitting information between exchange and Medicaid databases. States described the corrective actions they planned to take for each error identified in their pilot eligibility reviews.

Although the changes CMS has made to the CMS-64 expenditure review have enabled the agency to identify certain types of erroneous expenditures for the expansion population, these reviews may not be able to identify expenditures that are erroneous due to incorrect eligibility determinations, such as those identified in the state pilot eligibility review examples above.
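To make the sampling step of the modified CMS-64 expenditure review concrete, the following is a minimal Python sketch of the group minimums described above. The enrollee records and field names (`group`, `reported_group`) are hypothetical placeholders; CMS's actual selection and review procedures are not specified at this level of detail in our review:

```python
import random

# Minimum sample sizes per eligibility group for the modified CMS-64
# expenditure review, as described above.
MINIMUM_SAMPLE = {
    "ppaca_expansion": 25,
    "state_expansion": 10,  # where applicable
    "traditional": 5,
}

def draw_review_sample(enrollees, seed=0):
    """Draw at least the minimum number of enrollees from each
    eligibility group; each enrollee is a dict with hypothetical fields."""
    rng = random.Random(seed)
    sample = []
    for group, minimum in MINIMUM_SAMPLE.items():
        pool = [e for e in enrollees if e["group"] == group]
        if not pool:
            continue  # e.g., a state with no state-expansion group
        sample.extend(rng.sample(pool, min(minimum, len(pool))))
    return sample

def misgrouped(sample):
    """Flag sampled enrollees whose expenditures were reported under a
    different eligibility group than the one initially determined."""
    return [e for e in sample if e["reported_group"] != e["group"]]
```

Note that, as described above, this kind of check tests only whether expenditures were grouped consistently with the initial determination, not whether the determination itself was correct.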
As a result, CMS’s expenditure review cannot provide assurance that states’ expenditures are correctly matched based on enrollees’ eligibility categories. CMS officials told us that the CMS-64 expenditure review process is not informed by the findings of the pilot eligibility reviews. Thus, if a state’s pilot eligibility review identified errors in the state’s eligibility determinations or automated eligibility systems, CMS is not using that information to target its CMS-64 review of that state’s expenditures for PPACA-expansion enrollees. For example, none of the eight states we examined that reported eligibility determination errors in their pilot eligibility reviews were identified as having eligibility- related expenditure errors by CMS regional offices. As a result, CMS is missing the opportunity to better assure that the appropriate federal matching rate is being applied to states’ expenditures. Federal internal control standards require that federal agencies identify and assess risks associated with achieving agency objectives. In addition, such information should be communicated to others within the agency to enable them to carry out their internal control responsibilities. Although the purposes of the CMS-64 expenditure review are distinct from the eligibility review, the information gained from the pilot eligibility reviews on state eligibility determination errors could be useful in identifying potentially erroneous expenditures that require further review by CMS. PPACA authorized many significant changes to the Medicaid program, such as expanded eligibility and streamlined eligibility processes between Medicaid and the exchanges. However, implementing these changes requires states to adapt their systems, policies, and procedures, resulting in a complex realignment of processes, and necessitating careful review by CMS to ensure that determinations of eligibility and the reporting of expenditures are accurate. As CMS redesigns its oversight and monitoring tools to better capture the changes brought about by PPACA to Medicaid eligibility and federal matching funds, the agency has implemented measures to inform its processes for assessing states’ eligibility determinations and reporting of expenditures. However, in the short term, CMS is missing opportunities to better ensure the accuracy of eligibility determinations in all states, and also ensure that Medicaid expenditures for different eligibility groups are appropriately matched with federal funds. By excluding Medicaid eligibility determinations made by the FFEs from its pilot eligibility reviews, CMS has created a gap in efforts to ensure that only eligible individuals are enrolled into the Medicaid program. Furthermore, although CMS has a process for assessing the accuracy of eligibility determinations in the states, CMS does not use the results of these eligibility reviews, which have the potential to provide valuable information on state eligibility determinations, to better target its review of Medicaid expenditures for different eligibility groups. Using the eligibility reviews to inform its reviews of state-reported expenditures may assist CMS in identifying payments made on behalf of ineligible or incorrectly enrolled individuals, thereby reducing the risk of improper payments in the Medicaid program. 
To improve the effectiveness of its oversight of eligibility determinations, we recommend that the Administrator of CMS conduct reviews of federal Medicaid eligibility determinations to ascertain the accuracy of these determinations and institute corrective action plans where necessary. To increase assurances that states receive an appropriate amount of federal matching funds, we recommend that the Administrator of CMS use the information obtained from state and federal eligibility reviews to inform the agency's review of expenditures for different eligibility groups in order to ensure that expenditures are reported correctly and matched appropriately.

We provided a draft of this report to HHS for comment. In its written comments, HHS highlighted the actions the department has taken to ensure the accuracy of Medicaid eligibility determinations made through the exchanges, citing the multi-layer verification processes in place to assess applicant eligibility, and also noted that it conducts reviews of expenditure data submitted by the states. HHS agreed with our first recommendation and agreed with the concept of our second recommendation.

HHS concurred with our first recommendation to conduct reviews of federal Medicaid eligibility determinations to ascertain the accuracy of these determinations and institute corrective action plans where necessary. HHS noted that federal eligibility determinations in two states are currently being reviewed by the eligibility support contractor and stated that federal determinations will be included as part of the future PERM eligibility review. However, the eligibility component of the PERM will not be resumed until 2018, and in the interim, without a systematic assessment of federal eligibility determinations, we remain concerned that CMS lacks a mechanism to identify and correct federal eligibility determination errors and associated improper payments. Given the program benefits and federal dollars involved, we urge CMS to look for an opportunity to identify erroneous federal eligibility determinations and implement corrective actions as soon as possible.

With regard to our second recommendation, HHS agreed that ensuring accurate eligibility determinations and correct expenditure reporting is an important safeguard for the Medicaid program but did not state whether it specifically concurred with the recommendation. HHS further noted that eligibility and expenditure reviews are two distinct but complementary oversight processes with different time frames. In consideration of HHS's comments, we adjusted our recommendation to take into account the differences in the time frames for these two types of reviews. We continue to believe that using the information obtained from state and federal eligibility reviews to inform the agency's review of expenditures for different eligibility groups will help ensure that expenditures are reported correctly and matched appropriately. Eligibility reviews are conducted on a different time frame than the expenditure reviews, and because states are required to identify errors and develop corrective action plans to address these errors, we anticipate that, over time, the eligibility reviews will support HHS's efforts to appropriately match state expenditures. HHS's comments are reproduced in appendix I. HHS also provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

To determine the enrollment and spending for individuals who enrolled in Medicaid in 2014, and the extent to which these individuals were identified as eligible under the Patient Protection and Affordable Care Act (PPACA), we examined data submitted to the Centers for Medicare & Medicaid Services (CMS) by states as part of their enrollment and expenditure reporting. These data included information from new enrollment forms developed by CMS that states use to report the number of enrollees by eligibility type, as well as expenditure data, reported to CMS by means of the Quarterly Medicaid Statement of Expenditures for the Medical Assistance Program—also known as the form CMS-64—within the Medicaid Budget and Expenditure System (MBES). We reviewed data for each quarter in calendar year 2014 and relevant guidance and documentation where available. We also interviewed knowledgeable CMS officials in the Center for Medicaid and CHIP Services about the data available on Medicaid enrollment and expenditures and about the steps they take to ensure data reliability. Based on these discussions, we determined that these data were sufficiently reliable for our purposes.

States submit total enrollment and aggregate quarterly Medicaid expenditures on the CMS-64 no later than 30 days after the end of each quarter. However, states may continue to submit additional data for each quarter on a continual basis and make adjustments to the three previous quarters submitted. States may report expenditures up to two years (possibly more) after the date of the original service payment. Because these are point-in-time estimates, the data are current as of the date we extracted them from the MBES. States do not necessarily report consistently for each eligibility or service category or quarter. For example, at the time of our review of the data, of the 28 states that had expanded Medicaid, 21 had reported enrollment data for PPACA-expansion enrollees for December 2014, and 14 had reported enrollment data for state-expansion enrollees for that month; some states had reported data for both groups.

We obtained enrollment and expenditure data for calendar year 2014, the first full year that states had the option of expanding Medicaid under PPACA. This includes the first through fourth quarters of the 2014 calendar year (ending March, June, September, and December 2014, respectively). Because data are reported for each month, we use the last month of the quarter to report for that quarter. For example, we used the numbers reported for March 2014 as the numbers reported by states for the first quarter of 2014. We extracted these data from the MBES on June 2, 2015.
We reviewed the data for reasonableness and consistency, including screening for missing data, outliers, and obvious errors. While enrollment data can be identified for a particular month in a quarter, expenditure data cannot, because expenditures are reported cumulatively for each quarter and added to each subsequent quarter in the year.

Beginning in January 2014, states and territories also began reporting enrollment data. CMS implemented a new form—the CMS-64.Enroll form—to collect information on total enrollment and enrollment eligibility type (e.g., PPACA-expansion enrollees and state-expansion enrollees). These data show the numbers of beneficiaries who were enrolled at any time during each month. This would include, for example, beneficiaries who may have been enrolled at the beginning of June and were no longer enrolled at the end of June. Because the enrollment data are point-in-time estimates, we were unable to add the numbers of enrollees across quarters to obtain the total number of Medicaid enrollees for the year; individuals might be enrolled continuously, and adding up each month would count the same individuals multiple times.

The CMS-64 data are used to reimburse the states for the applicable federal share of Medicaid expenditures. As we previously stated, CMS reviews these submissions, and the data are the most reliable accounting of total Medicaid expenditures. We extracted expenditure data from the CMS-64 net expenditures Financial Management Report for calendar year 2014. The Financial Management Report is an annual account of states' program and administrative Medicaid expenditures, including federal and state expenditures by expenditure category. This source includes expenditures under Medicaid demonstrations, as well as adjustments by states or CMS and collections. Some expenditure data from the CMS-64 may not yet have been reviewed by CMS. Additionally, these data do not tie expenditures to services provided to particular individuals during the reporting period.

Table 2 shows the number of individuals enrolled in Medicaid at any time during the last month of each quarter in 2014, by eligibility group. As shown, Patient Protection and Affordable Care Act (PPACA)-expansion enrollees and state-expansion enrollees comprised a small portion of total enrollees in all quarters of 2014. These are point-in-time estimates—that is, counts of enrollees for the last month in each quarter. These numbers should not be added across quarters to obtain the total number of Medicaid enrollees for the year because doing so might count the same enrollees multiple times. Table 3 shows Medicaid expenditures by eligibility group in 2014. As shown, expenditures for PPACA-expansion enrollees and state-expansion enrollees comprised a small portion of total Medicaid expenditures in 2014.

In addition to the contact named above, Robert Copeland, Assistant Director; Christine Davis; Sandra George; Giselle Hicks; Drew Long; Jasleen Modi; Giao N. Nguyen; and Emily Wilson made key contributions to this report.
Historically, Medicaid eligibility has been limited to certain categories of low-income individuals, but PPACA, enacted on March 23, 2010, gave states the option to expand coverage to nearly all adults with incomes at or below 133 percent of the federal poverty level, beginning January 1, 2014. States that do so are eligible for increased federal matching rates for enrollees receiving coverage through the state option to expand Medicaid under PPACA and, where applicable, for enrollees in states that expanded coverage prior to PPACA's enactment. GAO was asked to examine Medicaid enrollment and expenditures, and CMS oversight of the appropriateness of federal matching funds. This report examines (1) Medicaid enrollment and spending in 2014 by different eligibility groups, and (2) how CMS ensures that states are accurately determining eligibility and that expenditures are appropriately matched. GAO analyzed enrollment and expenditure data for enrollee eligibility groups submitted by states to CMS; examined relevant federal laws and regulations, internal control standards, and CMS guidance and oversight tools; and interviewed CMS officials.

PPACA-expansion and state-expansion enrollees—individuals who were not eligible under historic Medicaid eligibility rules but are eligible under (1) a state option to expand Medicaid under the Patient Protection and Affordable Care Act (PPACA) or (2) a state's qualifying expansion of coverage prior to PPACA's enactment—comprised about 14 percent of Medicaid enrollees and about 10 percent of Medicaid expenditures at the end of 2014. According to GAO's analysis of state-reported data, of the approximately 69.8 million individuals recorded as enrolled in Medicaid, about 60.1 million were traditionally eligible enrollees, comprising about 86 percent of the total; about 7.5 million (11 percent of all Medicaid enrollees) were PPACA-expansion enrollees, and 2.3 million (3 percent of all Medicaid enrollees) were state-expansion enrollees. With regard to expenditures, states had reported $481.77 billion in Medicaid expenditures for services in calendar year 2014. Of this total, expenditures for traditionally eligible enrollees were $435.91 billion (about 90 percent of total expenditures), expenditures for PPACA-expansion enrollees were about $35.28 billion (7 percent of total expenditures), and expenditures for state-expansion enrollees were $10.58 billion (2 percent of total expenditures). (Figure: Proportion of Medicaid Enrollees by Eligibility Group, Last Quarter of Calendar Year 2014.)

The Centers for Medicare & Medicaid Services (CMS), which oversees Medicaid, has implemented interim measures to review the accuracy of state eligibility determinations and examine states' expenditures for different eligibility groups, for which states may receive up to three different federal matching rates. However, CMS has excluded from review federal Medicaid eligibility determinations in the states that have delegated authority to the federal government to make Medicaid eligibility determinations through the federally facilitated exchange. This creates a gap in efforts to ensure that only eligible individuals are enrolled in Medicaid and that state expenditures are correctly matched by the federal government. In addition, CMS's reviews of states' expenditures do not use information obtained from the reviews of state eligibility determination errors to better target its review of Medicaid expenditures for the different eligibility groups.
An accurate determination of these different eligibility groups is critical to ensuring that only eligible individuals are enrolled, that they are enrolled in the correct eligibility group, and that states' expenditures are appropriately matched with federal funds for Medicaid enrollees, consistent with federal internal control standards. Consequently, CMS cannot identify erroneous expenditures due to incorrect eligibility determinations, which also limits its ability to ensure that state expenditures are appropriately matched with federal funds. GAO recommends that CMS (1) review federal determinations of Medicaid eligibility for accuracy and (2) use the information obtained from the eligibility reviews to inform the expenditure review, increasing assurances that expenditures for the different eligibility groups are correctly reported and appropriately matched. In its response, the agency generally concurred with these recommendations.
To reduce out-of-pocket costs that result from cost sharing and the utilization of non-Medicare-covered services and items, FFS Medicare beneficiaries may either purchase a private supplemental insurance policy, known as a Medigap plan, or enroll in a private health plan that has contracted to serve Medicare beneficiaries. From 1998 through 2003, M+C, Medicare's private health plan program, allowed participation by a variety of plan types, including HMOs, PPOs, and PFFS plans, as long as these plans met certain organizational and operational requirements. Unlike in the private insurance market, where PPO plans were the most prevalent type of health plan, the vast majority of M+C plans were HMOs. CMS launched two demonstrations that included plans intended to operate under the PPO model.

Beneficiaries in FFS Medicare, which consists of Medicare part A and part B, may incur substantial out-of-pocket costs. Part A helps pay for inpatient hospital, skilled nursing facility, hospice, and certain home health services, although beneficiaries remain liable for a share of the cost of most covered services. For example, Medicare requires beneficiaries to pay a deductible for each hospital benefit period, which was $840 in 2003, and covers a maximum of 90 days per benefit period. Medicare part B helps pay for selected physician, outpatient hospital, laboratory, and other services. Enrollment in part B is voluntary and requires a beneficiary to pay a monthly premium and an annual deductible for most types of part B services ($58.70 and $100, respectively, in 2003), and it may require coinsurance of up to 50 percent for some services. Beneficiaries are also liable for items and services not covered by FFS Medicare, such as routine physical examinations and most outpatient prescription drugs.

Many beneficiaries in FFS Medicare obtain more comprehensive coverage through supplemental health insurance provided by a former employer or purchased from a private insurer (Medigap). Although many employers do not offer supplemental health insurance to their retirees, Medigap policies are available nationwide. In most states, Medigap policies are organized into 10 standardized plans offering varying levels of supplemental coverage. Medigap plan F is the plan most widely selected by beneficiaries, although it does not offer prescription drug coverage; Medigap plan I offers similar coverage but includes some coverage for prescription drugs. Beneficiaries with Medigap policies receive coverage for services from any provider who is legally authorized to provide Medicare services.

Most beneficiaries may also obtain more comprehensive coverage by choosing to receive Medicare benefits through private health plans that participate in Medicare instead of through FFS Medicare. While private Medicare health plans are not available nationwide, about 80 percent of beneficiaries in 2003 had access to at least one plan within the counties where they lived. Beneficiaries who enroll in a private plan may pay a monthly premium, in addition to the Medicare part B premium, and agree to receive their Medicare-covered benefits, except hospice, through the plan. In return, beneficiaries may receive additional non-Medicare benefits and may be subject to reduced cost sharing for Medicare-covered benefits. Beneficiaries who enroll in a private plan that contracts with Medicare are entitled to coverage for all services and items included in the plan's benefit package, regardless of whether the service or item is covered under FFS Medicare.
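To illustrate how the FFS cost-sharing amounts above combine, the following is a minimal sketch in Python. The 2003 deductible and premium figures are those cited above; the 20 percent coinsurance rate applied to most part B services and the utilization in the example are assumptions for illustration only:

```python
# Illustrative 2003 FFS Medicare out-of-pocket calculation using the
# cost-sharing amounts cited above.
PART_A_DEDUCTIBLE = 840.00  # per hospital benefit period, 2003
PART_B_PREMIUM = 58.70      # per month, 2003
PART_B_DEDUCTIBLE = 100.00  # per year, 2003
PART_B_COINSURANCE = 0.20   # assumed typical rate for most part B services

def annual_ffs_out_of_pocket(hospital_benefit_periods, part_b_allowed_charges):
    """Out-of-pocket costs for a year of FFS Medicare coverage."""
    part_a = hospital_benefit_periods * PART_A_DEDUCTIBLE
    part_b = (12 * PART_B_PREMIUM
              + PART_B_DEDUCTIBLE
              + PART_B_COINSURANCE
              * max(0.0, part_b_allowed_charges - PART_B_DEDUCTIBLE))
    return part_a + part_b

# One hospital benefit period and $2,000 in part B allowed charges:
print(f"${annual_ffs_out_of_pocket(1, 2000.00):,.2f}")  # $2,024.40
```

This sketch also shows why supplemental coverage is attractive: even modest utilization leaves a beneficiary liable for substantial cost sharing on top of the part B premium.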
Congress created the M+C program, in part, to expand health plan options for beneficiaries. Previously, private plan participation in Medicare had been largely limited to HMOs. The M+C program provided the opportunity for PPOs and PFFS plans to serve beneficiaries as well. Generally, M+C plan types differed by the extent to which they used provider networks. M+C HMOs were required to maintain networks of providers, and they generally covered services furnished only by providers in their networks, except in limited circumstances such as urgent or emergency situations. (See table 1.) M+C PPOs were also required to maintain provider networks. Unlike M+C HMOs, M+C PPOs were required to pay for covered services obtained from non-network providers, although they could charge beneficiaries additional cost sharing for these services. A third type of M+C plan, the PFFS plan, was not required to maintain provider networks. Rather, M+C PFFS plans were required to pay for all covered services obtained from any provider authorized to furnish Medicare-covered services who accepted the plan's terms and conditions of payment.

While many M+C requirements were uniform across the different types of plans, two categories of requirements varied by the type of plan: those intended to ensure that enrollees had sufficient and timely access to covered services, known as access-to-services requirements, and those intended to ensure that services furnished were of sufficient quality, known as quality assurance requirements. (See table 2.) In general, plans that restricted enrollees to provider networks were subject to more extensive access-to-services and quality assurance requirements than those that did not. Accordingly, M+C HMO plans were subject to more extensive quality assurance and access-to-services requirements than M+C PFFS plans. M+C PPOs were subject to the more extensive access-to-services requirements of M+C HMOs but the less extensive quality assurance requirements of PFFS plans. For example, in order to demonstrate that they provided sufficient access to services, M+C HMOs and M+C PPOs were required to monitor and document the timeliness of the care their enrollees received from providers, while M+C PFFS plans were not required to monitor care in this way. With regard to quality assurance, M+C HMOs each year had to initiate a multi-year quality improvement project, such as a provider or enrollee education program, while M+C PPOs and M+C PFFS plans were not subject to this requirement.

M+C HMOs, M+C PPOs, and M+C PFFS plans were all paid a monthly amount per enrollee according to a statutory formula. The M+C payment rate varied by county and could be higher or lower than FFS Medicare's per capita spending in a county. An M+C plan was at full risk for the costs of covered services for its enrollees. If these costs made up a higher than anticipated portion of the plan's total revenues—consisting of enrollee premiums and monthly payments from CMS—then the plan would have less than it anticipated for administration, profit, and other contingencies.

In recent years, PPO plans have become increasingly prevalent in the private insurance market and have tended to displace other types of plans, such as HMOs, that offered less provider choice.
From 1996 through 2002, the percentage of individuals with employer-sponsored coverage who were enrolled in HMO plans decreased from 31 percent to 26 percent, while the percentage of individuals with employer-sponsored coverage enrolled in PPOs increased from 28 percent to 52 percent. In contrast, there were approximately 3,000 Medicare beneficiaries enrolled in a total of six M+C PPO plans by 2003. From 1998 through 2003, the total number of M+C plans, the vast majority of which were HMOs, decreased from 346 to 155. The number of beneficiaries covered by M+C plans also fell, from 6.1 million in 1998, or about 16 percent of all beneficiaries, to 4.6 million in 2003, or about 11 percent of all beneficiaries.

Section 402(a) of the Social Security Amendments of 1967 authorizes CMS to conduct demonstrations to identify whether changes in methods of payment or reimbursement in Medicare and other specified health care programs would increase the efficiency and economy of those programs without adversely affecting the quality of services. In addition, under section 402(b), CMS may waive requirements relating to payment or reimbursement for health care services in connection with these demonstrations. For example, CMS may be able to offer demonstration plans alternative methods of payment or other financial incentives that are not offered to other providers in the Medicare program. However, CMS does not have the authority to waive rules not related to payment or reimbursement.

Prior to the passage of MMA, CMS launched both the M+C Alternative Payment Demonstration and the Medicare PPO Demonstration. The M+C Alternative Payment Demonstration began in 2002 and included one organization offering a PPO in 2003. It is set to expire in December 2004. The Medicare PPO Demonstration, which began in 2003, included 17 organizations representing 33 plans. This demonstration is set to expire in December 2005.

Using its authority to waive requirements related to payment and reimbursement, CMS offered financial incentives to Independence Blue Cross and the plans in the Medicare PPO Demonstration that it did not offer to typical M+C plans. These incentives included potentially higher payments and the opportunity to reduce their exposure to financial risk by entering into risk-sharing agreements. CMS also allowed the plans to exceed the limits on the cost sharing that M+C plans could charge beneficiaries.

Under federal law, plans in the Medicare PPO Demonstration should have been required to allow beneficiaries to obtain plan services from providers of their choice, as long as those providers were legally authorized to furnish them and accepted the plans' terms and conditions of payment. CMS did not have authority to waive this requirement, as it was unrelated to payment or reimbursement. However, CMS improperly allowed 29 of the 33 plans in the Medicare PPO Demonstration to require, as a condition of coverage for certain services, that beneficiaries obtain those services only from network providers.

Under its authority to waive requirements related to payment for demonstration participants, CMS offered demonstration PPOs a number of financial incentives to participate in the demonstrations. By waiving the M+C requirements applicable to plan payment, CMS offered Independence Blue Cross and the plans in the Medicare PPO Demonstration an opportunity to receive payment rates that could be higher than those received by M+C plans.
Per enrollee per month, demonstration PPOs received the higher of the county-based M+C rate or a rate based on the average amount Medicare spent in that county for each FFS beneficiary. A plan's ability to receive the higher of the M+C rate or FFS-based rate could substantially increase its payment rates, depending on the counties it served. In 44 of the 214 counties where the plans in the Medicare PPO Demonstration were available in 2003, the FFS-based rate was between approximately 0.3 percent and 15.1 percent higher than the M+C payment rate. For example, in Clark County, Nevada, the FFS-based rate was $635.79, or 5.6 percent higher than the M+C payment rate of $599.95.

CMS also used its waiver authority to allow Independence Blue Cross and the plans in the Medicare PPO Demonstration to reduce their financial risk through risk-sharing agreements. Risk-sharing agreements were not available to non-demonstration M+C plans, which were required to accept full financial risk for the cost of providing covered services to their enrollees. For contract year 2003, CMS signed risk-sharing agreements with 13 organizations offering a total of 29 plans. The terms of the agreements varied. Each agreement specified an expected "medical loss ratio" (MLR), the percentage of a plan's annual revenue (consisting of monthly payments from CMS and any enrollee premiums) that would be spent on medical expenses. Generally, plans could designate the remaining percentage of revenue for administrative expenses, profit, and other contingencies. For the 12 organizations in the Medicare PPO Demonstration that had risk-sharing agreements with CMS, medical expenses represented a median 87 percent of plan revenue. CMS agreed to share a designated percentage, negotiated separately with each plan, of any difference between the plan's actual MLR and the expected MLR that fell outside a range around the expected MLR, known as a risk corridor. For each plan, the designated percentage with which it would share risk with CMS was identical whether the actual MLR was greater or lower than the expected MLR. For example, a plan's contract might have specified an MLR of 87 percent, a percentage of shared risk of 50 percent, and a risk corridor of 2 percent above and below the expected MLR. (See fig. 1.) If that plan's actual medical expenses exceeded 89 percent of its revenue, CMS would pay the plan 50 percent of the amount by which the actual MLR exceeded 89 percent. If the plan's actual MLR was lower than 85 percent, the plan would pay CMS 50 percent of the amount by which the actual MLR fell below 85 percent.

In order to allow organizations with HMO licenses to offer PPO-model health plans without having to meet the more stringent quality assurance requirements of M+C HMOs, CMS had organizations sign PFFS contracts and also waived certain M+C payment requirements. Of the 33 plans in the Medicare PPO Demonstration, 13 were offered by organizations with HMO licenses. Under M+C requirements, a PPO offered by an organization licensed as an HMO would have to adhere to the more stringent quality assurance standards applicable to HMOs. CMS indicated that it could permit licensed HMOs to establish PPO-type networks without being subject to the more stringent quality assurance requirements applicable to HMOs by structuring their plans as PFFS plans. CMS contracted with all plans participating in the Medicare PPO Demonstration as M+C PFFS plans because M+C did not prohibit organizations licensed as HMOs from offering PFFS plans.
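The payment-rate floor and the risk-corridor settlement described above reduce to a few lines of arithmetic. The following Python sketch uses the Clark County rates and the illustrative corridor terms cited above; the function names and the $100 million revenue figure are hypothetical, for illustration only:

```python
def demonstration_payment_rate(mc_rate, ffs_rate):
    """Monthly per-enrollee payment: the higher of the county M+C rate
    and the county FFS-based rate, as described above."""
    return max(mc_rate, ffs_rate)

# Clark County, Nevada, 2003: the FFS-based rate applies.
print(demonstration_payment_rate(599.95, 635.79))  # 635.79

def risk_sharing_settlement(revenue, actual_mlr, expected_mlr,
                            corridor=0.02, shared_pct=0.50):
    """Transfer under a risk-sharing agreement: positive means CMS pays
    the plan; negative means the plan pays CMS."""
    upper = expected_mlr + corridor
    lower = expected_mlr - corridor
    if actual_mlr > upper:
        return shared_pct * (actual_mlr - upper) * revenue
    if actual_mlr < lower:
        return -shared_pct * (lower - actual_mlr) * revenue
    return 0.0  # within the risk corridor: no transfer

# Expected MLR of 87 percent, 2 percent corridor, 50 percent shared
# risk, on hypothetical annual revenue of $100 million:
print(risk_sharing_settlement(100e6, 0.91, 0.87))  #  1,000,000.0 (CMS pays)
print(risk_sharing_settlement(100e6, 0.83, 0.87))  # -1,000,000.0 (plan pays)
print(risk_sharing_settlement(100e6, 0.88, 0.87))  #  0.0
```

Because the corridor is symmetric and the shared percentage is identical on both sides, the arrangement limits a plan's downside losses at the cost of sharing its upside gains with CMS.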
Although M+C required PFFS plans to pay each class of provider uniformly, CMS waived this payment-related requirement, thereby enabling these plans to establish provider networks by paying providers differently depending on whether they belonged to their networks.

CMS also waived the M+C limits on beneficiary cost sharing. An M+C plan may set beneficiary cost-sharing requirements that differ from those in FFS Medicare, but these requirements are subject to statutory limits that vary by plan type. For example, under M+C rules for PFFS plans, the actuarial value, or estimated dollar value, of the cost-sharing requirements for benefits that CMS requires the plans to cover could not exceed the actuarial value of cost-sharing requirements in FFS Medicare, which was about $1,200 annually per beneficiary in 2003. Because CMS waived this provision for demonstration PPOs, these plans were subject to no statutory or regulatory cost-sharing limits.

Because CMS signed PFFS plan contracts with all of the plans in the Medicare PPO Demonstration, these plans should have been subject to all PFFS plan requirements. In particular, by federal law, M+C PFFS plans were required to allow enrollees to receive all covered services from any provider who was legally authorized to provide Medicare services and accepted the plans' terms and conditions of payment. CMS does not have the authority to waive this requirement because it pertains to beneficiary access to providers, not payment. However, CMS allowed 29 of the 33 plans in the Medicare PPO Demonstration to establish provider networks and to exclude coverage for some services, both those covered and those not covered by FFS Medicare, obtained outside the provider network. Examples of such services include skilled nursing and home health, which are covered under FFS Medicare, and dental care and routine physical examinations, which are not covered under FFS Medicare.

In response to our inquiries, CMS, in a letter dated June 15, 2004, agreed with our view that the restriction of Medicare-covered services to network providers by plans in the Medicare PPO Demonstration violated Medicare requirements. The agency noted, however, that the plans did not place such coverage restrictions on most services in their benefit packages. In its letter, CMS said that it would instruct plans in the Medicare PPO Demonstration to provide out-of-network coverage for Medicare-covered services in 2005 if they want to continue to operate as PFFS plans and avail themselves of the less extensive quality assurance requirements applicable to M+C PFFS plans. However, CMS indicated that it would not require plans that cover non-Medicare services only in network to provide out-of-network coverage for these services.

We maintain that the Medicare PPO Demonstration plans' restriction on coverage of services obtained outside their provider networks is unlawful. The Social Security Act does not distinguish between Medicare-covered and non-Medicare-covered services with respect to an M+C PFFS plan's obligation to cover plan benefits. According to the law, M+C PFFS plans must allow enrollees to obtain all covered plan services—both Medicare-covered and non-Medicare-covered—from any provider authorized to provide the services who accepts the plans' terms of payment. Furthermore, allowing plans in the Medicare PPO Demonstration to limit coverage of certain benefits to network providers is inconsistent with statutory and regulatory requirements intended to promote quality of care for beneficiaries in M+C plans.
Under M+C, PFFS and PPO plans were held to less extensive quality assurance requirements than HMOs due, in part, to the greater choice these plans' enrollees had in obtaining services from providers. However, plans in the Medicare PPO Demonstration were allowed to restrict beneficiary choice of provider for certain services but were not held to the quality assurance standards that apply to M+C plans that restrict choice.

Demonstration PPOs did little to expand access to private Medicare health plans for beneficiaries who lacked such access. In addition, they enrolled relatively few beneficiaries: less than 1 percent of those living in counties where they operated. Furthermore, beneficiaries who enrolled in Medicare PPO Demonstration plans were far more likely to have switched from an M+C plan than from FFS Medicare.

About 98 percent of the beneficiaries who lived in counties with demonstration PPOs had other Medicare private health plans available. Although demonstration PPOs provided beneficiaries with an additional plan option in the counties where they operated, they did little to attract private health plans to counties where no M+C plans existed. In October 2003, demonstration PPOs were available in 214 counties nationwide, where approximately 10.1 million beneficiaries resided. Some form of M+C plan was available in 205 of the 214 counties. (See table 3.) About 200,000 of the 10.1 million beneficiaries, or about 2 percent, lived in the nine counties where only demonstration PPOs were available. (See fig. 2.)

Enrollment in demonstration PPOs was relatively low. Of the 10.1 million eligible Medicare beneficiaries living in demonstration PPO counties, about 98,000, or less than 1 percent, had enrolled by October 2003. (See table 4.) These 98,000 enrollees represented about 5 percent of the total enrollment in Medicare private health plans in demonstration PPO counties. Enrollment in demonstration PPOs was particularly low in the nine counties with no M+C plans: only about 100 of the approximately 203,000 beneficiaries living there enrolled.

Two plans, Independence Blue Cross and Horizon Healthcare of New Jersey, accounted for more than 70 percent of all demonstration PPO enrollment. (See fig. 3.) Of the approximately 98,000 beneficiaries enrolled in demonstration PPOs, about 23,000, or 23 percent, were enrolled in Independence Blue Cross, the one PPO plan in the M+C Alternative Payment Demonstration. Approximately 47,000, or about 48 percent of all demonstration PPO enrollees, were enrolled in Horizon Healthcare of New Jersey, a participant in the Medicare PPO Demonstration. The approximately 28,000 remaining beneficiaries were enrolled in the 32 other plans in the Medicare PPO Demonstration. These plans had an average enrollment of 878 beneficiaries.

The Medicare PPO Demonstration largely did not fulfill CMS's goal of attracting beneficiaries from FFS Medicare; most beneficiaries who enrolled in demonstration PPOs came from M+C plans. Specifically, in the 211 counties where plans participating in the Medicare PPO Demonstration were available, 26 percent of beneficiaries who were enrolled in Medicare PPO Demonstration plans were formerly enrolled in FFS Medicare, while 74 percent were formerly enrolled in M+C plans. In these same counties, 1 percent of Medicare beneficiaries were enrolled in demonstration PPO plans, 81 percent were enrolled in FFS Medicare, and approximately 18 percent were enrolled in M+C plans.
The disproportionately high enrollment in demonstration PPOs by previous enrollees in M+C plans is partially attributable to Horizon Healthcare of New Jersey, which terminated its M+C HMO plan at the end of 2002 and offered a demonstration PPO plan in 2003 in the same 21 counties where its HMO had operated in 2002. Nearly all 45,000 beneficiaries who enrolled in the Horizon demonstration plan in the beginning of 2003 were previously enrolled in the HMO plan that the demonstration plan replaced. However, even when Horizon enrollees are excluded from the analysis, 47 percent of enrollees in the other Medicare PPO Demonstration plans were previously enrolled in M+C plans.

According to CMS estimates available on the Medicare Web site, an average beneficiary aged 65 to 69 enrolled in a demonstration PPO could expect to incur $391 per month in health care expenses for premiums, cost sharing, and utilization of noncovered items and services. This amount was generally similar to or higher than the expected out-of-pocket costs associated with other types of health care coverage. Excluding premiums, or focusing on beneficiaries in poor health, however, somewhat changed the pattern of relative cost by type of coverage. To the degree that enrollees in demonstration PPO plans obtained services from non-network providers, their average out-of-pocket costs would have been higher than CMS estimates. Despite the same or higher estimated out-of-pocket costs, demonstration PPOs may have offered slightly better coverage for certain items and services, such as prescription drugs and inpatient hospitalization.

In 41 counties with approximately 90 percent of enrollment in demonstration PPOs, beneficiaries in demonstration PPOs who used only network providers were estimated to have incurred average monthly out-of-pocket costs of $391. That amount is similar to what beneficiaries with Medigap plans F and I would have incurred, which averaged $405 and $397, respectively. (See fig. 4.) Enrollees in M+C HMO and M+C PPO plans and in FFS Medicare were estimated by CMS to have incurred lower monthly out-of-pocket costs, averaging $349 and $340, respectively. The highest monthly out-of-pocket costs were estimated to have been incurred by beneficiaries in M+C PFFS plans, which averaged $423 per month. Because the reported out-of-pocket costs were averages across beneficiaries, the differences among types of plans represent the variation in plans' premiums, covered benefits, and cost sharing—not the characteristics of enrollees.

Monthly premiums, which represent a predictable expense, accounted for a relatively high percentage (26 percent) of expected out-of-pocket costs in demonstration PPOs compared to FFS Medicare and M+C plans. Demonstration PPOs had an average monthly premium of $100, which was higher than the average premium of M+C plans ($35 for M+C HMOs and PPOs and $86 for M+C PFFS plans) and lower than the average premium for the two Medigap plans ($139 for plan F and $172 for plan I). (See fig. 5.) Excluding premiums, out-of-pocket costs in demonstration PPOs were somewhat lower than in M+C plans but higher than with Medigap plans. Specifically, beneficiaries could expect an average of $231 per month in demonstration PPOs, $254 in M+C HMOs and M+C PPOs, and $277 in M+C PFFS plans. Beneficiaries with Medigap plans F and I could expect monthly expenses for cost sharing and noncovered items and services to total $205 and $150, respectively.
Relative out-of-pocket costs for beneficiaries in demonstration PPOs also depended on their expected health status. For beneficiaries expected to be in poor health, demonstration PPOs were estimated to be less costly than FFS Medicare, M+C HMOs and M+C PPOs, and M+C PFFS plans but more costly than Medigap plans F and I. (See fig. 6.) For beneficiaries expected to be in excellent health, demonstration PPOs were estimated to be less costly than M+C PFFS plans and Medigap plans F and I, but more costly than FFS Medicare and M+C HMOs and M+C PPOs.

To the degree that enrollees in demonstration PPOs obtained services from non-network providers, their average out-of-pocket costs would have been higher than those reflected on the Medicare Web site. Most demonstration PPOs excluded at least one service from coverage if it was furnished by non-network providers. When beneficiaries obtained covered services outside their plans' provider networks, they were required to pay more in cost sharing than they would have paid for the same services from network providers. Demonstration PPOs anticipated that at least some enrollees would obtain covered services from non-network providers. According to 2004 estimates submitted to CMS by organizations participating in the Medicare PPO Demonstration, a median of 11 percent of enrollee medical costs would be associated with covered services from non-network providers and thus with higher cost sharing. For example, a six-night stay in a network hospital in 2003 was projected to cost a demonstration PPO enrollee an average of $421, while the same length of stay in a non-network hospital was projected to cost an average of $1,223. Across all services in the Medicare benefit package that were covered both within and outside the plans' provider networks, the plans projected to CMS that, in 2004, enrollees would bear a median of 7 percent of the costs of those services if they obtained them from network providers and a median of 15 percent if they obtained them outside the provider networks.

Although demonstration PPOs had higher enrollee out-of-pocket costs than M+C plans, except M+C PFFS plans, demonstration PPOs tended to offer slightly better coverage for some benefits, such as prescription drugs and inpatient hospitalization. While all beneficiaries living in counties with demonstration PPOs had at least one demonstration PPO with a prescription drug benefit operating in their county, only 61 percent had an M+C HMO or M+C PPO plan with a drug benefit in their county, and none had an M+C PFFS plan with a drug benefit. In 16 of the 41 counties in our sample, at least one demonstration PPO and one M+C HMO or M+C PPO offered prescription drug coverage. In these counties, demonstration PPOs offered drug coverage that resulted in the same out-of-pocket costs for beneficiaries as the drug coverage offered by M+C HMO and M+C PPO plans ($167 per month), but higher out-of-pocket costs than the drug coverage offered by Medigap plan I ($124 per month). Demonstration PPOs were more likely than M+C HMO and M+C PPO plans to cover brand-name drugs in counties where both types of plans offered drug coverage. About 47 percent of the demonstration PPOs in our sample offered coverage for brand-name drugs, while 37 percent of M+C HMO and M+C PPO plans covered brand-name drugs. All demonstration PPOs, M+C HMOs, and M+C PPOs offered some coverage for generic drugs in these counties.
M+C PFFS plans did not offer any drug coverage. Medigap plan I did not differentiate between generic and brand-name drug coverage. For example, in Hillsborough County, Florida, beneficiaries could choose among five different plans offering prescription drug coverage in 2003: one demonstration PPO, three M+C HMO plans, and Medigap plan I. (See table 5.) The demonstration PPO provided both generic and brand-name drug coverage; it required a $12 copayment per prescription for generic drugs and a $55 copayment per prescription for brand-name drugs, and it capped coverage for all drugs at $750 annually. None of the M+C HMOs covered brand-name drugs. However, two of the M+C HMOs offered unlimited coverage for generic drugs, while the third capped coverage at $500 per year. The three M+C HMOs charged between $7 and $15 per prescription. Insurers in Hillsborough County offered the standard Medigap plan I drug coverage: a $250 annual deductible, coinsurance of 50 percent of all drug costs, and a $1,250 annual limit. Medigap plan I does not differentiate between generic and brand-name drugs.

Compared to M+C HMOs and M+C PPOs, FFS Medicare, and M+C PFFS plans, demonstration PPOs tended to offer lower out-of-pocket costs related to inpatient hospitalization. In 2003, a six-night stay in a network hospital would have cost enrollees in demonstration PPOs an average of $421, while the same six-night stay would have cost enrollees in M+C plans and FFS Medicare an average of $636 and $840, respectively. A six-night hospitalization for an enrollee in an M+C PFFS plan would have cost an average of $750. In contrast, beneficiaries with either of the two Medigap policies would have paid nothing for a six-night hospital stay.

At the time the demonstrations were launched, CMS's OACT projected that demonstration PPOs would increase Medicare spending by about $100 million over 2002 and 2003 combined. Specifically, OACT projected that the PPO plan in the M+C Alternative Payment Demonstration would increase Medicare spending by a total of $25.2 million over 2002 and 2003 combined, or $750 per enrollee per year, due to higher plan payments and CMS's sharing in the plan's financial risk. The Medicare PPO Demonstration was projected to increase Medicare spending by a total of $75 million in 2003, or $652 per enrollee per year, due to plan payments. The risk-sharing agreements with Medicare PPO Demonstration plans were not projected to result in additional Medicare spending. CMS does not yet have data on the actual cost of the demonstrations in 2003.

CMS's OACT projected that, for 2002 and 2003, additional payments to demonstration PPOs would increase Medicare spending. According to its estimates, an average of 16,800 beneficiaries per month would be enrolled in Independence Blue Cross, the PPO in the M+C Alternative Payment Demonstration, in 2002 and 2003, and monthly payments for these beneficiaries would increase Medicare spending by $4.5 million in 2002 and $5.6 million in 2003, or about $300 per enrollee per year. OACT projected that plans in the Medicare PPO Demonstration would have an average monthly enrollment of 115,000 in 2003 and that monthly payments to plans for these enrollees would increase Medicare spending by $75 million, or about $652 per enrollee during the year.

OACT also projected that Medicare spending would increase as a result of CMS's risk-sharing agreement with Independence Blue Cross. OACT projected that the plan's actual MLR would be greater than the MLR the plan projected in 2002 and 2003.
OACT estimated that Medicare’s share of the difference between the actual and projected MLR would be $4.8 million in 2002 and $10.3 million in 2003, or an average of $450 per enrollee per year. In contrast, CMS expected that it would neither save nor incur additional expenses from risk-sharing under any of the agreements in the Medicare PPO Demonstration, because OACT projected that the actual MLR would equal the projected MLR. At present, it is too early to determine the actual costs of the demonstrations in 2002 and 2003. As of July 2004, risk-sharing agreements had not yet been reconciled for any demonstration PPOs. During the reconciliation process, plans will report their actual MLRs to CMS, and depending on the difference between the expected and actual MLR, payment may be made either by the plan to CMS, or by CMS to the plan under the terms of the risk-sharing agreement. CMS also has not completed a more recent estimate of the cost of the demonstrations, which would compare spending for actual enrollment in demonstration PPOs with projected spending on enrollment in other M+C plans and FFS Medicare if the demonstrations did not exist. Enrollment in demonstration PPOs has been different than OACT anticipated, which would affect such a comparison. Actual monthly enrollment in Independence Blue Cross averaged 21,840 in 2002 and 22,835 in 2003, somewhat higher than the estimated average monthly enrollment of 16,800 in both years. Conversely, enrollment in the Medicare PPO Demonstration in 2003 was roughly half of projected enrollment. While OACT estimated an average monthly enrollment of 115,000 across all participating plans in that demonstration, the actual average monthly enrollment was 61,738. In addition to differing levels of enrollment, the demonstrations also experienced much higher than anticipated enrollment by former enrollees of other M+C plans. CMS initiated two demonstrations to expand the number of Medicare health plans operating like PPOs. To encourage participation in the demonstrations, CMS used its statutory authority to provide financial incentives to plans, such as payment rates that exceeded M+C rates and the opportunity to share financial risk with Medicare. CMS also allowed plans in the Medicare PPO Demonstration to require, as a condition of coverage for certain services, that enrollees obtain care for those services only from network providers. However, such a requirement is inconsistent with federal law for plans in the demonstration, and CMS did not have the authority to allow plans to restrict enrollees’ choice of providers so long as they were authorized Medicare providers who accepted the plans’ terms and conditions of payment. Despite CMS’s efforts, demonstration PPOs have not yet proven to be an attractive option for beneficiaries or the Medicare program. The plans were primarily offered in areas where M+C plans were already available, and enrollment has been relatively low, even in the few areas where no M+C plans existed. According to the estimates available to beneficiaries on the Medicare Web site, enrollees in demonstration PPOs could expect out-of-pocket costs that were higher than those they would have incurred in FFS Medicare or M+C plans, other than M+C PFFS plans, and no less than those they would have incurred with Medigap plans F and I. In addition to potentially higher costs for beneficiaries, demonstration PPOs may also have resulted in $100 million in higher Medicare spending in 2002 and 2003, according to initial CMS estimates. 
We recommend that the Administrator of CMS promptly instruct plans in the Medicare PPO Demonstration to provide coverage for all plan services furnished by any provider authorized to provide Medicare services who accepts the plans’ terms and conditions of payment. In written comments, CMS agreed to implement our recommendation and said it is working to ensure that Medicare PPO Demonstration plans come into compliance with the provisions that govern their Medicare participation. CMS also expressed general concern about the tone of the report and said that beneficiaries benefit from increased access to PPOs. The agency stated that lessons learned from the Medicare PPO Demonstration will help the agency implement the new Medicare Advantage regional PPO plan option in 2006. CMS’s specific comments largely fell into four areas: the report’s focus on initial demonstration outcomes, the inclusion of the PPO plan in the M+C Alternative Payment Demonstration in the analysis, the methodology and data we used to illustrate potential out-of-pocket costs for the options available to beneficiaries, and the discussion of our conclusion that CMS exceeded its statutory authority with respect to the Medicare PPO Demonstration. A summary of CMS’s specific comments and our evaluation is provided below. The full text of CMS’s written comments is reprinted in appendix III. The agency also provided technical comments, which we incorporated as appropriate. First, CMS stated that the report, by focusing on the Medicare PPO Demonstration’s initial outcomes, did not adequately present the context and value of the demonstration. CMS said that the demonstration is an experiment designed to increase availability of the PPO model in the Medicare setting, and that it will provide valuable lessons for nationwide implementation of the new Medicare Advantage regional PPO component in 2006. Because the demonstration was not intended to be a fully developed program, CMS felt that our characterization of enrollment as “low” was unwarranted. CMS also stated that the financial arrangements developed for this demonstration, such as the risk-sharing agreements, were intended to encourage plans to participate, and they provide an example of how Medicare can encourage PPOs to enter and remain in the new Medicare Advantage program. We were asked to evaluate the initial experience of demonstration plans operating under the PPO model because this experience could help inform future efforts to incorporate private plans into Medicare. We state in the report that our findings apply only to 2003, the first year of the Medicare PPO Demonstration and the second year of the M+C Alternative Payment Demonstration. We based our evaluation on enrollment in demonstration PPOs, the out-of-pocket costs Medicare beneficiaries could expect in demonstration PPOs relative to other types of coverage, and the effect of demonstration PPOs on Medicare spending. Overall, we found that less than 1 percent of the beneficiaries living in counties where demonstration PPOs operated had enrolled in demonstration PPOs, that most of the enrollees came from M+C plans, and that demonstration PPOs did not offer lower estimated out-of-pocket costs than most other types of Medicare coverage, even if beneficiaries obtained services only from network providers. PPO plans in the demonstrations could receive higher payment rates and be subject to less financial risk, relative to M+C plans. 
We acknowledge that the demonstrations are continuing and that CMS has contracted for independent evaluations of the demonstrations. Second, CMS stated that the inclusion of the Independence Blue Cross PPO from the M+C Alternative Payment Demonstration, along with the plans from the Medicare PPO Demonstration, was potentially confusing and did not adequately distinguish the different objectives of the two separate demonstrations. According to CMS, the purpose of the M+C Alternative Payment Demonstration was simply to prevent health plans from leaving the M+C program by offering alternative payment arrangements. Furthermore, CMS stated that the demonstration was not designed to encourage alternative delivery systems in general or the PPO model specifically, and that Independence Blue Cross’s status as a PPO was irrelevant. We thought it appropriate to evaluate the Independence Blue Cross plan and the Medicare PPO Demonstration plans together because the plan types were similar and because the demonstrations were conducted under the same statutory authority. Independence Blue Cross and the Medicare PPO Demonstration plans all operate under the PPO model, and in that sense the plans in the two demonstrations are indistinguishable to beneficiaries. While the purposes of the M+C Alternative Payment Demonstration and the Medicare PPO Demonstration differed, as our report states, CMS used the same statutory authority to conduct both demonstrations. This authority permits demonstrations that are designed to identify whether changes in methods of payment or reimbursement in Medicare would increase the efficiency and economy of the program without adversely affecting the quality of services. CMS’s characterization in its comments of the purpose of the M+C Alternative Payment Demonstration appears to be inconsistent with the statutory authority. Third, CMS expressed concerns with the methodology and data we used to compare the out-of-pocket costs beneficiaries could expect to incur in demonstration PPOs with those they could expect to incur with other types of Medicare coverage. In CMS’s opinion, our comparison was hypothetical because it was based on estimates of enrollees’ utilization of services, not actual utilization of services, and potentially unreliable because it may not account for regional variation in health care costs. CMS also stated that our findings for Medicare beneficiaries aged 65 to 69 may not be applicable to older beneficiaries. Finally, CMS stated that including Horizon Healthcare of New Jersey in our analysis may have skewed our calculations because it had the largest in-network deductible for inpatient hospital services of all demonstration PPOs. Our out-of-pocket cost comparisons used the same estimates that CMS makes available on the Medicare Web site through the Medicare Personal Plan Finder (MPPF), which is intended to help beneficiaries compare their health coverage options. These estimates, developed for CMS by its contractor Fu Associates, Ltd. (Fu), enabled us to compare out-of-pocket costs among various types of coverage for beneficiaries of various ages and health statuses, which actual utilization data would not have enabled us to do. Fu developed these estimates by applying utilization and spending data from the Medicare Current Beneficiary Survey (MCBS), a national sample of beneficiaries, to the 2003 benefit packages and premiums offered locally by various types of Medicare coverage. Therefore, the estimates for all types of coverage were derived consistently.
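In essence, estimates of this kind apply a benefit package’s cost-sharing rules to an assumed profile of service use. A minimal sketch for the prescription drug benefits discussed earlier appears below; the benefit parameters echo the Hillsborough County examples, but the utilization profile, the function name, and the simplifications (no generic/brand split, costs applied to the year as a whole) are assumptions for illustration only.

```python
def annual_drug_oop(n_fills, price_per_fill, copay=0.0, coinsurance=0.0,
                    deductible=0.0, annual_cap=None):
    """Beneficiary's annual out-of-pocket drug cost under one simplified
    benefit design: the plan pays costs above the deductible, less per-fill
    copays and the beneficiary's coinsurance share, up to an annual cap on
    plan payments; the beneficiary pays everything else."""
    total = n_fills * price_per_fill
    plan_pays = max((total - deductible) * (1.0 - coinsurance)
                    - n_fills * copay, 0.0)
    if annual_cap is not None:
        plan_pays = min(plan_pays, annual_cap)
    return total - plan_pays

# Assumed utilization: 24 generic fills at $60 each.
# Demonstration PPO style: $12 copay per fill, $750 annual coverage cap.
print(annual_drug_oop(24, 60.0, copay=12.0, annual_cap=750.0))         # 690.0
# Medigap plan I style: $250 deductible, 50 percent coinsurance,
# $1,250 annual coverage limit.
print(annual_drug_oop(24, 60.0, coinsurance=0.5, deductible=250.0,
                      annual_cap=1250.0))                              # 845.0
```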
If utilization and spending in our sample were higher than the national average, then actual out-of-pocket costs would have been higher than those we estimated; however, the relative differences between the types of coverage—which form the basis for our finding—would be expected to be similar. In conducting our comparisons, we sought to capture the typical plan options available to all eligible Medicare beneficiaries—not only PPO enrollees—residing in areas with demonstration PPOs. To capture the typical plan option in these areas, we chose a sample of 41 counties containing 90 percent of enrollment in demonstration PPOs and weighted our calculations by the number of eligible beneficiaries residing in each county. Horizon Healthcare of New Jersey remained in our analysis because Horizon’s demonstration PPO plan was available to 32 percent of all eligible beneficiaries in these 41 counties in December 2003. We presented results for beneficiaries aged 65 to 69, the largest of the six Medicare age groups for which Fu calculated out-of-pocket cost estimates. We also conducted our comparison on a substantially older age group—beneficiaries aged 80 to 84—and found similar results. Fourth, CMS stated that our legal finding—that the agency exceeded its authority by allowing plans in the Medicare PPO Demonstration to cover certain services only if beneficiaries obtained them from the plans’ network providers—should be discussed in the context of the demonstration’s objectives. The agency agreed with our recommendation that Medicare PPO Demonstration plan participants be instructed to remove impermissible restrictions on enrollees’ access to providers for all covered plan benefits, and not just those covered under parts A and B, but did not provide a date by which the recommendation would be fully implemented. CMS stated, however, that the legal finding needed to be viewed in the context of the policies the agency intended to advance through the Medicare PPO Demonstration. CMS reiterated many of the factors that it believes discouraged the offering of PPO plans in the M+C program, and said that the agency wanted to provide flexibility in the demonstration in order to facilitate participation by plans. CMS indicated that it had taken sufficient measures during the Medicare PPO Demonstration qualification process to ensure that all demonstration plans provided enrollees with adequate access to network providers for all covered services, and all plans were required to offer some out-of-network coverage. In addition, the agency indicated that all PPO plans were required to provide full disclosure to enrollees concerning the costs for in-network and out-of-network services. CMS had already identified for us many of the reasons that led it to implement the Medicare PPO Demonstration in the manner in which it did, and we included them in this report. The context within which CMS believes the legal finding must be placed is not relevant to the issue of whether CMS exceeded its authority. The waiver authority at issue is limited, and its use must conform to those limits. CMS’s reiteration of the policy objectives the demonstration was intended to achieve, its explanations for why some plans did not cover all plan services out of network, and its discussion of the measures that it took to ensure adequate access to services and enrollee education are not relevant considerations and do not make CMS’s actions any less unlawful.
We are sending copies of this report to the Administrator of CMS and appropriate congressional committees. The report is available at no charge on GAO’s Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staffs have any questions, please call me at (202) 512-7119. Another contact and staff acknowledgments are listed in appendix III. In 2003, the Centers for Medicare & Medicaid Services (CMS) initiated the Medicare Preferred Provider Organization (PPO) Demonstration. To facilitate participation in the demonstration, CMS permitted organizations participating in the demonstration (demonstration participants) to require, as a condition of coverage, that their enrollees obtain specified services, including services covered by parts A and B of the Medicare program, only from “network” providers. As discussed below, we believe that CMS’s decision to permit demonstration participants to restrict enrollees’ choice of providers exceeded its authority and was, therefore, unlawful. The Balanced Budget Act of 1997 (BBA) established a new part C of the Medicare program, known as the Medicare+Choice (M+C) program, adding sections 1851 through 1859 to the Social Security Act (act). Under section 1851(a)(1) of the act, every individual entitled to Medicare part A and enrolled under part B may elect to receive benefits through either the Medicare fee-for-service program or a part C M+C plan, if one is offered where he or she lives. In general, M+C organizations must provide coverage for all services that are covered under parts A and B of Medicare. M+C organizations also may include coverage for other health care services that are not covered under parts A and B of Medicare. They may satisfy their coverage obligations by furnishing services themselves, by arranging for enrollees to receive services through contracts with providers, or by reimbursing providers who furnish services to enrollees. Section 1851(a)(2) of the act authorizes several types of M+C plans, two of which are relevant to the Medicare PPO Demonstration: “coordinated care plans” and “private fee-for-service plans.” M+C coordinated care plans include health maintenance organization (HMO) plans, with or without point of service options, and PPO plans. As defined by CMS, coordinated care plans have a CMS-approved network of providers under contract or arrangement with the M+C organization to deliver health care to enrollees. M+C organizations offering coordinated care plans may specify the network of providers from whom enrollees may receive services if they demonstrate that all covered services are available and accessible under the plan. Unlike most other coordinated care plans, PPO plans must provide coverage for all covered benefits out of network. Generally, PPO plans require enrollees to pay additional costs for services furnished by providers outside the network. Section 1859(b)(2) of the Social Security Act defines the term “private fee-for-service plan” for purposes of the M+C program. As defined, private fee-for-service plans are required to reimburse hospitals and other providers on a fee-for-service basis without placing the providers at financial risk. These plans may not vary the amounts paid to providers based on the number or volume of services the providers furnish.
Moreover, in contrast to coordinated care plans, private fee-for-service plans are not required to have networks of providers; instead they must allow enrollees to obtain covered services from any provider who is lawfully authorized to provide them and who agrees to the terms and conditions of payment, regardless of whether the provider has a written contract with the plan to furnish services to enrollees. While many of the statutory and regulatory requirements governing M+C plans are similar, others vary by plan type. M+C organizations generally must be licensed as “risk-bearing entities” by the states where they offer M+C plans. HMO plans and most other coordinated care plans, however, are subject to more stringent quality assurance requirements than PPO and private fee-for-service plans. For example, HMO plans are required by statute to implement programs to improve quality and assess the effectiveness of such programs through systematic follow-up and to make information on quality and outcomes measures available to beneficiaries to facilitate comparisons among health care options. These requirements do not apply to private fee-for-service and PPO plans. HMO plans, as well as other coordinated care plans, are also held to more extensive access requirements than private fee-for-service plans to ensure timely access to care. Finally, although M+C PPO plans generally are held to less stringent quality assurance standards than other coordinated care plans, M+C organizations licensed as HMOs that offer M+C PPO plans may not avail themselves of the less stringent quality assurance standards applicable to M+C PPOs. Instead, a licensed HMO that offers an M+C PPO plan must comply with the quality assurance standards applicable to HMOs. CMS is authorized by section 402(a)(1)(A) of the Social Security Amendments of 1967 to conduct demonstrations designed to test whether changes in methods of payment or reimbursement in Medicare and other specified health care programs would increase the efficiency and economy of those programs without adversely affecting the quality of services. Section 402(b) authorizes CMS to waive requirements related to payment or reimbursement for providers, services, and other items for purposes of demonstration projects, but does not authorize the agency to waive requirements unrelated to payment or reimbursement. Section 402(b) also authorizes CMS to pay costs in excess of those that would ordinarily be payable or reimbursable, to the extent that the waiver applies to these excess costs. According to CMS, the agency initiated the 3-year Medicare PPO Demonstration in January 2003 to make the PPO health care option, which had been found to be successful in non-Medicare markets, more widely available to Medicare beneficiaries. Its objective was to introduce more variety into the M+C program so that Medicare beneficiaries would have more options available to them. In addition, CMS believed that the PPO demonstration plans would introduce incentives that would result in more efficient and cost-effective use of medical services. CMS entered into contracts with all demonstration participants. To facilitate HMO participation in the Medicare PPO Demonstration, CMS permitted licensed HMOs, as well as all other demonstration participants, to offer private fee-for-service plans.
Exercising its authority under section 402(b), CMS waived statutory and regulatory payment requirements applicable to private fee-for-service plans, allowing the participating organizations to vary the amount of payments among providers, among other things, so that the plans offered would more closely resemble PPO plans. As a result, M+C organizations with HMO licenses were able to establish PPO-type plans and were not subject to the more stringent quality assurance standards applicable to HMOs and most other coordinated care plans. The private fee-for-service plan model contract provided that requirements that were not expressly waived by CMS would remain in effect during the term of the contract. Nevertheless, CMS approved plan provisions that required enrollees to obtain various items and services, including those covered under parts A and B of Medicare, from “network” providers. CMS officials told us that prospective demonstration participants had expressed concerns about their ability to determine appropriate payment rates for providers who were not under contract with the demonstration participant, and that the agency had decided to afford demonstration participants flexibility in this area in order to get the demonstration project underway. CMS officials also indicated that they had encouraged the demonstration participants to cover all benefits “out of network” before the end of the demonstration period. Notably, guidance issued by CMS to assist M+C organizations, including demonstration participants, in developing plan brochures for 2004 contained specific instructions for demonstration participants to indicate in their brochures if they did not cover all Medicare benefits “out of network.” The Social Security Act places restrictions on private fee-for-service plans’ authority to limit enrollees’ selection of providers. Specifically, section 1852(d)(4) requires an organization offering an M+C private fee-for-service plan to demonstrate that the plan affords sufficient access to health care providers by showing that it has established payment rates that are no lower than the corresponding rates under the Medicare fee-for-service program or that it has contracts with a sufficient number of providers to provide covered services, or both. That section also provides that the access standards may not be used to restrict the persons from whom enrollees may obtain covered services, thus suggesting that private fee-for-service plans are not authorized to limit their enrollees’ selection of providers, for example, to those within an established “network.” The definition of the term “private fee-for-service plan” at section 1859(b)(2) echoes this provision, stating that such plans do not restrict the selection of providers from among those who may lawfully provide covered services and agree to accept the terms and conditions of payment. CMS’s implementing regulation at 42 C.F.R. § 422.114(b) specifies that a plan must permit enrollees to receive services from any provider that is authorized to provide the service under original Medicare; this implements the part of section 1852(d)(4) that says the access requirements cannot be construed as restricting the persons from whom enrollees of an M+C private fee-for-service plan may obtain covered services. In light of the statutory language and CMS’s interpretation, we conclude that Medicare PPO Demonstration plan provisions limiting enrollees to “network” providers are inconsistent with sections 1852(d)(4) and 1859(b)(2) of the act.
Because these sections are unrelated to payment, CMS was not authorized to waive them in connection with the Medicare PPO Demonstration. Further, the plans’ exclusions of coverage for services furnished by “non-network” providers are incompatible with statutory requirements designed to ensure quality of care to enrollees in M+C plans. As discussed earlier, private fee-for-service and PPO plans participating in the M+C program are held to less stringent quality assurance standards than HMOs and certain other coordinated care plans. The applicability of less stringent quality assurance standards is due, in part, to the increased choices enrollees in private fee-for-service and PPO plans have in comparison to enrollees in most other types of plans. CMS has expressly recognized this rationale for the distinction among various types of plans. In connection with an M+C rulemaking on the matter, CMS responded to a concern that private fee-for-service plan quality assurance requirements were inadequate to protect enrollees by explaining that quality assurance standards may not be as important in the case of private fee-for-service plans “in which the enrollee has complete freedom of choice to use any provider in the country, and is not limited to a defined network of providers.” CMS’s approval of restrictions on enrollee choice and simultaneous failure to apply the more stringent quality standards applicable to HMO and most other coordinated care plans were inconsistent with the statutory framework under which M+C plans are required to operate. Moreover, while CMS stated that the demonstration was intended to offer beneficiaries greater choice by encouraging the availability of PPO-type plans, regulatory provisions applicable to M+C PPO plans would have precluded demonstration participants from requiring enrollees to obtain services only from “network” providers as a condition of coverage. CMS has defined a PPO plan, in part, as a plan that “provides for reimbursement for all covered benefits regardless of whether the benefits are provided within the network of providers.” (Emphasis added.) Since this regulatory provision is not related to payment or reimbursement, section 402(b) of the Social Security Amendments of 1967 would not have authorized CMS to waive it in connection with the Medicare PPO Demonstration. In its written response to our inquiry about the demonstration, CMS indicated that the demonstration plans’ conditioning coverage of “Medicare-covered services” (those services covered under parts A and B of Medicare) on their being furnished by “network” providers violates statutory access requirements applicable to private fee-for-service plans. CMS explained, however, that while it had reviewed all plans to ensure that services covered by parts A and B of the Medicare program were covered “in network,” some organizations had indicated that they were unable to cover certain services “out of network” because of the complexities associated with determining payment for “out-of-network” providers. CMS, nevertheless, believed that “the basic principle of out-of-network access was satisfied” because “the demonstration products offer access to most Medicare-covered services.” CMS also denied that it had waived applicable access requirements, stating that it did not have the authority to do so.
CMS indicated that it will instruct demonstration participants that they must provide out-of-network coverage for all “Medicare-covered services” in 2005, the third year of the Medicare PPO Demonstration, if they wish to continue to avail themselves of the quality assurance standards applicable to private fee-for-service plans. CMS also indicated, however, that it will not require plans to provide out-of-network coverage for other covered benefits for which the demonstration plans provide only in-network coverage. CMS did not provide a legal basis for distinguishing between Medicare-covered services and other plan services with respect to a demonstration plan’s obligation to provide “out-of-network” coverage. We disagree with CMS’s assertion that it did not waive the statutory requirements at issue. CMS knowingly permitted organizations participating in the demonstration to operate in a manner that was inconsistent with sections 1852(d)(4) and 1859(b)(2) of the Social Security Act. The agency’s decision to do so achieved a result for demonstration participants that CMS acknowledges it did not have the authority to provide. Therefore, we view CMS’s action as tantamount to a waiver. We also conclude that all benefits covered under a PPO demonstration plan, not just services covered under parts A and B, must be covered “out of network” by demonstration plans. The Social Security Act defines a private fee-for-service plan, in part, as a “Medicare+Choice plan” that “does not restrict the selection of providers among those who are lawfully authorized to provide the covered services.” A “Medicare+Choice plan,” for purposes of the definition of a private fee-for-service plan, is defined, in part, as “health benefits coverage offered under a policy, contract, or plan by a Medicare+Choice organization.” Furthermore, CMS guidance also provides that enrollees in M+C private fee-for-service plans can obtain “plan covered health care services from any entity that is authorized to provide services under parts A and B and who is willing to accept the plan’s terms and conditions of payment.” The act, therefore, does not distinguish between Medicare-covered services and other covered services in specifying the private fee-for-service plan’s obligations to cover plan benefits. Section 402(b) of the Social Security Amendments of 1967 provides CMS with waiver authority, but also limits that authority by providing that the agency may only waive requirements related to payment or reimbursement. In connection with the Medicare PPO Demonstration, CMS overrode the limitation contained in section 402(b), tacitly waiving statutory provisions unrelated to payment. As a general matter, agencies may not override statutory limitations on their activities by administrative action. Therefore, we conclude that CMS’s decision to allow demonstration participants to restrict enrollees’ access to providers for any services covered by the plans exceeds its authority and is, therefore, unlawful. This appendix provides additional information on the key aspects of our analysis. First, it describes the Centers for Medicare & Medicaid Services’ (CMS) administrative data sources we used to assess demonstration preferred provider organization (PPO) enrollment and plan participation. Second, it describes the CMS data sources we used to compare estimated beneficiary out-of-pocket costs among six types of coverage. Third, it describes CMS data sources used to compare 2003 benefits among the six types of coverage.
Fourth, it describes CMS data we used to estimate the proportion of expected 2004 annual out-of-pocket costs and cost sharing when demonstration PPO enrollees utilize services outside of plan provider networks. Fifth, it describes how CMS estimated the effect of demonstration PPOs on Medicare spending. Finally, it addresses data reliability issues and limitations. We used the following CMS administrative data sets to identify the number of eligible Medicare beneficiaries and enrollment by health plan in each county where demonstration PPOs operated: the Geographic Service Area (GSA) file for October 2003, the Medicare Managed Care Plan Monthly Report for October 2003, and the Medicare Managed Care Contract (MMCC) report of 2003. Because the focus of our analysis was on plans available to Medicare beneficiaries at large, we used plan enrollment data from GSA to exclude demonstration PPO and Medicare+Choice (M+C) plans that were employer-only plans; cost plans; and demonstration plans only available to specific beneficiaries, such as Medicare dual-eligibles. Demonstration PPO and M+C plan county data from GSA were also used to construct our county-level U.S. map. To compare out-of-pocket costs for beneficiaries, we used administrative data from GSA and CMS’s 2003 Medicare Health Plan Compare (MHPC) data set to identify private health plans. For each plan in each county, we then used CMS’s 2003 Medicare Personal Plan Finder (MPPF) to obtain estimated monthly out-of-pocket costs. We then averaged these costs across counties for enrollees in demonstration PPOs, M+C health maintenance organizations (HMO) and M+C PPOs, M+C private fee-for-service (PFFS) plans, Medigap plans F and I, and fee-for-service (FFS) Medicare. First, we used data from MHPC to identify one plan offered by each organization in each county where demonstration PPOs were available. Because organizations may offer numerous options for each plan, each with its own benefit package and premium, we selected the one option that was most favorable for beneficiaries in each service area. Selecting one option for each plan may have caused us to underestimate actual out-of-pocket costs for beneficiaries in some health plans. In addition, we established a sample group of 41 counties containing approximately 90 percent of all demonstration PPO enrollment. This sample group includes the 21 counties where Horizon Healthcare of New Jersey’s demonstration PPO plan was available, and the 23 counties that made up 80 percent of enrollment in demonstration PPOs other than Horizon. Next, we used estimated beneficiary out-of-pocket cost data from CMS’s MPPF to calculate the 2003 average monthly out-of-pocket costs for enrollees in demonstration PPOs and the other types of coverage. CMS and its contractor, Fu Associates, Ltd. (Fu), estimated all costs related to covered and noncovered benefits when an enrollee utilizes services within the plan’s network of providers. We calculated average monthly out-of-pocket costs for beneficiaries aged 65 to 69 for each type of coverage, in each county, and across all health statuses. We weighted the estimates of demonstration PPOs, M+C HMO and M+C PPO plans, M+C PFFS plans, and Medigap plans F and I by the distribution of health statuses of the beneficiary cohorts used to create Fu’s estimates, and the number of eligible Medicare beneficiaries in each county.
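This weighting step is a standard weighted mean. The sketch below illustrates it with a hypothetical record layout and made-up numbers; the actual MPPF files carry many more fields than are shown here.

```python
def weighted_avg_oop(records):
    """Average monthly out-of-pocket cost across counties and health
    statuses, weighted by the number of eligible beneficiaries in the
    county and the health-status share of the estimation cohort.
    Each record: (monthly_oop_cost, county_eligibles, health_status_share);
    field layout is illustrative only."""
    weights = [eligibles * share for _, eligibles, share in records]
    return sum(r[0] * w for r, w in zip(records, weights)) / sum(weights)

# Hypothetical: two counties, two health statuses apiece.
records = [
    (180.0, 50_000, 0.7), (320.0, 50_000, 0.3),
    (150.0, 20_000, 0.7), (290.0, 20_000, 0.3),
]
print(f"${weighted_avg_oop(records):.2f} per month")  # $213.43 per month
```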
We separated M+C PFFS plans from M+C HMOs and PPOs because the out-of-pocket costs of enrollees in M+C PFFS plans tended to be substantially higher than those of enrollees in the other two types of M+C plans. We used CMS’s 2003 MHPC administrative data set in conjunction with CMS’s 2003 guide to “Choosing a Medigap Policy” to compare the benefit packages for enrollees in demonstration PPOs, M+C HMOs and M+C PPOs, M+C PFFS plans, Medigap plans F and I, and traditional FFS Medicare. We compared prescription drug coverage and inpatient hospital services for each type of coverage using our sample of plans in 41 counties. We selected the one plan option for each plan that appeared most favorable to beneficiaries. We also compared prescription drug coverage between these types of plans in a sample of 16 counties where at least one demonstration PPO and one M+C plan offered prescription drug coverage as a part of their benefit package. In addition, data from CMS’s Health Plan Management System (HPMS) were used to compare the non-network benefits offered by each demonstration PPO to the 2003 network benefits offered by demonstration PPOs. CMS’s Office of the Actuary (OACT), which projects trends in Medicare spending, provided the data we used to compare the proportion of expected 2004 gross annual out-of-pocket costs and cost sharing when demonstration PPO enrollees utilize services inside and outside of plan provider networks. The data we obtained were submitted by plans to OACT as part of their annual revenue and medical expense projections and contained estimates of per member per month gross medical costs and target medical loss ratio (MLR) for 2004. We contacted OACT to verify that we possessed a submission for each of the 20 demonstration PPOs in our sample of 41 counties. To determine the effects of demonstration PPOs on Medicare spending, we used projections developed by OACT and conducted interviews with OACT staff. To arrive at these projections, OACT compared how much Medicare would pay demonstration PPOs per enrollee with the amount Medicare would spend on those beneficiaries if the demonstration did not exist and those beneficiaries were instead enrolled in M+C plans or FFS Medicare. OACT also estimated the effect that risk-sharing agreements signed between CMS and demonstration PPOs had on Medicare spending. We used a variety of CMS data sources in our analysis: the October 2003 GSA file, the October 2003 Monthly Report, the October 2003 MMCC, the 2003 MPPF, the 2003 HPMS, the 2003 MHPC, and the estimated 2004 Medicare PPO Demonstration plan medical cost files. In each case, we determined that the data were sufficiently reliable for our purposes in addressing the report’s objectives. We verified the reliability of the administrative data we used to determine enrollment figures—CMS’s GSA, M+C Monthly Report, and MMCC—by comparing the list of unique demonstration PPO contract identification numbers and organization names to CMS’s list of participating demonstration PPO plans and organizations. We did not find any discrepancies between the two lists. We worked closely with CMS staff and Fu to verify the validity of out-of-pocket cost estimates from the 2003 MPPF. We verified that the results of our out-of-pocket cost analysis were consistent with CMS’s initial tests of its own data, and that our methodology, in conjunction with its methodology, did not introduce bias.
In addition, we worked with CMS to verify the validity of the 2004 Medicare PPO Demonstration plan medical cost files submitted by the health care organizations by ensuring that the information they provided to us corresponded with our data for the sample of 41 counties. We identified three potential limitations of our analysis; however, we addressed these limitations through conversations with CMS and Fu, and by using the best available data. First, our report focuses on the results of our analysis of estimated enrollee out-of-pocket costs for beneficiaries aged 65 to 69. We also obtained similar results when we analyzed estimated enrollee out-of-pocket costs for beneficiaries aged 80 to 84. In addition, we verified with CMS and Fu that the trends associated with the 2003 out-of-pocket costs of the 65 to 69 age group were similar to the out-of-pocket costs of Medicare beneficiaries aged 70 to 74. Second, for our out-of-pocket cost analysis, we used national FFS Medicare estimates, rather than county-level estimates, because county-level estimates were not available. Based on our conversations with CMS and Fu, we believe that CMS’s national figures were more accurate than estimates produced by adjusting the national figures to the county level using FFS spending in each county. Third, because county-level Medigap out-of-pocket costs and benefit package information were not available to us, we used CMS estimates of national Medigap out-of-pocket costs and standardized national Medigap benefits descriptions for our benefits comparison. For further information about this report, contact James C. Cosgrove at (202) 512-7029. In addition to the person named above, key contributors to this report were Yorick F. Uzes, Zachary R. Gaumer, Jennifer R. Podulka, Jennie F. Apter, Helen T. Desaulniers, and Kevin C. Milne.
Preferred provider organizations (PPO) are more prevalent than other types of health plans in the private market, but, in 2003, only six PPOs contracted to serve Medicare beneficiaries in Medicare+Choice (M+C), Medicare's private health plan option. In recent years, the Centers for Medicare & Medicaid Services (CMS), the agency that administers Medicare, initiated two demonstrations that include a total of 34 PPOs. GAO (1) described how CMS used its statutory authority to conduct the two demonstrations, (2) assessed the extent to which demonstration PPOs expanded access to Medicare health plans and attracted enrollees in 2003, (3) compared CMS's estimates of out-of-pocket costs beneficiaries incurred in demonstration PPOs with those of other types of coverage, including fee-for-service (FFS) Medicare, M+C plans, and Medigap policies in 2003, and (4) determined the effects of demonstration PPOs on Medicare spending. CMS used its statutory authority to offer health-care organizations financial incentives to participate in the two demonstrations. CMS, however, exceeded its authority when it allowed 29 of the 33 plans in the second demonstration, the Medicare PPO Demonstration, to cover certain services, such as skilled nursing, home health, and routine physical examinations, only if beneficiaries obtained them from the plans' network providers. In general, beneficiaries in Medicare PPO Demonstration plans who received care from non-network providers for these services were liable for the full cost of their care. The demonstration PPOs attracted relatively few enrollees and did little to expand Medicare beneficiaries' access to private health plans. About 98,000, or less than 1 percent, of the 10.1 million eligible Medicare beneficiaries living in counties where demonstration PPOs operated had enrolled in the demonstration PPOs by October 2003. Further, although one of the goals of the Medicare PPO Demonstration was to attract beneficiaries from traditional FFS Medicare and Medigap plans, only 26 percent of enrollees in its plans came from FFS Medicare, with all others coming from M+C plans. About 9.9 million, or 98 percent, of the 10.1 million eligible Medicare beneficiaries also had M+C plans available in their counties. Virtually no enrollment occurred in counties where only demonstration PPOs operated. According to CMS's 2003 estimates, on average demonstration PPO enrollees could have expected to incur total out-of-pocket costs--expenses for premiums, cost sharing, and noncovered items and services--that were the same as or higher than those they would have incurred with nearly all other types of Medicare coverage. However, relative costs by type of coverage varied somewhat depending on beneficiary health status. For certain services and items, such as prescription drugs and inpatient hospitalization, demonstration plans provided better benefits relative to some other types of Medicare coverage. Although it is too early to determine the actual program costs of the two demonstrations, CMS originally projected that the first demonstration would increase Medicare spending by $750 per enrollee per year and the second demonstration would increase Medicare spending by $652 per enrollee per year. Based on the agency's original enrollment projections, which exceeded actual 2003 enrollment, CMS estimated the demonstration PPOs would increase program spending by $100 million for 2002 and 2003 combined.
In 1986, the Air Force began developing TSSAM to provide a low-observable conventional cruise missile. Key characteristics included long range, autonomous guidance, automatic target recognition, and precision accuracy with a warhead able to destroy a well-protected structure. After the TSSAM procurement unit cost increased from an estimated $728,000 in 1986 to $2,062,000 in 1994 (then-year dollars), the Department of Defense (DOD) terminated the program. Following a comprehensive reassessment of force requirements, the Air Force and Navy agreed they urgently needed an affordable missile with most of TSSAM’s characteristics. They proposed a joint program that would build upon the lessons learned from TSSAM and more recent programs that use new acquisition approaches. On September 20, 1995, the Principal Deputy Under Secretary of Defense for Acquisition and Technology approved the initiation of the JASSM program, under Air Force leadership. It is to be developed, produced, and initially deployed over the next 5 years. The Air Force’s April 1996 schedule for JASSM development and early production calls for a 24-month competitive program definition and risk reduction phase beginning in June 1996 (milestone I); a 32-month engineering and manufacturing development phase beginning in June 1998 (milestone II); production of 75 low-rate initial production missiles; production of 90 full-rate production missiles beginning in April 2001 (milestone III); and initial JASSM deployment in June 2001. Figure 1 shows the Air Force’s schedule for JASSM development, missile deliveries, and testing. The estimated development cost for the JASSM program is $675 million (fiscal year 1995 dollars). The Air Force plans to buy about 2,400 missiles at an average unit procurement price of $400,000 to $700,000 (fiscal year 1995 dollars). Based on these unit prices, we estimate the procurement cost for 2,400 Air Force missiles is $960 million to $1.68 billion, and the total estimated acquisition cost (development and procurement) is $1.64 billion to $2.36 billion. The Congress appropriated $25 million to start the JASSM program in fiscal year 1996, and the President’s fiscal year 1997 budget includes $198.6 million for the program. The JASSM Single Acquisition Management Plan has an overriding theme of affordability. To provide the required capability at an affordable cost, the Air Force plans to use a series of new acquisition processes and encourage industry to use commercial practices to lower the missile’s price and speed its deployment. To accomplish these challenging goals, the Air Force intends to establish a unique partnership with industry. In contrast to its past practice, the Air Force intends to minimize (1) the requirements to use military specifications and standards and (2) government oversight. The JASSM request for proposal, for example, is significantly shorter than those for TSSAM and other past missile acquisition programs, focuses on the desired capability, and does not tell industry how to develop the missile. The Air Force intends to offer industry the maximum possible flexibility to apply commercial practices and innovation. During the 24-month competitive phase, JASSM program office personnel plan to join with contractor personnel to form problem-solving teams and help facilitate the development of the proposed missile design.
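The acquisition cost range cited above follows directly from the planned quantity and the unit price bounds (all figures in fiscal year 1995 dollars):

```latex
\begin{align*}
C_{\text{proc}} &= 2400 \times \$400{,}000 \;\text{to}\; 2400 \times \$700{,}000
                 = \$0.96\text{ billion to }\$1.68\text{ billion} \\
C_{\text{acq}}  &= \$675\text{ million} + C_{\text{proc}}
                 \approx \$1.64\text{ billion to }\$2.36\text{ billion}
\end{align*}
```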
The JASSM contractors are expected to modify an existing missile design, use available off-the-shelf technology, and use a variety of commercial business and technical practices. The use of commercial practices has been stressed to all potential JASSM developers. These initiatives are intended to lower development, production, and operational support costs, as well as reduce the time needed to develop and produce the system. The principal focus of the 24-month competition, for example, is to eliminate unnecessary cost and allow the contractors to trade off performance and other requirements. JASSM cost is as important as technical performance and schedule. Another innovation is that the contractor is expected to provide a lifetime, total system warranty for each missile. A similar approach using reforms and commercial practices is being used in a pilot program to acquire the Joint Direct Attack Munition (JDAM). Under this pilot program, the Air Force is developing a guidance system and steerable tail kit to significantly improve the accuracy of 1,000- and 2,000-pound general purpose bombs that are currently in its inventory. The JDAM program office is projecting at least a 50-percent reduction in the baseline average unit procurement price, which includes the cost of a full system warranty. Although the Air Force plans to rely upon existing missile designs and off-the-shelf technology to speed JASSM development, essential guidance and automatic target recognition technologies are not mature. The 2 to 3 years available before JASSM flight testing may not be sufficient time to fully develop, integrate, and test these complex subsystems. Because these technologies are essential to meeting the program’s requirements, their successful development is one of the pacing items of the program. JASSM is expected to use an inertial navigation system integrated with a global positioning system receiver to navigate from its launch point to the target area. This navigation system is expected to be low-cost, reliable, and accurate to about 13 meters. Global positioning system receivers, however, are vulnerable to both intentional and unintentional interference, including jamming. Recent studies by DOD and the Air Force describe how properly placed jammers can cause an unprotected global positioning system-aided weapon to entirely miss its target. One problem facing the engineering community is defining the potential jamming threat so that a cost-effective countermeasure can be developed. Although the Air Force is evaluating electronic and other countermeasures to develop an antijam capability, it appears a combination of techniques may be needed to ensure reliable and accurate missile guidance. Specially designed antennas and more rapid connection with global positioning system satellites are among the techniques being considered. The Defense Science Board recommended that DOD develop more accurate inertial navigation systems that do not rely on a global positioning system as much. The high cost of potential antijam devices or more accurate navigational systems has limited their use in precision-guided munitions. An Air Force laboratory is conducting a program to develop and test a global positioning system antijam system suitable for precision-guided munitions such as JDAM. Development of the antijam system began in 1995, and it is scheduled to be tested in fiscal year 1998. Assuming the threat uncertainties are resolved and the antijam system’s cost is acceptable, it could initially be added to JDAM. 
It may also be adaptable to JASSM and other precision-guided munitions. With this schedule, however, this system may not be available in time to meet JASSM’s June 1998 critical design freeze, when the missile design is to be finalized. JASSM requires an automatic target recognition system for a true fire-and-forget precision attack capability under adverse weather conditions. Other programs, such as the Tomahawk, are trying to develop this technology, but no precision-guided munition available today has that capability. Although navigating to the target using a global positioning system-aided preprogrammed flight plan is a well-understood technology, reliably finding the target, and specifically the desired aim point, without the aid of a pilot remains an unfilled DOD requirement. Precision accuracy in smoke, fog, and adverse weather conditions is a critical aspect of this technology that remains to be demonstrated in an operational system. Affordability and reliability are also important issues. The three basic sensor technologies that have been evaluated in laboratory studies are imaging infrared, laser radar, and synthetic aperture radar. All three appear suitable for JASSM, with laser radar and synthetic aperture radar technologies offering better adverse weather performance and easier, less costly mission planning. None of them, however, is mature enough to incorporate into an existing design today. All would require, in the opinion of Wright Laboratory engineers, intensive development in an actual weapon system program like JASSM to become a fully operational system ready for production. Wright Laboratory and JASSM program engineers estimated 2 to 3 years would be needed to develop, integrate, and flight test this technology. With the missile design to be frozen in June 1998, that estimate does not appear to support the JASSM development schedule. According to JASSM program office officials, synthetic aperture radar technology could not be available in time for JASSM, and the availability of laser radar technology is questionable. They said an imaging infrared sensor could be ready and is a likely candidate for JASSM. Based on our analyses of other programs, including TSSAM, developing imaging infrared technology and its associated mission planning elements in 2 years will be difficult. In an autonomous guidance system using an imaging infrared sensor, the system tries to match the sensor-detected image to a computer image of the target obtained earlier. Because the system relies on light intensity variation, the time of day, time of year, and atmospheric conditions are important and sometimes difficult to manage. Extensive laboratory testing by the Air Force has shown that such systems are error prone and unreliable. Also, the key problems of mission planning have not yet been satisfactorily resolved. JASSM is to ultimately be carried by a variety of Air Force and Navy aircraft, including F-16, F-15, and F/A-18 fighters and B-52H, B-1B, and B-2 bombers. Because these aircraft have different structural and electrical systems, JASSM must be designed to be compatible with all of them. For example, the missile can weigh no more than 2,250 pounds based on an F-16 and F/A-18 carriage limitation. The missile can be no longer than 168 inches for it to fit on the B-1B’s internal launcher. It must also be compatible with different electrical circuits and software systems applicable to the various aircraft launch platforms.
Integrating a missile with multiple aircraft is a complex task and has taken other programs years of wind tunnel testing, fit checks, electrical and software analyses, and extensive flight testing. Numerous changes, for example, were made to TSSAM to accommodate the idiosyncrasies of the same aircraft that are planned to carry JASSM. Attempts to integrate TSSAM with these same aircraft occurred over 8 years, yet not one aircraft was certified to carry the missile during that period. Availability of suitable test aircraft and stable electrical and software configurations were among the problems slowing the integration of TSSAM. The JASSM program office identified these same problems as potential risk areas. To speed JASSM’s development, the Air Force has decided to initially integrate the missile only with F-16 and B-52H aircraft. Later, as funds are available, separate programs are to complete integration with the remaining aircraft. While the program manager expects this plan to reduce the complexity of the integration task during JASSM development, we believe it adds technical risk and undisclosed future costs. Technical risk remains because compatibility evaluations are not sufficient to identify all potential integration issues. Also, the costs of integrating JASSM on several other aircraft (i.e., F-15, F/A-18, B-1B, and B-2) are not included in the $675-million development cost estimate. Moreover, as currently planned, postponing these difficult tasks until after the government and contractor development team is dispersed risks losing essential experience and expertise. The Air Force plans to buy 72 JASSMs during development and 75 during low-rate initial production. Of the 72 developmental missiles, 37 are to support the development test program, including initial operational test and evaluation, and 35 are to be pilot production missiles to demonstrate that the contractor can repeatedly produce quality missiles for no more than an average unit price of $700,000 (fiscal year 1995 dollars). If the Air Force revised its JASSM acquisition schedule and used the low-rate initial production missiles for proving the production process and for initial operational test and evaluation, it could reduce the number of developmental missiles and save about $25 million. The Air Force plans to begin manufacturing the 35 pilot production missiles in November 1998, or soon after the start of the 32-month development and testing phase. Conducting pilot production early in the development phase, however, increases schedule risk and may result in missiles that require design and production process changes after production begins. A similar pilot production program was used for the Advanced Cruise Missile program, and none of those missiles was similar enough to the final configuration to be updated and deployed at a reasonable cost. In the case of the Advanced Cruise Missile program, as the flight test program identified design and manufacturing deficiencies, many changes were made to the missile’s guidance set, sensor, actuators, and other subsystems; the program’s schedule slipped; and projected costs increased.
According to DOD Regulation 5000.2, low-rate initial production is the minimum quantity necessary to (1) provide production-configured or representative articles for operational tests, (2) establish an initial production base for the system, and (3) permit an orderly increase in the production rate for the system sufficient to lead to full-rate production upon successful completion of operational testing. The regulation, therefore, contemplates that low-rate initial production missiles can be used for proving the production process and for initial operational test and evaluation. As now planned, however, 9 of the 37 developmental missiles will be used for initial operational test and evaluation and the 75 low-rate initial production missiles will be delivered only after this testing is completed. The 35 pilot production missiles, after proving the production line, will be used for additional testing, if needed, and to establish an early operational capability. If the JASSM acquisition plan is revised to eliminate the 35 pilot production missiles, the Air Force could reduce some of the overlap between development and production, as well as the associated cost and schedule risk. Also, using low-rate initial production missiles would reduce the number of development test missiles required. Each of the early missiles is expected to cost approximately $700,000; eliminating the 35 pilot production missiles would reduce development cost by about $25 million. To ensure that JASSM is affordable, the Air Force established an average unit procurement price goal ranging from $400,000 to $700,000 (fiscal year 1995 dollars). The $400,000 price is the program objective, while the $700,000 price is the threshold beyond which the Air Force would reevaluate continuing the program. We support the Air Force’s objective to acquire an affordable and capable replacement for TSSAM. However, we are concerned that the JASSM price is optimistic and could lead to acquisition problems as the program proceeds. As a critical parameter for the JASSM program, the average unit procurement price is firm and not expected to increase. In fact, the program manager has challenged interested contractors to achieve the $400,000 objective price, if possible. The Air Force believes the price objective is achievable if (1) JASSM is derived from an existing missile in a competitive environment and (2) the Air Force and contractor are able to realize savings by implementing acquisition reforms and using best commercial practices. The program office prepared a cost estimate that supports the $700,000 threshold price, and the Office of the Secretary of Defense is reviewing the estimate. Attaining a price within the $400,000 to $700,000 range will be the focus of the 24-month competition when the contractors are to trade off performance and other requirements to obtain the most cost-effective system possible. A similar process was used in the JDAM program, a guidance and steerable tail kit for general purpose bombs. The JDAM program office expects to reduce the average unit procurement price by at least 50 percent. Price goals were also proposed for other missile programs. For many of them, however, the average production unit price grew as the programs matured. For example, the production unit price for TSSAM increased from an estimated $728,000 to about $2.1 million (then-year dollars). Although TSSAM’s production price grew more than that of other programs, many of those we examined experienced cost growth.
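The $25 million savings estimate cited above is simply the pilot production quantity multiplied by the expected early unit cost (fiscal year 1995 dollars):

```latex
35 \text{ missiles} \times \$700{,}000 \text{ per missile}
  = \$24.5 \text{ million} \approx \$25 \text{ million}
```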
Our comparison of the estimated unit prices for several missiles in DOD’s inventory and development disclosed that JASSM is expected to cost less yet provide significantly greater capability. Several precision-guided munitions in inventory and development cost more than the $700,000 average procurement price established for the JASSM program. Yet, none of these missiles has the automatic target recognition capability required for JASSM, which is expected to contribute significantly to the system’s cost. Missile systems that most closely approximate the capability expected of JASSM, such as the Navy’s Tomahawk and the Standoff Land Attack Missile-Expanded Response (SLAM-ER) missiles, cost significantly more. Others, such as the Air Force’s AGM-130 and AGM-142, do not have the range, accuracy, or carrier flexibility required for JASSM, yet they cost about the same as the JASSM threshold price or more. Also, none of these missiles has the lifetime, full service warranty planned for JASSM. Efforts to contain cost growth on other missile programs have led to acquisition problems such as reduced performance and system capability; postponement of key capabilities until a later production block; reduced procurement quantities and higher unit prices; and initiation of other programs to meet unfilled requirements. In time, cumulative efforts to reduce costs led to contractor, user, and congressional dissatisfaction. Some programs were cut back significantly, while others were terminated. The TSSAM program, for example, was terminated after nearly 8 years and an investment of $4.4 billion because of significant development difficulties and growth in its expected unit cost. Given this extensive history of overrunning initial cost estimates, it is incumbent upon DOD to watch this program closely. Although JASSM is a joint Air Force and Navy program, the Navy has not provided development funding and, until March 1996, did not require carrier operability. Also, integration with the F/A-18 is not planned during the development program. Further, none of the 2,400 missiles planned for procurement are intended for Navy use. These facts bring the Navy’s commitment into question. For JASSM to be carried on, stored within, and launched from an aircraft carrier or other ship, it must meet Navy environmental and supportability requirements. These requirements are significantly more demanding than those for a land-based missile system and must be designed into the missile system. Adding them later, according to the program office, would require a basic redesign of the system and a production block change. Until recently, these characteristics were optional, but, after a March 1996 meeting between DOD, service, and contractor personnel, carrier operability became a firm requirement and is to be designed into JASSM. Integrating JASSM with the F/A-18 is not scheduled during the development program. This integration issue was debated by the Air Force and Navy during the formation of the JASSM acquisition plan. The issues appear to be funding, availability of test aircraft, and increased complexity of the development program. In March 1996, the Chief of Naval Operations committed to providing funds for F/A-18 integration, but this is not expected to occur during JASSM development. To meet the Air Force’s urgent need for JASSM, the program’s development and initial production are scheduled to achieve an initial deployment of the weapon system in late 2001, or about 5 years after the start of the program.
While no other missile in DOD’s inventory provides all the capabilities planned for JASSM, several in inventory or development offer significant capability, particularly for the Navy. Accordingly, the services may have more time, if necessary, to develop and test JASSM without excessive schedule risk. The JASSM Mission Need Statement identifies an urgent need for a new missile, because current air-launched standoff weapons are very limited in number and do not provide the required capability. The Operational Requirements Document states that JASSM should provide the following required capabilities: autonomous guidance, precision accuracy, automatic target recognition, ability to destroy fixed hard and soft targets, carriage by the primary fighter and bomber aircraft, and survivability. According to Air Force officials at the Air Combat Command, JASSM is urgently needed because (1) the Command had expected to have TSSAM in the year 2000 before that program was terminated; (2) until JASSM is deployed, Air Force bombers and fighters will have only a limited number of long-range missiles; and (3) available weapons are unable to destroy enemy command and control operations and integrated air defenses with acceptable attrition rates. Until this need is met, Command officials believe less cost-effective and less-capable alternatives will have to be used, resulting in potentially higher attrition rates for both the weapons and launch platforms. Although no existing weapon has all the characteristics planned for JASSM, several precision-guided munitions in the inventory or development have some of them. For example, the SLAM-ER will provide much of the range planned for JASSM. SLAM-ER does not have an automatic target recognition capability, but it can achieve precision accuracy with pilot assistance. The Tomahawk missile, widely used during Desert Storm, can be launched hundreds of miles from a target to attack a specific building. The Navy has several thousand Tomahawk missiles in its inventory, and an improved version is being developed. Table 1 shows the characteristics and quantities of precision-guided munitions in inventory and development. In addition to developing JASSM, the Air Force and Navy are buying and/or modifying additional precision-guided munitions. For example, the Navy plans to modify 700 SLAMs to the SLAM-ER version, which is planned to have greater standoff range, lethality, and accuracy than SLAM. About 1,000 Harpoon missiles could also be upgraded to SLAM-ER missiles if they are needed. The Air Force has increased its procurement of AGM-142 and AGM-130 missiles and is modifying 200 nuclear Air-Launched Cruise Missiles to the conventional configuration. Although none of these weapons has all the characteristics planned for JASSM, the precision-guided munitions in the inventory and those planned to be added in the next few years provide a strong capability for U.S. forces. Because several of these weapons were not available previously, this capability will be more effective than that used during the successful air campaign of Desert Storm. The difficulties of developing critical technologies and the potential for cost growth, as well as our view that this missile is not urgently needed, are real concerns. To minimize them, we believe that the progress of this program should be managed by the accomplishment of significant events and not just to meet a tight time schedule.
Therefore, we recommend that the Secretary of Defense ensure that (1) required autonomous guidance and automatic target recognition technologies are mature before finalizing the JASSM design, (2) the Air Force does not acquire the 35 pilot production missiles early in development without a demonstrated need for additional test missiles, (3) missiles used during planned initial operational test and evaluation are production-representative missiles, and (4) the Navy participates fully in the program so the final JASSM design meets both Air Force and Navy requirements. In commenting on a draft of this report, DOD agreed with three of the four recommendations. It agreed to ensure that essential technology is mature before finalizing the JASSM design, production-representative missiles are used for initial operational test and evaluation, and the Navy participates fully in the program so that the JASSM design meets the needs of both services. DOD did not agree with our recommendation that the Air Force not acquire the 35 pilot production missiles early in the development phase without a demonstrated need for more test missiles. DOD stated that the Air Force’s plans to use these missiles for maturing the production process, for certain tests, for flight test spares, and for an early deployment option were justified. Although we agree the pilot production missiles can serve all of these purposes, without a sense of urgency for fielding this weapon, we are not convinced that spending about $25 million for these missiles so early in the program is necessary. Early pilot production increases the risk of manufacturing missiles that require significant changes to make them deployable. We further believe DOD would be better served if low-rate initial production missiles were used instead. Low-rate initial production missiles can serve all of the purposes identified by the Air Force for the 35 pilot production missiles, and they can be deployed. The DOD response is included in appendix I. Because the Air Force’s plan to manufacture 35 pilot production missiles early in development increases schedule risk and results in buying developmental missiles that are not needed to support the planned test program, the Congress may wish to consider not providing the estimated $25 million for the 35 pilot production missiles. We reviewed Air Force and Navy requirements documents that are the basis for the JASSM program. We then reviewed the JASSM acquisition plans to determine if the program will fulfill the requirement. We reviewed historical cost data on existing missile systems to determine how planned acquisition costs compared to actual production costs. We identified Air Force and Navy precision-guided munitions that are in production or about to go into production to determine what interim capability will be available until JASSM becomes operational. We discussed precision-guided munition technology with research engineers to find out what capabilities are available now and what is still in development. We interviewed Air Force and Navy personnel concerning requirements and acquisition. We visited or spoke with personnel at the following locations: JASSM Program Office, Eglin Air Force Base, Florida; Air Force Air Combat Command, Langley Air Force Base, Virginia; Naval Air Systems Command, Arlington, Virginia; Air Force Headquarters, Washington, D.C.; Wright Laboratories, Wright-Patterson Air Force Base, Ohio; and Wright Laboratories, Eglin Air Force Base, Florida. 
We conducted our review between January 1995 and March 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Air Force, and the Navy; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact Thomas J. Schulz, Associate Director, Defense Acquisitions Issues, at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report were Raymond Dunham, Matthew R. Mongin, and Gerald W. Wood.
GAO evaluated the Air Force's and Navy's Joint Air-to-Surface Standoff Missile (JASSM) program, focusing on: (1) the acquisition process; (2) schedule and cost risks; (3) the Air Force's plan to acquire 35 pilot production missiles; and (4) the Navy's commitment to the program. GAO found that: (1) the Air Force is using an innovative acquisition process to procure JASSM; (2) the Air Force expects JASSM contractors to modify existing missile designs, use off-the-shelf technology, and apply best commercial practices to their design and production work; (3) some crucial JASSM technologies may not be mature in time for them to be integrated into JASSM; (4) JASSM may be vulnerable to jamming, and the Air Force is trying to identify a cost-effective countermeasure; (5) a JASSM automatic target recognition capability is still under development; (6) the Air Force will phase in integration of JASSM with combat aircraft, undertaking separate programs to integrate the missile with each type of aircraft as funds become available; (7) the Air Force plans to acquire 35 pilot production missiles, but those missiles may not be needed for testing, and may not represent the actual production configuration; (8) the Air Force's unit price goal for JASSM is optimistic when compared to similar missile procurement programs; (9) the Navy has not provided JASSM development funding, but carrier operability is a firm JASSM requirement, and the Navy expects to commit funds for JASSM integration with the F/A-18 aircraft after JASSM development; and (10) the need for JASSM may not be as urgent as the Air Force believes.
Purchasing a home is one of the greatest financial undertakings of most American families. In 1994, about 3.5 million families and individuals bought homes. Another 2.5 million families and individuals refinanced the mortgages on their existing homes. A variety of public- and private-sector institutions are involved in helping borrowers to obtain the mortgage credit they need to purchase homes. These institutions include mortgage insurers, who insure lenders against all or some losses on home mortgages. The primary mortgage insurers are private mortgage insurers (PMI), the Federal Housing Administration (FHA), and the U.S. Department of Veterans Affairs (VA). Mortgage insurance is required primarily for borrowers with limited down payment funds. Of the approximately 6 million mortgages reported through the Home Mortgage Disclosure Act (HMDA) for purchasing homes and refinancing existing mortgages in 1994, most (4 million) were not insured. In fact, 55 percent or more of the mortgages originated annually in 1984 through 1994 were uninsured. Uninsured mortgages reached their peak level in 1992 at 80 percent of all mortgages originated, as shown in figure 1.1. Many home buyers (42 percent) who financed the purchase of a home in 1994 required mortgage insurance. This percentage is substantially higher than the share of mortgages taken out for refinancing in 1994 that were insured (19 percent). Mortgage insurance is generally used when a borrower makes a down payment of less than 20 percent of the value of the home (when the mortgage has a loan-to-value (LTV) ratio greater than 80 percent). If a borrower does not repay an insured mortgage loan as agreed, the lender may acquire the property through foreclosure and file a claim with the mortgage insurer for all or a portion of its total losses (the unpaid mortgage balance and interest, along with the costs of foreclosure and other expenses). If a borrower does not have mortgage insurance and fails to repay a mortgage, the lender may acquire the property through foreclosure and is responsible for the full amount of losses it incurs. While FHA and VA are federal entities, PMIs are privately owned companies regulated at the state level. FHA is a government corporation operated within the U.S. Department of Housing and Urban Development (HUD). However, its primary single-family mortgage insurance program, the Section 203(b) program, is supported by the Mutual Mortgage Insurance Fund, which requires no federal funds to operate. The Mutual Mortgage Insurance Fund is required by law to meet or endeavor to meet statutory capital ratio requirements; that is, it must contain sufficient reserves and funding to cover estimated future losses resulting from the payment of claims on defaulted mortgages and administrative costs. Cash flows into the fund from insurance premiums and from the sale of foreclosed property. The cash reserves in the fund have always been more than enough to cover the expenses incurred. In 1995, the fund had a negative credit subsidy of $309 million. Negative credit subsidies occur when the present value of estimated cash inflows to the government exceeds the present value of estimated cash outflows. However, if the fund were to deplete its reserves, the U.S. Treasury would have to directly cover lenders’ claims and administrative costs. VA’s single-family mortgage guaranty program, in contrast, does require federal funds each year. In 1995, this program received a credit subsidy of $684 million.
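As a rough illustration of the credit subsidy concept defined above, the Python sketch below compares the present values of estimated inflows and outflows. The cash flows and discount rate are hypothetical, chosen only to show how a negative subsidy arises; they are not drawn from FHA's or VA's actual estimates.

```python
# Illustrative only: a credit subsidy is the present value of estimated
# cash outflows (claims, administrative costs) minus the present value
# of estimated cash inflows (premiums, sales of foreclosed property).
# A negative result means the program is expected to need no federal
# funds. All figures below are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (year 1, 2, ...) to today."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

inflows = [120.0, 115.0, 110.0, 105.0]   # $ millions per year, hypothetical
outflows = [60.0, 70.0, 80.0, 90.0]      # $ millions per year, hypothetical
rate = 0.07                              # hypothetical discount rate

subsidy = present_value(outflows, rate) - present_value(inflows, rate)
print(f"Credit subsidy: {subsidy:.1f} ($ millions; negative means self-supporting)")
```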
Many insured single-family mortgages are ultimately sold to investors through the secondary mortgage market. In 1995, virtually all FHA- and VA-insured mortgages were sold to investors with the help of the Government National Mortgage Association (Ginnie Mae), which, like FHA, is also a part of HUD. Ginnie Mae guarantees securities backed by pools of FHA- and VA-insured mortgages. Specifically, Ginnie Mae guarantees that investors in Ginnie Mae securities will receive timely principal and interest payments. Most privately insured mortgages are sold by lenders to the Federal National Mortgage Association (Fannie Mae) or the Federal Home Loan Mortgage Corporation (Freddie Mac), two government-sponsored enterprises (GSEs) that sell securities backed by pools of mortgages to investors and hold other mortgages as investments. Like Ginnie Mae, Fannie Mae and Freddie Mac guarantee investors in their securities that they will receive their expected principal and interest payments. The charters of Fannie Mae and Freddie Mac require that loans they purchase with LTV ratios greater than 80 percent have some form of credit enhancement. Private mortgage insurance is the most common enhancement used for these high-LTV loans. For an insured loan they have purchased, Fannie Mae and Freddie Mac assume the responsibility for foreclosure losses, if any, that a lender incurs above the amount of the claim paid by the PMI. Lenders use guidelines provided by the PMIs, FHA, and VA to determine whether a borrower is eligible for mortgage insurance through any of the insurers. These guidelines include maximum allowable LTV and qualifying ratios. Although some of the guidelines pertaining to FHA and VA mortgage insurance are set by the agencies, many are set by the Congress through legislation. Similarly, although some of the requirements for private mortgage insurance originate with the PMIs, others are set by Fannie Mae and Freddie Mac. When determining if a mortgage is eligible for private mortgage insurance, the requirements set by Fannie Mae and Freddie Mac are considered because many lenders want to sell insured mortgages through the secondary market. The requirements set by Fannie Mae and Freddie Mac include underwriting standards, insurance coverage requirements, and a maximum loan amount. In 1994, PMIs joined FHA and VA in offering insurance for mortgages with LTV ratios greater than 95 percent by creating special “affordable” programs. The affordable programs are for loans with LTV ratios of up to 97 percent. Generally, these affordable programs have more flexible underwriting than the standard programs. Some PMIs also offer products through very specialized programs, sometimes administered in conjunction with another party, such as a state housing finance agency (HFA), with even more flexible terms than those available through the companies’ affordable programs. Between 1984 and 1994 (the latest year for which data are available), FHA’s share of all loans insured each year both for purchasing homes and refinancing existing mortgages fluctuated between a low of 18 percent in 1984 and a high of 51 percent in 1987. PMIs’ share during the same period fluctuated between a low of 29 percent in 1987 and a high of 69 percent in 1984. VA’s share during this period stayed between 13 and 20 percent. The relative market shares of the mortgage insurers are shown in figure 1.2. Between 1986 and 1990, FHA was the largest insurer.
The factors contributing to FHA’s large market share during these years may include an increase in FHA’s maximum loan limit in 1988 and economic downturns in some areas of the country. Except for FHA’s loan limit, the terms, such as maximum LTV ratio, under which FHA and VA mortgage insurance is available do not generally vary across geographic locations, according to program guidelines. However, PMI companies may change the conditions under which they will provide new insurance in a particular geographic area to reflect the increased risk of losses in an area experiencing economic hardship. By tightening the terms of the insurance they would provide, PMIs may have decreased their share of the market in economically stressed regions of the country. From 1990 through 1992, the share of the market insured by FHA fell. This decrease may stem, in part, from increased premiums for FHA insurance implemented under the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508). In 1992, the insurance premium for FHA mortgages decreased, which may have contributed to the rise in FHA’s market share from 1992 through 1994. Throughout the period from 1991 through 1994, the PMIs had a greater share of all insured single-family mortgage originations than FHA or VA. In 1994, the PMIs’ share of all insured single-family mortgage originations was 48 percent, FHA’s was 35 percent, and VA’s was 17 percent. The federal government uses a variety of tools to promote homeownership, in addition to providing mortgage insurance through FHA and VA. As mentioned above, the secondary market institutions (Ginnie Mae, Fannie Mae, and Freddie Mac) help make capital available for mortgage lending. The Federal Home Loan Bank (FHLB) system also helps provide liquidity for lenders. The federal government insures deposits of, and the FHLB system provides advances to, the thrifts and savings institutions that are members of the system. Federal tax incentives, such as the home mortgage interest deduction that is available to homeowners of all income levels, also are designed to encourage homeownership. Federal programs and requirements that are designed to make homeownership affordable, particularly among households who may be underserved by the private market, are discussed in chapter 4. To obtain more information about the role of FHA’s single-family program in today’s housing finance system, the Chairman of the Subcommittee on Housing and Community Opportunity, House Committee on Banking and Financial Services, asked us to provide information on (1) the terms of products available through FHA’s Section 203(b) program in comparison with the terms of products available through the programs of the PMIs and VA; (2) FHA’s share of the home purchase mortgage market, the characteristics of home buyers using FHA mortgage insurance in comparison with other home buyers, and the portion of FHA borrowers who met certain qualifying ratios for private mortgage insurance; and (3) other federal programs and activities, besides FHA and VA, that promote affordable homeownership. To compare the terms available through FHA’s Section 203(b) program with the terms offered by PMIs and VA, we reviewed FHA’s and VA’s program regulations, FHA’s mortgagee letters, and the underwriting guidelines and marketing materials published by the PMIs.
Although the private mortgage insurance industry is composed of eight companies, we limited our review to the guidelines of the six companies that accounted for 97 percent of all new mortgages that were privately insured in 1994. We also collected information on the insurance offered by PMIs through interviews with officials from four PMIs and from the Mortgage Insurance Companies of America (MICA). Because we wanted to compare the terms of private mortgage insurance that are most similar to the terms of FHA mortgage insurance, we focused on the affordable programs administered by the PMIs that may be used for loans with LTV ratios as high as 97 percent. We restricted our review of the terms of FHA single-family mortgage insurance to the Section 203(b) program because it is FHA’s primary single-family mortgage insurance program. We computed FHA’s share of the home purchase mortgage market and compared the home buyers using FHA insurance to other home buyers by using the following data sources: (1) data reported through the HMDA on the volume of mortgages made by lenders in 1994 and the income, race, and location of borrowers; (2) data reported by MICA on the volume of mortgages insured by PMIs in 1994 and the income, race, and location of borrowers; (3) data from the 1993 American Housing Survey (AHS) conducted by the U.S. Department of Commerce on the age of home buyers; (4) data from HUD on the LTV ratio of the loans insured by FHA in 1994; (5) data from the Monthly Interest Rate Survey conducted by the Federal Housing Finance Board on the LTV ratios of conventional loans made in 1994; (6) data from VA on the LTV ratios of loans insured by VA in 1994; and (7) data from the Mortgage Bankers’ Association (MBA) on first-time home buyers who took out mortgages in 1994. We used the data from the American Housing Survey, FHA, the Federal Housing Finance Board, VA, and the MBA because the HMDA and MICA data do not include data on the age of borrowers, LTV ratio, and first-time home buyers. We used 1993 and 1994 data because these were the most recent years for which data comparing FHA-insured loans with other types of loans were available. We did not attempt to examine trends over time in borrowers’ characteristics because our objective was limited to describing how FHA’s current clientele compares with other home buyers. We also limited our examination of borrowers’ characteristics to home purchase mortgages, excluding mortgages taken out to refinance existing mortgages. The data we used that were collected through HMDA do not include every mortgage made in 1994 for several reasons. First, not all lenders are required to report under HMDA. A depository institution is required to report if it has an office in a metropolitan area and its assets are at least $10 million. A mortgage company is required to report if it processes 100 or more mortgage applications. Second, the HMDA data we used do not include any mortgages made by lenders who do not lend in a Metropolitan Statistical Area (MSA). Third, the data do not include mortgages for non-owner-occupied homes. According to a Federal Reserve official, the HMDA data for 1994 include 77 percent of all home purchase mortgage loans. Single-family mortgages insured through FHA’s Section 203(b) program are not distinguished from mortgages insured through FHA’s other smaller single-family mortgage insurance programs in the HMDA data. 
However, 60 percent of all single-family loans insured through FHA in 1995 were made through FHA’s Section 203(b) program. Information on how FHA’s 203(b) mortgages compare to mortgages insured through the other FHA programs is presented in chapter 4. The data from MICA that we used pertain to nearly all of the mortgages insured by PMIs in 1994 through both standard and affordable programs. Although the MICA data pertain only to privately insured loans, the HMDA data consist of FHA-insured loans and all others. To compare FHA-insured mortgages to those insured by PMIs and to uninsured mortgages, in some cases we subtracted the number of loans reported to MICA from the non-government-insured loans in the HMDA data to obtain information about those loans that were uninsured. To take into account underreporting in the HMDA data, we reduced the MICA data by 23 percent when comparing the relative shares of the market for privately insured, FHA-insured, and uninsured loans. We also used AHS data, which were derived from a 1993 survey of approximately 65,000 households. The data we used are from the subset of these households that acquired a mortgage in 1993. We also reviewed existing studies conducted by officials at HUD, the Federal Reserve System, and the MBA. We looked at borrowers’ income, race, and first-time home buyer status because these characteristics have been highlighted by studies showing that certain types of borrowers, such as low-income borrowers, have difficulty obtaining mortgage credit. In addition, we looked at LTV ratios to compare the types of loans insured by FHA and by the PMIs, given differences in maximum allowable LTV ratios and the associated risk. We also looked at the state in which the loan was insured to identify geographic differences in the use of mortgage insurance. We could not compare the claim rates or loss rates of FHA-insured loans with other loans because data for privately insured and uninsured loans were unavailable. We determined how many of the loans that FHA insured would have met PMIs’ requirements through analysis of FHA home buyers’ characteristics and PMIs’ guidelines. Unlike the analyses described above for which 1994 and 1993 data were the most recent available, we were able to use 1995 data for this analysis because it only required information about FHA-insured mortgages. We asked FHA program staff to use their automated data to determine the percentage of 1995 FHA home buyers who reported on their loan application qualifying ratios and LTV ratios below the maximum levels generally allowed by PMIs (mortgages with LTV ratios no greater than 97 percent, total debt-to-income ratios no greater than 38 percent, and housing-expense-to-income ratios no greater than 33 percent). We obtained most of our information on other ways in which the federal government helps to provide affordable homeownership opportunities by reviewing published information and by discussing program features with program officials at the various departments and agencies involved. We did not verify the accuracy or the completeness of the data provided by program officials. We also reviewed program regulations and budget submission information. We provided a draft of this report to HUD, VA, Fannie Mae, Freddie Mac, and the MICA for their review and comment. We also provided excerpts from the draft report that pertained to their homeownership activities to the National Council of State Housing Agencies, the FHFB, the Neighborhood Reinvestment Corporation, and the U.S. 
Department of Agriculture’s Rural Housing Service. All nine agencies provided comments. We incorporated these comments, as appropriate, throughout the report. Some specific comments from HUD, MICA, and the FHFB are discussed at the end of chapters 3 and 4. Comments from MICA, the National Council of State Housing Agencies, the FHFB, and the Neighborhood Reinvestment Corporation are reproduced in appendixes III-VI. We performed our work from March 1995 through July 1996 in accordance with generally accepted government auditing standards. The single-family mortgage insurance programs of FHA, PMIs, and VA protect private lenders against all or some of the losses that might result from foreclosure. However, the products offered by these organizations differ in terms of the (1) maximum mortgage amounts and LTV ratios allowed; (2) underwriting standards for borrowers, such as the income-to-expense qualifying ratio requirement; (3) funds required at loan closing for such items as down payment and closing costs; and (4) dollar amount or percent of loss that each organization will pay lenders to cover the losses associated with foreclosed loans. While FHA’s maximum loan amount, effective January 1, 1996, is $155,250, PMIs can insure and VA can guarantee loans that exceed this amount. On the other hand, PMIs can insure loans with a maximum LTV ratio of 97 percent, while both FHA and VA can insure/guarantee loans with ratios exceeding 100 percent. In addition, both FHA and VA generally require borrowers to pay less cash at loan closing than PMIs require. Finally, FHA provides lenders with essentially 100 percent protection against losses from foreclosed loans, while both PMIs and VA protect against a portion of the losses. While this chapter does not discuss all of the differences that exist in the terms offered by the three organizations, the terms described were cited as the most significant by industry and FHA officials we interviewed. FHA offers a number of specialized insurance programs; however, because almost 60 percent of its home purchase loans in 1995 were made under the Section 203(b) program, our analysis of FHA addresses only this program. Similarly, PMIs insure loans under a variety of different programs. In general, PMIs insure mortgages using what they refer to as either standard or affordable housing loans. Affordable insurance programs offer more flexible underwriting guidelines than the PMIs’ standard programs in several areas, such as LTV ratios, qualifying ratios, and reserve requirements. PMIs offer two types of affordable loans: (1) loans with a maximum LTV ratio of 95 percent and (2) loans with a maximum LTV ratio of 97 percent. The underwriting guidelines for the 97-percent LTV loans are more restrictive than those for the 95-percent affordable program in areas such as credit ratios. To compare the PMI programs that were most like FHA’s Section 203(b) program and VA’s loan guaranty program, we used the published underwriting guidelines for the PMI affordable loans and limited our review to the insurance terms for these loans. While both FHA’s and PMIs’ programs can provide insurance to any borrowers who meet the programs’ underwriting guidelines, VA’s loan guaranty program is limited to qualified veterans and their survivors.
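The headline differences just summarized can be made concrete with a simplified screening sketch in Python. It reflects only the loan-limit, LTV, and veteran-eligibility distinctions named above, using the January 1996 high-cost FHA limit; it ignores the underwriting detail discussed in the rest of this chapter, so it is illustrative rather than a real qualification test.

```python
# Simplified screen reflecting only the headline differences described
# above: FHA's legislated loan limit ($155,250 in the highest-cost areas
# as of January 1, 1996), the PMIs' 97-percent maximum LTV ratio, and
# VA's restriction to qualified veterans and their survivors. FHA and VA
# effective LTV ratios can exceed 100 percent, so no LTV cap is applied
# to them here. Underwriting guidelines are ignored entirely.

FHA_HIGH_COST_LIMIT = 155_250  # effective January 1, 1996
PMI_MAX_LTV = 0.97

def candidate_insurers(loan_amount, ltv, is_veteran):
    candidates = []
    if loan_amount <= FHA_HIGH_COST_LIMIT:
        candidates.append("FHA 203(b)")
    if ltv <= PMI_MAX_LTV:
        candidates.append("PMI affordable program")
    if is_veteran:
        candidates.append("VA guaranty")
    return candidates

# A high-LTV loan within the FHA limit, for a nonveteran:
print(candidate_insurers(loan_amount=99_883, ltv=0.999, is_veteran=False))
# ['FHA 203(b)'] -- the LTV ratio is too high for private insurance
```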
When deciding whether to finance a home with a loan insured by FHA or a PMI, or guaranteed by VA, the prospective home buyer must consider both the amount of money needed to purchase the home and the amount of the down payment or other cash needs required by the lender. The three organizations differ in the maximum loan amounts and cash requirements for closing a loan. While FHA is legislatively constrained by the dollar amount of loans it can insure, PMIs and VA are not. FHA’s maximum loan amount for a single family home is legislatively set at the lesser of 95 percent of the median house price in the area or 75 percent of the conforming loan limit for Freddie Mac. As of January 1, 1996, FHA’s maximum loan amount in the highest-cost areas was $155,250. Many areas are not at the highest cost; these areas’ maximum loan amounts range from $78,660 up to the high-cost limit. While PMIs can insure loans of any size, they have established loan limits for loans insured under their affordable housing programs. This limit differs depending on the company. For loans with a 97 percent LTV ratio, four of the PMIs specified $203,150 as their maximum loan limit when this was the conforming loan limit; one stated that the limit was $250,000; and the remaining PMI did not specify a loan limit. VA places no limit on the maximum loan that may be guaranteed, except that the mortgage may not exceed the home’s appraised value plus the VA funding fee, if it is financed. As a rule, however, lenders generally limit VA loans to four times the VA guaranty amount. Since the maximum VA guaranty is currently legislatively set at $50,750, VA loans will rarely exceed $203,000. FHA, PMIs, and VA differ in terms of their maximum allowable LTV ratios and how they calculate this ratio. FHA and VA allow higher LTV ratios than PMIs. The LTV ratio represents the ratio of the unpaid principal balance of the loan to the lesser of the appraised value or the sales price of the property. LTV ratios are important because of the direct relationship that exists between the amount of equity a borrower has in his/her home and the likelihood or risk of default. The higher the LTV ratio, the less cash a borrower is required to pay out of his/her own funds. However, the higher the LTV ratio, the less cash the borrower will have invested in the home and the more likely it is that he/she may default on the mortgage obligations, especially during times of economic hardship. Thus, while FHA and VA’s higher LTV ratios allow a home buyer to purchase a higher-priced home with less money, these loans have a greater risk of defaulting than PMI loans. The Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508), enacted in November 1990, established LTV limits for FHA loans of 98.75 percent if the home value is $50,000 or less, or 97.75 percent if the home value is in excess of $50,000. However, because FHA allows financing of the up-front insurance premium, borrowers can in effect receive loans with LTV ratios that exceed 100 percent. The method of determining the maximum FHA mortgage amount requires two steps, as shown in table 2.1. The example assumes a home with a purchase price of $100,000 and closing costs of $2,300. We also assume that the purchase price is equal to or less than the appraised value of the property. In the above example, the lesser of the two amounts, $97,685, becomes the maximum mortgage allowed. 
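Table 2.1’s two-step method can be expressed compactly in code. One caveat: the percentages in the second step (97 percent of the first $25,000 of acquisition cost plus 95 percent of the remainder) are reconstructed to match the example’s figures and should be read as an illustration of the method for homes over $50,000, not as the full regulatory text.

```python
# Sketch of the two-step maximum-mortgage computation in table 2.1 for a
# home valued over $50,000. The step-2 percentages are inferred from the
# example ($100,000 price, $2,300 closing costs, $97,685 result) and are
# illustrative, not a substitute for the program rules.

def fha_maximum_mortgage(price, closing_costs, appraised_value):
    value = min(price, appraised_value)
    # Step 1: the statutory LTV limit -- 97.75 percent of value for homes
    # over $50,000 (98.75 percent applies at or below $50,000).
    step1 = 0.9775 * value
    # Step 2: 97 percent of the first $25,000 of acquisition cost (price
    # plus closing costs), plus 95 percent of the remainder.
    acquisition = price + closing_costs
    step2 = 0.97 * min(acquisition, 25_000) + 0.95 * max(acquisition - 25_000, 0)
    # The lesser of the two amounts is the maximum mortgage allowed.
    return min(step1, step2)

maximum = fha_maximum_mortgage(price=100_000, closing_costs=2_300,
                               appraised_value=100_000)
print(f"Maximum mortgage: ${maximum:,.0f}")  # $97,685
```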
However, this is not the final mortgage amount if the borrower also finances the up-front, 2.25-percent insurance premium in the mortgage ($2,198 in the example above). The total mortgage in this case is $99,883 ($97,685 plus $2,198), or an LTV ratio of 99.9 percent of the purchase price. PMIs, on the other hand, establish maximum LTV ratios for loans they insure, which means that any cost above this amount must be paid at closing. Essentially, all six of the PMIs allow LTV ratios up to a maximum of 97 percent. In general, loans having a maximum LTV of 97 percent fall under the PMIs’ affordable housing loan programs. The PMIs began to insure loans with a 97-percent LTV ratio in 1994. Loans with a 95-percent LTV ratio can be made under the affordable loan guidelines also, as well as under the companies’ regular loan programs. For VA loans, the maximum LTV ratio is 100 percent of the lesser of the amounts shown on the “certificate of reasonable value” issued by VA, or the “notification of value” issued by the lender if processed under VA’s Lender Appraisal Processing Program, or the selling price, plus the VA funding fee. The certificate or notification is issued in response to an appraisal request for the determination of reasonable value from a veteran, lender, builder, or owner. The certificate or notification is used to notify the requester of the maximum amount VA will guarantee. Table 2.2 illustrates how each entity calculates its LTV ratio and summarizes the LTV calculations for each of the three entities. The example assumes a $100,000 purchase price (appraisal value) and a 30-year fixed-rate loan at 7.5 percent interest. As shown in table 2.2, the VA loan has an LTV ratio of 102 percent, the FHA loan has an LTV ratio of 99.9 percent, and the two PMI loans have LTV ratios of 95 percent and 97 percent—the lowest of the LTV ratios. These results reflect differences between the three organizations in their maximum allowable LTV ratios as well as their requirements for down payment and the financing of closing costs and insurance premiums. When underwriting mortgage loans, FHA, PMIs, and VA all require that lenders examine a borrower’s ability and willingness to repay the mortgage debt by examining the borrower’s qualifying ratios and credit history. Differences exist in the qualifying ratios allowed among FHA, PMIs, and VA. However, differences in the written requirements for credit history examination among insurers were minimal. Both FHA and PMIs use two qualifying ratios to determine whether a borrower will be able to meet the expenses involved in homeownership. The “housing-expense-to-income ratio” examines a borrower’s expected monthly housing expenses as a percentage of his or her monthly income; and the “total-debt-to-income ratio” looks at a borrower’s expected monthly housing expenses plus long-term debt as a percentage of his or her monthly income. VA’s underwriting standards are different in that they use the total-debt-to-income ratio in combination with an estimate of adequate monthly “residual income” when determining borrowers’ qualifications for a home loan. VA defines residual income as gross monthly income less federal taxes and other monthly expenses. In qualifying a borrower, VA’s underwriting guidelines establish a maximum total-debt-payment-to-income ratio and a minimum monthly residual income requirement. The monthly debt-payment-to-income ratio for VA borrowers is set at 41 percent. 
To qualify a borrower under VA’s residual income method, housing (including mortgage payments) and other monthly payments are subtracted from the borrower’s net take-home pay. Net take-home pay is gross income less federal income taxes. The remaining value is the residual monthly income for family support. VA provides a table of residual monthly incomes by region based on the Department of Labor’s consumer expenditure surveys. VA provides the residual income tables as a guide to qualify borrowers; however, VA states that these figures should not automatically trigger approval or rejection of a loan. Table 2.3 summarizes the housing-expense-to-income ratios and total debt-to-income ratios acceptable to the three organizations. Each of the three organizations gives examples of compensating factors, which may allow the borrower to exceed the maximum qualifying ratios or, in the case of VA, the residual income figures. Examples of the compensating factors provided in the various underwriting guidelines include a large down payment; the demonstrated ability of the borrower to devote a greater portion of income to housing expense; substantial cash reserves; a net worth substantial enough to evidence an ability to repay the mortgage regardless of income; evidence of an acceptable credit history or limited credit use; less-than-maximum mortgage terms; funds provided by a health, welfare, or community service organization for unusual services, house repairs, etc.; and a decrease in monthly housing expenses. In addition to the use of qualifying ratios to determine a borrower’s ability to repay the mortgage debt, FHA, PMIs, and VA also require that a borrower’s credit history be evaluated to determine his or her willingness to handle financial obligations in a timely manner. For these organizations, past credit performance serves as the most useful guide in determining a borrower’s attitude toward credit. A borrower who has made payments on previous or current obligations in a timely manner represents reduced risk. Conversely, if the credit history, despite adequate income to support obligations, reflects continuous slow payments and delinquent accounts, the organizations require that strong offsetting factors exist for the loan to be approved. The guidelines of FHA, PMIs, and VA are very similar in their approach and requirements for determining satisfactory credit. For all three, it is the overall pattern of credit behavior that must be examined rather than isolated occurrences of unsatisfactory or slow payments. A period of financial difficulty in the past does not necessarily make the risk unacceptable if a good payment record has been maintained since. For any derogatory items found, the PMI or lender must determine whether the late payments were due to a disregard for, or an inability to manage, financial obligations or to factors beyond the borrower’s control. All three organizations allow a good deal of judgment and interpretation on the part of the underwriter in determining the creditworthiness of the prospective borrower. The use of information from national credit reporting agencies is required by all three organizations. However, they also allow lenders to use alternative methods of establishing credit histories for borrowers who do not have the type of credit history that would appear on a credit report. Other types of information that can be used include histories on the payment of utilities and rent.
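Unlike the judgment-based credit review, the quantitative side of VA’s underwriting reduces to a short sketch. In the Python below, the 41-percent ratio comes from the text; the $1,000 residual threshold is a hypothetical stand-in for VA’s regional residual-income tables, which vary by region and family size, and a shortfall flags the file for the underwriter rather than rejecting it outright.

```python
# Sketch of VA's two-part quantitative test as described above: a maximum
# total-debt-payment-to-income ratio of 41 percent plus a residual-income
# check. The residual_threshold is a hypothetical stand-in for VA's
# regional tables; per VA guidance, the result is a guide, not an
# automatic approval or rejection.

def va_quantitative_screen(gross_monthly_income, federal_taxes,
                           housing_payment, other_monthly_payments,
                           residual_threshold=1_000):
    total_debt_ratio = (housing_payment + other_monthly_payments) / gross_monthly_income
    net_take_home = gross_monthly_income - federal_taxes
    residual = net_take_home - housing_payment - other_monthly_payments
    return {
        "total_debt_ratio_within_41_percent": total_debt_ratio <= 0.41,
        "residual_income": residual,
        "residual_meets_guide": residual >= residual_threshold,
    }

print(va_quantitative_screen(gross_monthly_income=4_000, federal_taxes=600,
                             housing_payment=1_000, other_monthly_payments=400))
```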
Given the similarity between the three entities’ credit standards and the fact that the standards are applied using judgment and interpretation, it was not possible, when comparing stated credit requirements, to determine which, if any, of these entities’ requirements are more or less stringent than the others’. Such a determination would require a number of individual case studies to determine how specific borrowers would be judged when applying for a loan insured by a PMI versus FHA or VA. All three organizations require prospective home buyers to pay certain costs at the time of loan closing. Funds required to close a loan include down payment, closing costs, and premium/fee charges. In addition, five of the six PMIs require the home buyer to have cash reserves of 1 or 2 months’ principal-interest-taxes-insurance after loan closing if it is a 97-percent LTV ratio loan. These differences are important because the amount of cash needed by the borrower at loan closing, to meet either closing costs or reserve requirements, represents a major barrier to homeownership for lower-income and first-time home buyers. Table 2.4 shows the money a borrower will need at closing, including reserves, if needed, to purchase a $100,000 home assuming $2,300 in closing costs and a minimum down payment. As can be seen, VA requires the least amount of cash at closing ($2,300) while a 95-percent LTV PMI loan requires the most ($7,362). As stated earlier, the down payment needed for an FHA loan depends on the calculation of the maximum mortgage amount, which in our example discussed previously is $97,685. FHA allows the entire down payment to be a gift. In addition, up to 95 percent of the closing costs on an FHA loan can be financed through the mortgage. Under their affordable programs, all of the PMIs require a minimum down payment of 3 percent from a borrower’s own funds. Additional funds to be used for a larger down payment or for closing costs can come from a variety of sources, such as gifts or grants from family members, nonprofit organizations or public agencies; unsecured loans; or secured loans. VA does not require a down payment, and all of the closing costs must be paid in cash at closing. However, the borrower can include the VA funding fee in the mortgage. FHA, PMIs, and VA all charge the borrower an insurance premium, or guaranty fee, to cover potential losses on mortgage loans that go into foreclosure. These organizations differ in the amount of premiums they charge to borrowers, the type of premium plans they offer, and whether or not these costs can be financed in the mortgage. FHA charges both a single up-front premium and, for all mortgages with terms over 15 years, an annual premium. The up-front premium is equal to 2.25 percent of the maximum mortgage allowed. It also can be financed as part of the mortgage and is partially refunded if the loan is paid in full during the first 7 years. The annual premium is equal to 0.5 percent of the outstanding mortgage balance and is charged for a time period that depends on the LTV ratio of the loan. If the LTV calculation is less than 90 percent of the property’s assessed value, the annual premium is charged for 11 years. If the LTV ratio is 90 percent or more, the premium is charged for a full 30 years or the full length of the loan, whichever is less. FHA’s premium schedule was established by the Omnibus Budget Reconciliation Act of 1990. PMIs charge different premiums to individual borrowers on the basis of the risk posed by each borrower.
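Before turning to the PMIs’ risk-based pricing, note that the FHA premium schedule just described reduces to a small helper. This is a sketch of the schedule as stated in the text, not of the full regulation.

```python
# FHA premium schedule as described above: a one-time up-front premium of
# 2.25 percent of the maximum mortgage (financeable in the loan), plus,
# for mortgages with terms over 15 years, an annual premium of 0.5
# percent of the outstanding balance. The annual premium runs 11 years
# when the LTV ratio is under 90 percent; otherwise it runs 30 years or
# the loan term, whichever is less.

def fha_upfront_premium(max_mortgage):
    return 0.0225 * max_mortgage

def fha_annual_premium(outstanding_balance):
    return 0.005 * outstanding_balance

def fha_annual_premium_years(ltv, loan_term_years):
    return 11 if ltv < 0.90 else min(30, loan_term_years)

print(f"Up-front premium on $97,685: ${fha_upfront_premium(97_685):,.0f}")   # $2,198
print(f"Years of annual premium (99.9% LTV, 30-year loan): "
      f"{fha_annual_premium_years(ltv=0.999, loan_term_years=30)}")          # 30
```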
PMI premium rates differ on the basis of factors such as the type of mortgage instrument the borrower selects (e.g., fixed rate or adjustable rate), the purpose of the loan (e.g., home purchase or refinance), the LTV ratio, the length of the loan (30/25/15 years), and the amount of coverage that is required. PMIs also offer borrowers several different ways to pay premiums. However, PMI company representatives we interviewed stated that most borrowers choose to pay premiums under a monthly premium program because it allows them to pay less cash at closing. Under a monthly premium plan, only 1 month of the mortgage insurance premium is due at closing rather than a year’s worth or more, as in other PMI plans. Since the PMI insurance premium is not paid up-front, there is no refund due if the insurance is canceled. On the other hand, FHA and VA borrowers pay an up-front premium. While the FHA mortgage insurance and VA loan guaranty remain in effect over the life of the mortgage, PMI mortgage insurance can be canceled if the unpaid principal balance of the mortgage has been paid down to either 80 percent of the original value of the property or 80 percent of the current appraised value of the property. The amount of the funding fee charged by VA at loan origination depends on whether the veteran is a first-time or repeat borrower, the amount of the down payment, and whether or not the borrower is a reservist. Currently, for first-time use, the fee for loans with less than 5 percent down is 2 percent; with at least 5 percent down, 1.5 percent; and with at least 10 percent down, 1.25 percent. For eligible reservists, 0.75 percent is added to the above amounts. Repeat borrowers must pay a 3-percent fee for loans with less than 5 percent down. Borrowers must pay the entire funding fee at loan closing unless they finance it as part of the mortgage. Reserves represent the amount of monthly principal-interest-taxes-insurance that a borrower must have accumulated in savings at the time of loan closing. For loans with a 97-percent LTV ratio, the amount of reserves required by the PMIs differed, depending on the company. One PMI required 2 months’ reserves, four required 1 month’s, and one required none. In addition, two of the PMI companies that required 1 month’s reserves stated that these reserves could be waived under certain circumstances, such as if the property had a satisfactory mechanical and structural inspection and/or a homeowner’s warranty. For loans with a 95-percent LTV ratio, PMIs generally do not require any reserves. FHA does not require a reserve, and VA’s guidelines make no mention of reserves. FHA, PMIs, and VA differ in the amount of insurance or guaranty they provide to protect lenders against the losses associated with loans that go to foreclosure. Losses generally include the unpaid principal balance and delinquent interest due on the loan, legal expenses incurred during foreclosure, the expense of maintaining the home, and any advances the lender made to pay taxes or insurance. While FHA essentially protects against almost 100 percent of the losses associated with a foreclosed loan, PMIs and VA protect only against a portion of the loss. For PMIs, the type and amount of coverage selected by the lender determine how much the private mortgage insurer will pay if the borrower defaults and the lender must foreclose. Typically, this amount is limited to between 20 percent and 30 percent of the losses but can go as high as 35 percent.
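Returning briefly to the VA funding fee described above, the schedule the text gives reduces to a small lookup. The sketch covers only the cases stated there; the fee for a repeat borrower with 5 percent or more down is not given in the text, so that case is left unhandled rather than guessed at.

```python
# VA funding-fee schedule, limited to the cases stated above. Down
# payment is a fraction of the purchase price; eligible reservists add
# 0.75 percentage points. The source does not give a fee for repeat
# borrowers with 5 percent or more down, so that case raises an error.

def va_funding_fee_rate(down_payment, first_time, reservist=False):
    if first_time:
        if down_payment < 0.05:
            rate = 0.02
        elif down_payment < 0.10:
            rate = 0.015
        else:
            rate = 0.0125
    elif down_payment < 0.05:
        rate = 0.03
    else:
        raise ValueError("fee not stated in the source for this case")
    return rate + (0.0075 if reservist else 0.0)

print(f"{va_funding_fee_rate(down_payment=0.0, first_time=True):.2%}")  # 2.00%
```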
The amount that the VA guarantees against loss depends on the original loan amount, is set by law, and has been periodically increased by the Congress. Currently, the VA guaranty is as follows: (1) for loans up to $45,000, the VA will guarantee 50 percent of the loan; (2) for loans greater than $45,000, but not more than $56,250, the guaranty will not exceed $22,500; (3) for loans of more than $56,250 and not more than $144,000, the guaranty will be the lesser of 40 percent of the loan or $36,000; and (4) for loans of more than $144,000, the guaranty is the lesser of 25 percent of the loan or $50,750. Table 2.5 illustrates the amount that FHA, PMIs, and VA would have to pay for a loan that is foreclosed on at the end of the fourth year of a mortgage. As shown in the table, FHA suffers a loss of $44,500 on the foreclosed loan as opposed to $36,000 for VA and $30,449 to $31,068 for PMIs. The differences in losses are primarily related to the limits on insurance or guaranty coverage provided by each organization. Those losses not incurred by the mortgage insurers become the responsibility of the lenders. FHA’s additional losses of about $9,000 to $14,000 reflect the additional risk exposure that FHA assumes from insuring the lender for close to 100 percent of the loss. PMIs have recently begun to offer a number of specialized or affordable housing programs with more flexible underwriting guidelines and higher LTV ratios than their standard mortgage insurance programs. However, the terms offered by FHA and VA still differ in important ways from those offered by PMIs. These differences affect the size of mortgage loans that borrowers can obtain, the amount of cash needed by borrowers at loan closing, and the exposure to risks assumed by these organizations. PMIs can insure and VA can guarantee mortgage loans that exceed those that can be insured by FHA. More importantly, however, FHA’s and VA’s underwriting standards reduce the amount of cash needed to purchase a home to a greater extent than the PMIs’ new affordable standards. While the additional funds needed to purchase a PMI-insured home would not eliminate the borrower’s ability to purchase a home at some point in time, they can delay the purchase date substantially or require the borrower to purchase a less costly home. Primarily because PMIs and VA both limit their losses to a portion of the loss and FHA does not, losses on FHA-insured homes that enter foreclosure are greater than the losses experienced by VA and PMIs on similar homes, all other things being equal. The additional loss for FHA reflects the additional risk exposure that it assumes from insuring the lender for close to 100 percent of the loss. It should be stressed, however, that under FHA’s Section 203(b) program, FHA borrowers’ premiums pay for these losses, not the U.S. Treasury. FHA is a major participant in the housing market. It insured nearly 15 percent of all reported home purchase loans made in 1994 and 35 percent of the insured loans, and it plays a larger role in some specific market segments, particularly among low-income borrowers, first-time home buyers, and minorities. On the basis of the loan-to-value and qualifying ratios of the FHA loans made in 1995, most of the FHA-insured loans would probably not have been insured by private mortgage insurance companies. However, most home purchase loans made during 1994 were not insured, including many of the loans to low-income and minority borrowers.
More than twice as many uninsured loans were made to low-income and minority borrowers as were made by FHA. This chapter compares the statistics for FHA, the PMIs, VA, and uninsured mortgages in the housing market. It provides information on FHA’s share of loans made to low-income borrowers, first-time home buyers, and minority borrowers, as reported in the Home Mortgage Disclosure Act (HMDA) data. To the extent that data are available, a comparison of the characteristics of insured borrowers (FHA, PMIs, and VA) to uninsured borrowers in the housing market is also presented. Besides income, race, and first-time home buyer status, the characteristics discussed include the borrowers’ age, the location of the home, and the LTV ratio of the loan. This discussion covers all single-family home purchase loans originated in 1994. FHA insured 14.7 percent (519,102) of all home purchase loans made in 1994 and included in the HMDA data. HMDA recorded 6.1 million loans made in 1994, including about 4.1 million loans that were not insured. A large portion of the 6.1 million loans (42 percent, or 2.5 million) were refinancings; they are not included in the analysis in this chapter because we wanted to focus on the characteristics of home purchase mortgage borrowers. Figure 3.1 shows the type of insurance obtained on the 3.5 million home purchase loans included in the HMDA data. We estimate that the PMIs insured 20.5 percent, or 725,188, of these loans made in 1994. VA guaranteed another 6.2 percent of the mortgage market. Rural Housing Service (RHS) guaranteed less than 1 percent of home loans in 1994. The largest share of the home purchase loans, which we estimate to be about 58.5 percent or 2.1 million, was uninsured. (Table I.3 in app. I lists total home purchase loans reported by HMDA, by type of insurance.) We estimate that FHA insured about one-fifth (20.1 percent) of the approximately 1.1 million home purchase loans made to low-income borrowers in the 1994 HMDA data, more than the PMIs (16.2 percent) or VA (6.7 percent) insured, as shown in figure 3.2. FHA’s share of home purchase loans made to low-income borrowers was higher than its 14.7 percent share of the housing market. However, more than half of the low-income home purchase borrowers were uninsured; the percentage of uninsured home purchase loans made to low-income borrowers is about the same as for all borrowers. (A breakdown of all loans made in 1994 by income classification is contained in table I.4 of app. I.) Figure 3.3 shows the proportion of each insurer’s 1994 home purchase loans that were made to low-income borrowers. Forty-two percent of the home purchase loans that FHA made were to low-income borrowers. For PMIs, a smaller percentage (24.5 percent) of the home purchase loans they insured were for low-income borrowers. Thirty-four percent of VA home purchase loans and 30 percent of the uninsured home purchase loans were made to low-income borrowers. As discussed previously, FHA generally does not insure loans over the FHA loan limit (maximum amounts were $151,725 for most of 1994 and $155,250 in 1996 for the areas with the highest housing costs), while PMIs and VA can insure higher-value loans. Consequently, it is useful to know how the PMIs would compare with FHA when facing similar constraints. When comparing the home purchase loans that the PMIs made under the FHA loan limit, the share of the PMIs’ business that is composed of low-income borrowers increases.
According to data obtained from the Federal Reserve Board, in 1994, 33 percent of the PMI home purchase loans that were less than the FHA loan limit were made to low-income borrowers. These data also indicate that 45 percent of FHA’s business in 1994 was with low-income borrowers, similar to the 42 percent we found in the HMDA data. Figure 3.4 shows the percentage of each insurer’s home purchase loans made to borrowers in various income ranges. As shown, FHA and VA loans are more concentrated in the lower income ranges compared with PMI and uninsured loans. The Mortgage Bankers Association (MBA) also reports that the average income of FHA borrowers for calendar year 1993 was lower than the average income of the PMIs’ borrowers. (Table I.5 in app. I lists the percentage of each group’s loans that fall within the listed income ranges.) According to GAO’s analysis of the HMDA data, about 613,550 home purchase loans were made to minorities in 1994. FHA insured more loans for minority borrowers in 1994 than the PMIs and substantially more than VA. FHA insured about 147,423 loans to minority borrowers in 1994, compared with about 114,197 insured by PMIs. Figure 3.5 shows the relative share of the insured and uninsured minority market. Although the majority (2.9 million, or 83 percent) of all home purchase loans made in 1994 were to white borrowers, FHA’s 1994 loans consisted of a larger share of minority loans than any other insurer’s. As shown in figure 3.6, 28 percent of FHA loans were made to minorities. For the PMIs, 16 percent of their loans were made to minorities, while of the VA loans made in 1994, 22 percent were made to minorities. Even though only 15 percent (303,477) of the 2.1 million uninsured loans were made to minority borrowers, 49 percent of all minority loans made were uninsured. (Table I.6 in app. I lists the number of minority and nonminority home purchase loans made, by insurance type.) In addition, the Federal Reserve reported that in 1994, home purchase loans below the FHA loan limit made by PMIs were about 21 percent minority, while FHA home purchase loans were about 26 percent minority. Thus, even when considering only loans made under the FHA loan limit, FHA’s business was more concentrated among minority borrowers than the PMIs’. FHA insured a higher percentage of loans for first-time home buyers than its share of the market in 1994. However, non-FHA loan providers made about four times as many loans (79 percent) as FHA (21 percent) to first-time home buyers in 1994. According to MBA, in 1994, about 4.6 million home purchase mortgage loans were made in the home mortgage market. Of these loans, FHA insured 15 percent (686,487). Furthermore, MBA reported that about 2.2 million loans were made to first-time home buyers in 1994; FHA insured 21 percent of these loans (see fig. 3.7). The MBA report does not distinguish between privately insured and uninsured mortgages. These are combined with VA loans and make up the non-FHA group shown in figure 3.7. (Table I.7 in app. I lists the number of first-time home buyers for FHA and non-FHA home purchase mortgages.) Compared with others in the home mortgage market, on average, FHA made a higher proportion of its loans to first-time home buyers. MBA’s data show that about 67 percent of all 1994 FHA home purchase borrowers were first-time home buyers, while 44 percent of non-FHA home purchase borrowers were first-time home buyers. Figure 3.8 shows the share of FHA and non-FHA loans made to first-time home buyers.
In addition to the study by MBA, HUD’s Policy Development and Research Division also analyzed the characteristics of loans. The HUD study reported that for the period 1989-91, 66 percent of FHA home purchase borrowers were first-time home buyers, while 56 percent of PMIs’ home purchase borrowers were first-time home buyers. The study also reported that 87 percent of FHA’s first-time home buyers were 40 years old or younger. According to GAO’s analysis of the 1993 American Housing Survey (AHS), FHA insured 9 percent of home purchase mortgages originated in 1993 prior to the survey. This is lower than its 14.7 percent share of the home purchase mortgage market in 1994 in the HMDA database. The age distribution of borrowers, as indicated in the 1993 AHS data, shows that FHA borrowers tend to be younger than other borrowers. Sixty-two percent of FHA home purchase borrowers were less than 40 years old, while only 38 percent of the conventional home purchase loans were obtained by borrowers under 40. According to GAO’s analysis of the AHS data, of the 2.9 million home purchase loans made in 1993 prior to the survey, 41 percent were made to people under the age of 40, and 14 percent of these were insured by FHA. This is a larger share of this market segment than FHA’s share of the entire market for 1993. Conventional loans (private insurers and uninsured groups combined) accounted for 82 percent of the younger-than-40 submarket, and VA provided the remaining 4 percent. (Table I.8 in app. I lists, by borrowers’ age, the number of FHA, VA, and conventional loans made prior to the 1993 AHS.) FHA’s relative share of the insurance market varied from state to state. According to the HMDA data for 1994, although PMIs insured more home purchase loans than FHA nationwide, FHA insured more home purchase loans than the six PMIs combined in at least nine states—Arkansas, Maryland, Minnesota, Montana, Nevada, North Dakota, Oklahoma, Tennessee, and Utah. In all except 4 of the 50 states, FHA’s share was between 20 and 50 percent. In Iowa, Massachusetts, and Wisconsin, FHA’s business was less than 20 percent of the market. VA’s share of the insured market was the highest only in the state of Alaska. Figure 3.9 shows which of the three insurers—FHA, PMIs, or VA—made the greatest number of home purchase loans in each state during 1994. (Table I.9 in app. I lists FHA’s relative share of the insurance market in each state.) Of all home purchase loans made in 1994 with an LTV ratio of at least 90 percent, FHA insured 43 percent. The PMIs insured 37 percent, and VA guaranteed 19 percent. This occurred even though the PMIs accounted for 49 percent of the insured home purchase loans in 1994, FHA for about 35 percent, and VA for only 15 percent. Generally, a borrower is required to have mortgage insurance if the LTV ratio is above 80 percent. The LTV ratios of uninsured loans are generally below 80 percent. Figure 3.10 compares FHA, PMIs, and VA in the high-LTV-ratio market. In addition to the number of loans made with high LTV ratios, there was also a difference in the proportion of such loans made by FHA and the PMIs. This difference is demonstrated by figure 3.11, which shows that 88 percent of FHA’s loans had LTV ratios of at least 90 percent, compared with 55 percent of the PMI loans. In addition, up to 94 percent of VA loans guaranteed in 1994 had an LTV ratio greater than 91 percent.
Furthermore, as shown in figure 3.12, 65 percent of FHA-insured loans had LTV ratios of 95 percent or greater, while only about 8 percent of PMI loans had LTV ratios greater than 95 percent. VA, shown separately in figure 3.13, has the vast majority of its guaranteed loans concentrated in the greater-than-97-percent LTV range. (Tables I.10 and I.11 in app. I list the percentage of FHA, VA, and PMI home purchase loans within selected LTV ratio ranges.) While some FHA-insured home purchase loans might qualify for private mortgage insurance, most might not have been written under the same terms by private mortgage insurers. Specifically, on the basis of the PMIs’ most liberal standards for (1) maximum LTV, (2) housing-expense-to-income, and (3) total-debt-to-income ratios alone, about two-thirds of FHA’s 1995 home purchase borrowers would not qualify for private mortgage insurance on the loans they received. That is, these borrowers had loans with LTV ratios greater than 97 percent, had housing-expense-to-income ratios greater than 33 percent, or had total-debt-to-income ratios greater than 38 percent. Conversely, about one-third of FHA’s single-family home purchase borrowers met all three of the most liberal private mortgage insurance guidelines. These borrowers had loans with LTV ratios of 97 percent or lower, ratios of housing-expense-to-income of 33 percent or lower, and total-debt-to-income ratios of 38 percent or lower. This potential overlap—FHA-insured borrowers who may qualify for private mortgage insurance—is shown as the shaded area in figure 3.14. In addition, relatively fewer FHA first-time home buyers and borrowers with low incomes met all three of these ratios. That is, while 34.1 percent of all FHA home purchase borrowers met all three ratios, only 22.6 percent of FHA’s first-time home buyers and 14.5 percent of FHA’s low-income home purchase borrowers met all three ratios. We cannot say with certainty that if an FHA borrower meets all three of the PMIs’ guidelines, a PMI would insure that borrower’s mortgage. Similarly, with the possible exception of the LTV ratio, we cannot say categorically that an FHA borrower who does not meet one of the ratios would not qualify for private mortgage insurance. Also, this analysis does not consider the credit history of a borrower, which lenders and insurers must consider when underwriting a loan. Furthermore, the process of underwriting mortgage insurance requires some judgment on the part of the lender and insurer, and the debt-to-income ratios we employ in this analysis may be exceeded if there are compensating factors. Finally, there are other features of FHA and private mortgage insurance that may influence a borrower’s choice of mortgage insurance. A borrower may have sufficient financial resources to qualify for private mortgage insurance and choose FHA insurance instead so that he or she may invest the funds saved in an asset other than the home. Therefore, some FHA borrowers whom we have identified as not being able to qualify for private mortgage insurance on the loans they received may have been able to increase their down payments, thereby lowering their LTV and total-debt-to-income ratios and qualifying for private mortgage insurance on a smaller loan.
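As a rough illustration of the screen described above, the sketch below checks a borrower against the PMIs' most liberal published guidelines (LTV no greater than 97 percent, housing-expense-to-income no greater than 33 percent, and total-debt-to-income no greater than 38 percent). It is a simplified sketch only: as the text notes, it ignores credit history and compensating factors, and the sample borrower figures are hypothetical.

```python
# PMIs' most liberal published guidelines, as discussed in the text.
MAX_LTV = 0.97
MAX_HOUSING_RATIO = 0.33   # housing-expense-to-income
MAX_DEBT_RATIO = 0.38      # total-debt-to-income

def meets_liberal_pmi_guidelines(ltv, housing_ratio, debt_ratio):
    """Return True if the borrower is within all three guideline ratios.
    Credit history and compensating factors are not modeled."""
    return (ltv <= MAX_LTV
            and housing_ratio <= MAX_HOUSING_RATIO
            and debt_ratio <= MAX_DEBT_RATIO)

# Hypothetical borrower: $100,000 house, $4,000 down payment, $950 monthly
# housing expense, $1,250 total monthly debt, $3,100 gross monthly income.
ltv = (100_000 - 4_000) / 100_000   # 0.96
housing_ratio = 950 / 3_100         # about 0.31
debt_ratio = 1_250 / 3_100          # about 0.40
print(meets_liberal_pmi_guidelines(ltv, housing_ratio, debt_ratio))  # False: debt ratio exceeds 0.38
```

This hypothetical borrower fails only the total-debt-to-income screen, which, as the text cautions, does not by itself rule out private mortgage insurance if compensating factors are present.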
In its October 1995 report, HUD’s Office of Policy Development and Research found that most of FHA’s loans would not have been insured by PMIs because of differences in LTV or other noncredit factors, even before the companies considered differences in personal credit history. About 68 percent of FHA borrowers in 1995 were within the PMIs’ most liberal guideline for the LTV ratio (that is, they had an LTV ratio no greater than 97 percent). Conversely, about 32 percent of FHA borrowers in 1995 had LTV ratios that exceeded the maximum allowable LTV ratio under the PMIs’ most liberal guidelines. These borrowers with high LTV ratios may not have qualified for private mortgage insurance for the loans they received on the basis of their high LTV ratios alone. Under the PMIs’ recently initiated affordable programs, private mortgage insurers will insure loans with LTV ratios of up to 97 percent. Under their standard programs, private mortgage insurers will insure loans with LTV ratios of up to 95 percent. Only 37 percent of FHA borrowers in 1995 had LTV ratios of 95 percent or less, and an additional 31 percent had LTV ratios greater than 95 percent but not greater than 97 percent. The shares of FHA borrowers with LTV ratios of 95 percent or below and of 97 percent or below are shown by the shaded areas in figure 3.15. For the ratio of total-debt-to-income, about 60 percent of FHA borrowers in 1995 could meet the most liberal guidelines established by private mortgage insurers. That is, these borrowers had monthly payments for all debt that were no greater than 38 percent of their monthly income. The PMIs’ standard programs include guidelines of 36 percent for this ratio. Almost half of the mortgages insured by FHA in 1995 would have met this more restrictive ratio. The share of FHA-insured loans made in 1995 that went to borrowers with ratios of total-debt-to-income no greater than the maximums published by the PMIs is shown as the shaded areas in figure 3.16. As discussed previously, those FHA borrowers who did not meet the PMIs’ most liberal guideline for this ratio would not necessarily be precluded from obtaining private mortgage insurance. Almost all borrowers who received an FHA-insured mortgage for the purchase of a house in 1995 had ratios of housing-expense-to-income that were within the published guidelines of private mortgage insurers. That is, under the affordable programs of the PMIs, a borrower may have a monthly housing debt of up to 33 percent of his or her monthly income. Over 90 percent of FHA borrowers had housing debt that was within this guideline. Under their standard programs, the PMIs’ guidelines generally call for a ratio of housing-expense-to-income of no more than 28 percent. Three-quarters of FHA borrowers in 1995 met this guideline. That nearly all FHA borrowers meet the PMIs’ guidelines for housing-expense-to-income is not surprising because the guidelines established by the PMI companies are nearly the same as or more liberal than those of FHA for this particular ratio. The shaded areas of figure 3.17 show those borrowers who received an FHA-insured mortgage in 1995 and who would meet the ratio for housing-expense-to-income found in the guidelines of the PMIs. As with the ratio for total-debt-to-income, those FHA borrowers who did not meet the PMIs’ most liberal guideline for housing-expense-to-income would not necessarily be precluded from obtaining private mortgage insurance. FHA is a prominent player in the home mortgage loan market—particularly in certain market segments.
The loans it insured in 1994 were concentrated to a greater extent on low-income and minority borrowers, first-time home buyers, and borrowers with high LTV ratios than the loans insured by the PMIs. FHA was also the primary insurer in at least nine states. In addition, solely on the basis of the LTV and qualifying ratios of borrowers who obtained loans in 1995, most FHA borrowers might not have qualified for private mortgage insurance for the loans they received. Consequently, many FHA borrowers in 1995 may not have been able to obtain a home mortgage, or could have been delayed in obtaining one, without the more lenient terms offered by FHA. While FHA is a prominent participant in the home purchase mortgage loan market, it is not the major source of loans to home buyers, nor is it the major source of loans to low-income and minority home buyers. The uninsured market, with about four times the number of loans that FHA had, made about twice as many loans to such borrowers as FHA did in 1994. An official in HUD’s Office of Policy Development and Research suggested revising the methodology used for some of the analyses described in this chapter. In response, we adjusted our analyses of home purchase loans made in 1994, which used data from HMDA and MICA, to reflect that the HMDA data pertained to about 77 percent of all loans insured in 1994, while the MICA data pertained to all privately insured loans. We also adjusted our analyses to recognize that some loan records in the two data sets were missing geographical location codes and consequently were not being drawn into some analyses. In response to comments from an Executive Vice President of MICA, we added explanations to this chapter about the federal liability associated with FHA’s Section 203(b) program and differences in the way FHA and private mortgage insurers calculate loan-to-value ratios. This official also commented that our report underestimates (1) the percentage of FHA borrowers who would qualify for a privately insured loan, because compensating factors may enable a borrower to qualify even if he or she does not meet the ratios we considered, and (2) the importance of the role of Fannie Mae and Freddie Mac, because the report does not present those organizations’ criteria for purchasing loans. We disagree with these two comments. First, this chapter describes the limitations of the analyses presented on FHA borrowers who might qualify for private insurance. This analysis was not intended to determine with certainty how many of FHA’s borrowers would have qualified for a privately insured loan. To make such a determination would require considering many more factors than loan-to-value and qualifying ratios. Rather, as pointed out in this chapter, our analysis is intended to determine how many of FHA’s borrowers might have qualified for private mortgage insurance on the basis of the ratios alone. We point out further in this chapter that we cannot say definitively that an FHA borrower who does not meet all three private mortgage insurance guidelines would not qualify for private insurance. Similarly, if an FHA borrower meets all three private mortgage insurance guidelines, it cannot be said categorically that a private mortgage insurer would insure the borrower’s mortgage. In connection with the roles of Fannie Mae and Freddie Mac, we point out in this report that many guidelines pertaining to private mortgage insurance are set by these two secondary market institutions.
We also point out that these requirements include underwriting standards, insurance coverage requirements, and maximum loan amounts. Besides the FHA Section 203(b) and VA single-family loan programs described in chapter 2, the federal government promotes affordable homeownership through a complex web of at least 10 programs, through the requirements that it places upon the lenders and purchasers of mortgages, and through individual tax incentives. Although these tools differ in their scope and technique, the federal government uses these and other tools to promote homeownership. In comparison with FHA’s Section 203(b) program, over half of the other 10 programs require direct federal funds, all reach fewer persons, and they generally direct a greater proportion of assistance to low-income home buyers (those with incomes at or below 80 percent of an area’s median income). These programs provide home buyers with grants, direct loans, guaranties, interest subsidies, and other assistance in financing a home purchase, and in some instances they rely heavily upon FHA for mortgage insurance. These homeownership programs are run by the Departments of Agriculture, Housing and Urban Development, and Veterans Affairs; the Federal Home Loan Banks; state housing agencies; the Neighborhood Reinvestment Corporation, a government-funded corporation; and Neighborhood Housing Services of America, a government-funded nonprofit organization. The federal government also promotes affordable homeownership by placing upon certain lenders and the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac) special requirements for meeting housing finance needs. Specifically, the Community Reinvestment Act (CRA) encourages depository institutions and other lenders to meet the housing credit needs of the communities they serve, and the Federal Housing Enterprises Financial Safety and Soundness Act of 1992 places upon Fannie Mae and Freddie Mac numerical goals for the loans they purchase that are made to low- and moderate-income persons and are made in underserved areas. Because of the difficulties experienced in implementing the CRA and the relative newness of the goals set for Fannie Mae and Freddie Mac, it may be too soon to judge the effect of these special requirements. Finally, through the mortgage interest deduction, the one-time exclusion of capital gains, and other tax provisions, the federal government provides incentives for individuals to be homeowners. The Joint Committee on Taxation estimates that, for 1995, the mortgage interest deduction alone was the second largest tax expenditure that the government provides to individuals, totaling an estimated $53.5 billion—exceeding the total tax expenditures given to corporations. This chapter describes the federal programs that promote affordable homeownership and the applicable requirements recently placed upon Fannie Mae, Freddie Mac, banks, and thrifts. An analysis of the impact of individual tax incentives on affordable homeownership is beyond the scope of this study. Also, this chapter does not describe efforts that support homeownership in general, such as those of the secondary market, the banking system, and the fair lending requirements. In addition to the FHA and VA programs, many other federal programs are aimed at promoting affordable homeownership.
The state housing finance agencies (HFA), through the use of tax-exempt mortgage revenue bonds (MRB), may provide subsidized financing for affordable homeownership. The Federal Home Loan Bank (FHLBank) System has its Community Investment Program (CIP) and Affordable Housing Program (AHP), which provide subsidies and subsidized or otherwise below-market-rate advances to member institutions to be used to fund affordable housing projects and loans to home buyers. The Department of Agriculture, through the Rural Housing Service (RHS), operates a subsidized direct loan program for low- and very low-income rural Americans and a guaranteed loan program for moderate-income rural Americans. The Department of Housing and Urban Development operates three grant programs—the Community Development Block Grant (CDBG) program, the Home Investment Partnership Program (HOME), and Homeownership and Opportunity for People Everywhere (HOPE)—that promote affordable homeownership. The Neighborhood Reinvestment Corporation (NRC), through its network of local organizations (NWO) and its secondary market organization—Neighborhood Housing Services of America (NHSA)—promotes affordable homeownership primarily through second mortgages and home buyer education. (See app. II for detailed descriptions of each program included in our analysis.) Even within FHA, there are homeownership programs other than the Section 203(b) program. For example, FHA also offers mortgages for individual condominium units under Section 234(c), rehabilitation mortgages under the Section 203(k) program, home equity conversion mortgages under Section 255, and homeownership counseling. The Section 203(b) program, however, is FHA’s principal means of promoting affordable single-family homeownership. In 1995, about 60 percent of all FHA single-family mortgages were made under the Section 203(b) program. For the purposes of this chapter, we provide data for all of the federal programs, including the FHA Section 203(b) program, which we use as a guide for describing the other programs. These programs assist homeowners by providing loans, guarantees, interest subsidies, help with down payments and closing costs, or other forms of assistance. This assistance may go directly to the homeowner or through an intermediary, such as a local government or nonprofit organization. A homeowner may benefit from more than one program. For example, HOPE 3 funds may be used to help with closing costs on a loan made by a state HFA. The state HFA may obtain funding from MRBs as well as FHLBank System advances. The loan may be insured by FHA and securitized by the Government National Mortgage Association (Ginnie Mae). Each of the non-FHA/VA homeownership programs includes some form of targeting—typically, the income of the borrower. In some cases, the programs also include restrictions on the location of the property, such as with rural loans, or require repayment by the borrower of federal subsidies. Only the FHA, VA, and Rural Housing Service single-family loan programs are restricted by the size of the loans that may be insured, guaranteed, or made. Table 4.1 describes the type of assistance provided to homeowners and the restrictions imposed by federal homeownership programs. Except for the programs of the Federal Home Loan Banks and the state HFAs—which, like FHA’s Section 203(b) loan program, require no direct federal funds—all of the other homeownership programs use federal funds. These federal funds are used to pay for the subsidies and assistance provided and for the programs’ administration.
For example, the VA received $684 million in budget authority for fiscal year 1995 for the subsidy and administrative costs of its direct and guaranteed loan programs. For the same year, the Congress appropriated $50 million for the HOPE programs, of which HUD allocated $20 million to HOPE 3. It appropriated $1.4 billion for the HOME program, of which about $238 million was used for homeownership activities. For the CDBG program, the Congress appropriated $4.8 billion; 70 percent of this amount is for the entitlement cities program. Of this, we estimate that seven-tenths of 1 percent, or about $24 million, may go toward homeownership assistance. (The arithmetic is sketched following this discussion.) Even the exceptions listed above are not necessarily without costs to the federal government. For example, the AHP and CIP programs of the FHLBank System are paid for through the system’s earnings, and according to the Finance Board, no FHLBank has ever suffered losses on its advances. However, the federal government has paid for liquidating insolvent member institutions that had benefited from the use of system advances, and the cost of liquidation may have been higher where advances permitted a troubled institution to incur larger losses than it otherwise may have incurred. Furthermore, the government’s past willingness to assist troubled government-sponsored enterprises means that it may bear the costs of most of the losses that such enterprises may suffer in the future. Also, there is a cost to the federal government of the state HFAs’ mortgage revenue bond program if one considers the revenues lost as a result of the tax-exempt status of the securities issued by these organizations to fund housing activities. The Joint Committee on Taxation estimates that the tax expenditure for the tax-exempt mortgage revenue bonds for owner-occupied housing was $1.4 billion for fiscal year 1995. Some of these programs also use nonfederal sources of funds. For example, NHSA receives funding from private-sector institutional investors through the sale of secondary market notes backed by loans purchased by NHSA. Both the HOME and HOPE 3 programs require matching contributions of 25 percent from nonfederal sources. Through 1995, the Federal Housing Finance Board (FHFB) had approved 23 state HFAs as nonmember mortgagees, which allows them to obtain advances from the FHLBank System. The source of funds for each homeownership activity, including federal funds where appropriate, is shown in table 4.2. The amount of homeownership assistance provided by other federally supported programs varied widely among programs, in terms of both the dollars involved and the number of homeowners assisted. In all instances, FHA assisted a greater number of homeowners. In total, in a given year, these other programs may reach over 500,000 homeowners. In 1995, almost 570,000 homeowners received mortgage insurance through FHA insurance programs. During fiscal year 1995, 263,130 homeowners were assisted through the VA’s guaranteed loan program. The next greatest number of homeowners assisted was through the state HFAs, which made over 92,000 loans and issued almost 12,000 mortgage credit certificates (MCC) in 1994, totaling over $9 billion. In contrast, Neighborhood Housing Services of America purchased 1,133 first and second mortgages in fiscal year 1995, totaling $47.7 million; and the HOPE 3 program had assisted 1,396 homeowners as of December 1995.
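As a check on the CDBG homeownership estimate above, the arithmetic can be reproduced directly from the figures in the text (a minimal sketch using only the percentages stated above):

```python
cdbg_appropriation = 4.8e9          # fiscal year 1995 CDBG appropriation
entitlement_share = 0.70            # portion for the entitlement cities program
homeownership_fraction = 0.007      # seven-tenths of 1 percent

estimate = cdbg_appropriation * entitlement_share * homeownership_fraction
print(f"${estimate / 1e6:.1f} million")  # about $24 million
```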
The characteristics of the home buyers who were assisted were not always available for each of the programs. In general, where data were available, these other programs were more heavily concentrated in assistance provided to homeowners who were low-income, minority, and first-time buyers than was the case for FHA, which balances its risk by insuring a broad range of borrowers and thereby operates without federal subsidies. For example, 30 percent of the borrowers insured under FHA’s Section 203(b) program had incomes no greater than 80 percent of the area’s median income. Sixty-four percent of the homeowners assisted through state HFA mortgages and mortgage credit certificates and 69 percent of new homeowners assisted by the Neighborhood Reinvestment Corporation programs had low incomes. For the entitlement cities part of the Community Development Block Grant program and the HOME program, the percentages of homeowners assisted who had low incomes were 94 and 100, respectively. For the Federal Home Loan Banks’ Affordable Housing Program and the HOPE 3 program, all homeowners assisted must have incomes no greater than 80 percent of the area’s median income. In connection with the race of the homeowners assisted through these various programs, all of the programs with available data served proportionately more minorities than did FHA, with the exception of the state and RHS programs. For example, while about 30 percent of the borrowers insured under the FHA Section 203(b) program were minorities, the state and RHS programs’ percentages were about 22 and 27, respectively, and the NRC, CDBG, HOME, and HOPE 3 programs’ percentages were 61, 65, 50, and 62, respectively. There were very few data on the percentage of assisted homeowners who were first-time home buyers. With the exception of the VA program, the other programs for which data were available reported higher percentages of first-time home buyers than FHA. The NRC reported that 97 percent of the homeowners assisted by NWOs were first-time home buyers. For the HOPE 3 program, all assisted homeowners must be first-time home buyers, and for the state MRB/MCC programs, applicants may not have owned a home in the last 3 years. For FHA’s Section 203(b) program, about 61 percent of the borrowers insured were first-time home buyers. Just over half of VA borrowers were first-time home buyers. (See table 4.3.) The extent to which each of the non-FHA programs utilizes mortgage insurance is not completely known. Three programs provide mortgage insurance or a similar enhancement: VA and the Rural Housing Service guarantee mortgages, and seven state HFAs self-insure mortgages. Loans made by state HFAs are almost always insured, mostly by FHA. In 1994, FHA insured over 55 percent of the loans made by state HFAs; VA accounted for over 8 percent. There were no data on the extent to which homeowners assisted through the FHLBanks’ AHP and CIP programs had mortgage insurance; because member institutions may keep loans in their portfolios, they may not require mortgage insurance on these loans. The use of mortgage insurance on individual loans made by NWOs was not known, but both GE Capital Mortgage Corporation and Mortgage Guaranty Insurance Corporation provide mortgage insurance on special loan products offered through the NeighborWorks Ownership that allow for higher LTV ratios.
Furthermore, the PMI Mortgage Insurance Company provides pool insurance for first loans purchased from Neighborhood Housing Services of America by the World Savings and Loan Association. The use of mortgage insurance in relation to two of HUD’s grant programs—CDBG and HOME—is not known. For the HOPE 3 program, about 19 percent of home buyers financed their home purchases using FHA insurance. The federal government also promotes affordable homeownership through requirements that it places upon lenders and purchasers of mortgages. Specifically, the Community Reinvestment Act encourages certain lenders to meet the credit needs of the areas that they serve, including low- and moderate-income areas; and the Federal Housing Enterprises Financial Safety and Soundness Act of 1992 contains provisions that require Fannie Mae and Freddie Mac to meet certain goals related to the purchase of mortgages made to low- and moderate-income borrowers and in low- and moderate-income areas. Both the lenders and Fannie Mae and Freddie Mac have taken actions to better meet the credit needs of low- and moderate-income home purchasers. However, given the difficulties experienced in implementing the CRA and the relative newness of the social goals, it may be too soon to judge the impact these requirements will have on affordable homeownership. The Congress enacted the CRA in 1977 to encourage banks to provide credit to their entire market areas, including low- and moderate-income areas. The CRA requires federal bank and thrift regulators—the Federal Reserve Board, the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision—to evaluate during periodic examinations the extent to which banks are fulfilling their lending, investment, and service responsibilities to their areas. In connection with lending, the regulator evaluates a bank’s record of helping to meet the credit needs of its area through its lending activities by looking at such indicators as the geographic distribution of the bank’s loans, including the incomes of areas and borrowers, and the extent to which the bank uses innovative or flexible lending practices. On the basis of the results of these assessments, the regulators assign the banks one of four overall CRA ratings, ranging from “outstanding” to “substantial noncompliance.” The CRA is limited to depository institutions, such as banks and thrifts. These institutions originated about 46 percent of all home mortgages made in 1994. Mortgage companies—the primary providers of mortgages for single-family homes in 1994—are not subject to the CRA. An institution’s CRA rating may affect approval by the regulators of certain types of applications, an institution’s access to FHLBank advances, and the public’s perception of the institution. The regulators are required to take a depository institution’s CRA rating into account when considering applications for expansions, such as mergers and acquisitions. In addition, an FHLBank System member’s access to the long-term advances used to finance residential mortgage lending is tied, in part, to its CRA rating. An institution’s CRA rating and related information must also be available to the public for review. Finally, the CRA affords community groups and other members of the public the opportunity to protest an institution’s application for establishing a deposit facility. The CRA and the fair lending laws have related objectives.
The primary purpose of the CRA was to prohibit “redlining”—arbitrarily failing to provide credit to low- and moderate-income neighborhoods. The Fair Housing Act and the Equal Credit Opportunity Act prohibit lending discrimination that is based on certain characteristics of the potential and actual borrowers. In addition, the Home Mortgage Disclosure Act (HMDA) provides regulators and the public with information on mortgage applications. In November 1995, we issued a report analyzing the implementation of the CRA. Because of difficulties in implementing the CRA and the relative newness of reforms intended to address these difficulties, it may be too soon to judge the impact of the CRA. Yet even with the disagreements over the implementation of the CRA, bank and thrift regulators report some actions taken to better meet the needs of underserved communities. In connection with difficulties in implementing the CRA, we reported in November 1995 that because of the concerns of those lenders subject to the CRA about the burden it presents and the concern of community groups about the enforcement of the CRA, the regulators responsible for enforcing the CRA undertook a series of public hearings in 1993 and revised the regulations for the CRA in May 1995. We reported that some of the difficulties that have hindered past efforts to implement the CRA—differences in examiners’ training and experience, insufficient information to assess institutions’ CRA performance, and insufficient time for examiners to complete their responsibilities—will likely continue to challenge the regulators as they implement the revised regulations. According to bank and thrift regulators, despite the difficulties in implementing the CRA, it has played an increasingly important role in improving access to credit in communities, and many banks and thrifts, under the impetus of the CRA, have opened new branches, provided expanded services, and made substantial commitments to increase lending to all qualified borrowers within their areas. As we reported in November 1995, some bankers may lower the relatively high transaction costs and perceived credit risks to individual institutions of community reinvestment loans by sharing those costs and risks through multi-institution programs. Regulators found that bankers who had effective CRA performance had undertaken initiatives such as borrower education and counseling, community outreach efforts, flexible underwriting standards or policies, and participation in government-sponsored lending programs. In addition, some major participants in the secondary markets have recently undertaken initiatives intended to make them more responsive to community development concerns, as discussed in the following section. The secondary mortgage market is the market in which mortgages and mortgage-backed securities are bought and sold. The Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation are government-sponsored enterprises (GSE) that operate a secondary mortgage market in which they purchase mortgages from lenders in exchange for cash or mortgage-backed securities/participation certificates. The mortgages that Fannie Mae and Freddie Mac may purchase are limited to those in amounts less than a legislative limit known as the conforming loan limit. This limit is adjusted on the basis of a formula; for 1996, the limit is $207,000 for single-unit, single-family residences.
In addition, the GSEs are restricted to purchasing and securitizing only residential mortgages, are obligated to be active in the secondary market across the country at all times, and must comply with capital requirements and safety and soundness regulations issued by the Office of Federal Housing Enterprise Oversight. The GSEs accept these restrictions on their activities in exchange for the benefits of their federal charters. An important indirect benefit is that investors perceive an implied federal guarantee on their obligations, which allows Fannie Mae and Freddie Mac to borrow at near-Treasury rates. Direct benefits include (1) $2.25 billion in conditional lines of credit with the Department of the Treasury, (2) exemptions from state and local corporate income taxes, and (3) exemptions from the Securities and Exchange Commission’s registration requirements for their securities. In addition to Fannie Mae and Freddie Mac, the secondary mortgage market is served by the Government National Mortgage Association (which is limited to securitizing loans insured or guaranteed by the federal government) and private conduits (private companies that purchase mortgages and sell mortgage-backed securities). In the first quarter of 1995, about 48 percent of outstanding single-family mortgage debt was held in mortgage pools. Fannie Mae and Freddie Mac accounted for about 62 percent of this debt, Ginnie Mae for 27 percent, and private conduits for about 11 percent. The Congress requires Fannie Mae and Freddie Mac to support mortgage lending for low- and moderate-income persons and for residents of areas where home loans may be difficult to obtain. Their charters charge the GSEs with providing ongoing assistance to the secondary market for home mortgages—including the market for mortgages for low- and moderate-income families. More recently, the Federal Housing Enterprises Financial Safety and Soundness Act of 1992 required the Secretary of HUD to establish housing goals for Fannie Mae’s and Freddie Mac’s purchases of mortgages for low- and moderate-income families; housing located in central cities, rural areas, and other underserved areas; and special affordable housing meeting the unaddressed housing needs of targeted families. The act established interim annual goals for the 2-year period beginning on January 1, 1993. These annual goals were that (1) 30 percent of the total number of dwelling units financed by the mortgage purchases of each enterprise shall be for low- and moderate-income families; (2) 30 percent of the total number of dwelling units financed by the mortgage purchases of each enterprise shall be mortgages on properties located in central cities; and (3) the mortgage purchases of Fannie Mae shall include not less than $2 billion ($1.5 billion for Freddie Mac) in “special affordable” mortgages, split evenly between mortgages on single-family and multifamily housing. In October 1993, HUD published interim goals for the GSEs, setting the low- and moderate-income goal for Fannie Mae at 30 percent for 1993 and 1994. The goal for Freddie Mac was 28 percent for 1993 and 30 percent for 1994. HUD set the central cities goal for 1993 at 28 percent for Fannie Mae and 26 percent for Freddie Mac. Both had a goal of 30 percent for 1994. The goals for 1995 were kept at the 1994 levels. The goals for 1993, 1994, and 1995 are shown in table 4.4. In February 1995, HUD proposed goals to increase Fannie Mae’s and Freddie Mac’s affordable housing purchase requirements.
HUD issued the final regulations in December 1995, specifying, among other things, the goals for 1996. The goals for 1996 increased to 40 percent the portion of dwelling units financed for low- and moderate-income borrowers. The regulations set the central cities housing goal for 1996 at 21 percent and expanded the areas to be included in this goal to cover rural and other underserved areas along with central cities. The special affordable housing goal for 1996 required that 12 percent of the total number of dwelling units financed by each GSE’s mortgage purchases be in mortgages for low-income families in low-income areas and for very low-income families. Table 4.5 shows the affordable housing goals for 1996. During the first 2 years of the social goals, an increasing proportion of the loans the GSEs purchased were made to persons in targeted income groups and locations. However, it may be too soon to judge the impact that the social goals ultimately may have. According to HUD’s data, the GSEs purchased a greater proportion of loans made to low- and moderate-income persons in 1994 than they did in 1993—up 10 percentage points for Fannie Mae and 9 percentage points for Freddie Mac. The same is true for loans made in central cities—up 6 percentage points for Fannie Mae and 1 percentage point for Freddie Mac. HUD further reports that these gains appear to have been achieved without significant adverse impact on the GSEs’ financial condition. With the exception of the 1993 goal for central cities, Fannie Mae exceeded its goals for 1993 and 1994. Freddie Mac was unable to meet the central cities goal for both years and was unable to meet the special affordable housing goal for multifamily housing for the period 1993 through 1994. For 1995, Fannie Mae met or exceeded each of its housing goals, and Freddie Mac exceeded the low- and moderate-income and special affordable housing goals but did not meet the goal for loans in central cities. In recent years, both GSEs have undertaken efforts to make their underwriting guidelines more flexible and to develop new loan products that require less cash to obtain a home. For example, Fannie Mae’s Community Home Buyer’s Program allows borrowers to make a down payment of 5 percent from their own funds and to qualify with housing expense and total debt ratios of 33/38 (or higher with compensating factors). Fannie Mae recently added the Fannie 97 mortgage product to its community lending product line. Borrowers need only a 3 percent down payment from their own funds; family members, nonprofit groups, or government agencies are eligible to pay the closing costs. For a 30-year term, the qualifying ratios for a Fannie 97 mortgage product are the same as for the standard product—28/36. Freddie Mac’s Affordable Gold program provides for 95-percent LTV ratio loans with what is called a 3/2 option. Under this option, borrowers need only 3 percent of the value of the loan from their own funds, with the remaining 2 percent coming from a gift, a grant, or an unsecured loan. In connection with qualifying ratios, Freddie Mac’s affordable program has no maximum housing expense ratio, and the total debt ratio is 38 to 40. (The qualifying-ratio arithmetic is illustrated in the sketch following this discussion.) Both GSEs require home buyer counseling for certain affordable products. A lender wishing to sell a loan to either of the GSEs must meet the GSEs’ underwriting standards. Those standards require credit enhancement—typically, mortgage insurance—for loans with LTV ratios greater than 80 percent.
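To illustrate how the qualifying-ratio pairs cited above work in practice, the sketch below converts a pair such as 28/36 or 33/38 into maximum allowable monthly payments for a given gross monthly income. This is a simplified illustration of the arithmetic only; the $3,000 income figure is hypothetical, and actual underwriting involves additional factors such as compensating factors and credit history.

```python
def max_payments(gross_monthly_income, ratios):
    """Given a (housing, total debt) qualifying-ratio pair expressed in
    percent, return the maximum monthly housing payment and the maximum
    total monthly debt payment that the pair implies."""
    housing_pct, debt_pct = ratios
    return (gross_monthly_income * housing_pct / 100,
            gross_monthly_income * debt_pct / 100)

income = 3_000  # hypothetical gross monthly income
for name, ratios in [("standard 28/36", (28, 36)),
                     ("community lending 33/38", (33, 38))]:
    housing, debt = max_payments(income, ratios)
    print(f"{name}: housing up to ${housing:,.0f}, total debt up to ${debt:,.0f}")
```

At this hypothetical income, moving from the 28/36 standard ratios to the 33/38 community lending ratios raises the allowable housing payment from $840 to $990 a month, which is the mechanism by which the more liberal ratios expand borrowing capacity.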
The lender selects a mortgage insurer from those that are approved by the GSEs. Any of the loans with high LTV ratios that are part of the effort to reach low- and moderate-income borrowers and borrowers located in central cities and other underserved areas require mortgage insurance or other credit enhancement. However, as is the case with Fannie Mae’s portfolio in general, about two-thirds of the loans counted toward the social goals had LTV ratios of 80 percent or less and therefore generally would not require mortgage insurance. Specifically, Fannie Mae reports that for 1995, 65 percent of the dwelling units that counted toward the low- and moderate-income goal had LTV ratios of 80 percent or less. For the central city goal, the percentage was 64 percent, and for the special affordable goal, it was 64 percent. In comparison, about 65 percent of all mortgages acquired by Fannie Mae had LTV ratios of 80 percent or less. For Freddie Mac, the percentage of loans that counted toward the social goals varied, but relatively more of these loans than of Fannie Mae’s had LTV ratios of 80 percent or less and therefore generally would not require mortgage insurance. Specifically, for Freddie Mac, the percentages of units that counted toward the three goals in 1995 and had LTV ratios of 80 percent or less were 74, 67, and 79 percent. The percentage of all mortgages acquired by Freddie Mac that had LTV ratios of 80 percent or below was 70 percent in 1995. In comparison with the total mortgages acquired, Fannie Mae had relatively more of the loans that counted toward its social goals with LTV ratios above 90 percent, while Freddie Mac had relatively fewer for two of the social goals. Overall, 17.9 percent of single-family mortgages purchased by Fannie Mae in 1995 had LTV ratios above 90 percent. For Freddie Mac, the figure was 13.5 percent. While Fannie Mae purchased few loans with LTV ratios above 95 percent in 1995, such loans were relatively more common among those counted toward the social goals. Specifically, while 2.2 percent of the loans Fannie Mae purchased in 1995 had LTV ratios greater than 95 percent, the proportions of loans counted toward the low- and moderate-income, central cities, and special affordable housing goals that had LTV ratios greater than 95 percent were 5.4, 2.9, and 6.1 percent, respectively. Freddie Mac purchased almost no loans with LTV ratios above 95 percent in 1995; none were counted toward the social goals. Of the programs used by the federal government to promote affordable homeownership, FHA’s Section 203(b) mortgage insurance program reaches more homeowners than does any other program; and in some instances, FHA’s insurance is used in conjunction with other programs. Where the use of mortgage insurance is known, two of the other programs used FHA mortgage insurance—in one instance for 60 percent of the loans made and in another instance for 19 percent of the buyers assisted. However, according to available data, FHA’s program in many instances is not as focused on low-income and minority homeowners and first-time home buyers as are the other nine programs. While the other programs are generally more targeted to these underserved borrowers, they often have a cost to the federal government. In contrast, the costs of FHA’s Section 203(b) program are paid by the program’s participants and not by the U.S. Treasury.
In comparison with all of these programs, the requirements placed upon certain lenders and purchasers of mortgages may have the greatest potential for promoting affordable homeownership, although the extent to which these requirements affect lenders’ behavior is not clear. Finally, the most pervasive government incentive for homeownership—though not targeted to low-income home buyers—is the deduction of the interest on home mortgages from an individual’s taxable income. In response to comments from the Managing Director of the FHFB, we made a number of revisions, including clarifying our discussion of the potential federal government liability associated with advances provided by the FHLBank System to member institutions. However, in contrast with the Managing Director’s comments, we continue to believe that there is a potential federal cost associated with such advances because the federal government has paid for liquidating insolvent member institutions. Although the federal government has incurred no direct costs due to FHLBank advances, the costs the federal government could incur for liquidating insolvent member institutions may be higher when member institutions have been provided additional resources for lending through the advances. In addition, government sponsorship of the FHLBank System creates potential liabilities for the federal government. For these reasons, we retained the discussion of this potential cost in our report.
Pursuant to a congressional request, GAO provided information on the Federal Housing Administration's (FHA) role in helping people to obtain home mortgages. GAO found that: (1) FHA and Department of Veterans Affairs (VA) programs allow borrowers to make smaller downpayments and carry higher total-debt-to-income ratios than private mortgage insurers (PMI) allow; (2) FHA programs finance closing costs as a part of the mortgage, insure loans up to $155,250, and provide close to full insurance coverage to lenders; (3) FHA insured 15 percent of the single-family housing market in 1994; (4) FHA defines low-income homebuyers as those with incomes no greater than 80 percent of the median income of the metropolitan statistical area; (5) FHA insures more home purchase mortgages than PMI or VA; (6) two-thirds of FHA-approved loans would not have qualified for PMI; (7) the maximum loan amount for an FHA single-family home mortgage is the lesser of 95 percent of the median house price or 75 percent of the Federal Home Loan Mortgage Corporation's loan limit; (8) the federal government promotes affordable homeownership through several HUD and other federal programs; (9) most of these programs require federal funds, and homebuyers may combine their assistance with FHA mortgage insurance; and (10) FHA programs promote homeownership among home buyers who are typically underserved by other agencies and PMI.
Since the 1960s, the United States has used polar-orbiting and geostationary satellites to observe the earth and its land, ocean, atmosphere, and space environments. Polar-orbiting satellites constantly circle the earth in a nearly north-south orbit, providing global coverage of conditions that affect the weather and climate. As the earth rotates beneath it, each polar-orbiting satellite views the entire earth’s surface twice a day. In contrast, geostationary satellites maintain a fixed position relative to the earth from a high orbit of about 22,300 miles in space. Both types of satellites provide a valuable perspective of the environment and allow observations in areas that may be otherwise unreachable. Used in combination with ground, sea, and airborne observing systems, satellites have become an indispensable part of monitoring and forecasting weather and climate. For example, polar-orbiting satellites provide the data that go into numerical weather prediction models, which are a primary tool for forecasting weather days in advance—including forecasting the path and intensity of hurricanes. Geostationary satellites provide the graphical images used to identify current weather patterns and provide short-term warnings. These weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Federal agencies are currently planning and executing major satellite acquisition programs to replace existing polar and geostationary satellite systems that are nearing the end of their expected life spans. However, these programs have troubled legacies of cost increases, missed milestones, technical problems, and management challenges that have resulted in reduced functionality and major delays to planned launch dates over time. We and others—including an independent review team reporting to the Department of Commerce and its Inspector General—have raised concerns that problems and delays on environmental satellite acquisition programs will result in gaps in the continuity of critical satellite data used in weather forecasts and warnings. According to officials at the National Oceanic and Atmospheric Administration (NOAA), a polar satellite data gap would result in less accurate and timely weather forecasts and warnings of extreme events, such as hurricanes, storm surges, and floods. Such degradation in forecasts and warnings would place lives, property, and our nation’s critical infrastructures in danger. The importance of having such data available was highlighted in 2012 by the advance warnings of the path, timing, and intensity of Superstorm Sandy. Given the criticality of satellite data to weather forecasts, concerns that problems and delays on the new satellite acquisition programs will result in gaps in the continuity of critical satellite data, and the impact of such gaps on the health and safety of the U.S. population, we concluded that the potential gap in weather satellite data is a high-risk area, and we added it to our High-Risk List in February 2013. For over 40 years, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite series, which is managed by NOAA, and the Defense Meteorological Satellite Program, which is managed by the Air Force.
Currently, there is one operational Polar-orbiting Operational Environmental Satellite and two operational Defense Meteorological Satellite Program satellites that are positioned so that they cross the equator in the early morning, midmorning, and early afternoon. In addition, the government relies on data from a European satellite, called the Meteorological Operational satellite. With the expectation that combining the Polar-orbiting Operational Environmental Satellite program and the Defense Meteorological Satellite Program would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and the Department of Defense (DOD) to converge the two satellite programs into a single satellite program—the National Polar-orbiting Operational Environment Satellite System (NPOESS)—capable of satisfying both civilian and military requirements. To manage this program, DOD, NOAA, and the National Aeronautics and Space Administration (NASA) formed a tri-agency integrated program office. However, in the years after the program was initiated, NPOESS encountered significant technical challenges in sensor development, program cost growth, and schedule delays. Specifically, within 8 years of the contract’s award, program costs grew by over $8 billion, and launch schedules were delayed by over 5 years. In addition, as a result of a 2006 restructuring of the program, the agencies reduced the program’s functionality by decreasing the number of originally planned satellites, orbits, and instruments. Even after this restructuring, however, the program continued to encounter technical issues, management challenges, schedule delays, and further cost increases. Therefore, in August 2009, the Executive Office of the President formed a task force, led by the Office of Science and Technology Policy, to investigate the management and acquisition options that would improve the program. As a result of this review, the Director of the Office of Science and Technology Policy announced in February 2010 that NOAA and DOD would no longer jointly procure NPOESS; instead, each agency would plan and acquire its own satellite system. Specifically, NOAA would be responsible for the afternoon orbit, and DOD would be responsible for the early morning orbit. The partnership with the European satellite agencies for the midmorning orbit would continue as planned. After the decision to disband NPOESS, DOD established its Defense Weather Satellite System program office and modified its contracts accordingly before deciding in early 2012 to terminate the program and reassess its requirements (as directed by Congress). NOAA estimates that the cost of its replacement program, the Joint Polar Satellite System (JPSS), will be $11.3 billion through fiscal year 2025. The current anticipated launch date for the first JPSS satellite is March 2017, with a second satellite to be launched in December 2022. Over the last several years, we have issued a series of reports on the NPOESS program—and the transition to JPSS—that highlight the technical issues, cost growth, key management challenges, and key risks of transitioning from NPOESS to JPSS. In these reports, we made multiple recommendations to, among other things, improve executive-level oversight and establish mitigation plans for risks associated with pending polar satellite data gaps. NOAA has taken steps to address our recommendations, including taking action to improve executive-level oversight and working to establish a contingency plan to mitigate potential gaps in polar satellite data.
We subsequently assessed NOAA’s progress in implementing both of these recommendations in our reports being issued today. In addition to the polar-orbiting satellites, NOAA operates GOES as a two-satellite geostationary satellite system that is primarily focused on the United States. The GOES-R series is the next generation of satellites that NOAA is planning; the satellites are planned to replace existing weather satellites that will likely reach the end of their useful lives in about 2015. NOAA is responsible for GOES-R program funding and overall mission success. The NOAA Program Management Council, which is chaired by NOAA’s Deputy Undersecretary, is the program oversight body for the GOES-R program. However, since it relies on NASA’s acquisition experience and technical expertise to help ensure the success of its programs, NOAA implemented an integrated program management structure with NASA for the GOES-R program. Within the program office, there are two project offices that manage key components of the GOES-R system. NOAA has delegated responsibility to NASA to manage the Flight Project Office, including awarding and managing the spacecraft contract and delivering flight-ready instruments to the spacecraft. The Ground Project Office, managed by NOAA, oversees the Core Ground System contract and satellite data product development and distribution. NOAA has made a number of changes to the program since 2006, including the removal of certain satellite data products and a critical instrument (the Hyperspectral Environmental Suite). In February 2011, as part of its fiscal year 2012 budget request, NOAA requested funding to begin development for two additional satellites in the GOES-R series. The program estimates that the development for all four satellites in the GOES-R series is to cost $10.9 billion through 2036. In August 2013, NOAA announced that it would delay the launch of the GOES-R and S satellites from October 2015 and February 2017 to the second quarter of fiscal year 2016 and the third quarter of fiscal year 2017, respectively. These are the current anticipated launch dates of the first two GOES-R satellites; the last satellite in the series is planned for launch in 2024. In September 2010, we recommended that NOAA develop and document continuity plans for the operation of geostationary satellites that include the implementation procedures, resources, staff roles, and time tables needed to transition to a single satellite, a foreign satellite, or other solution. In September 2011, the GOES-R program provided a draft plan documenting a strategy for conducting operations if there were only a single operational satellite. In June 2012, we reported that, in order to oversee GOES-R contingency funding, senior managers at NOAA should have greater insight into the amount of contingency reserves set aside for each satellite in the program and detailed information on how reserves are being used on both the flight and ground components. We recommended that the program assess and report to the NOAA Program Management Council the reserves needed for completing remaining development for each satellite in the series. We also found that unresolved schedule deficiencies remain in portions of the program’s integrated master schedule, including subordinate schedules for the spacecraft and core ground system. We recommended that the program address shortfalls in schedule management practices, and NOAA has since taken steps to improve these practices.
We subsequently assessed NOAA’s progress in implementing both of these recommendations in our reports being issued today. NOAA has made progress towards JPSS program objectives of sustaining the continuity of NOAA’s polar-orbiting satellite capabilities through the Suomi National Polar-orbiting Partnership (S-NPP), JPSS-1, and JPSS-2 satellites by (1) delivering S-NPP data to weather forecasters and (2) completing significant instrument and spacecraft development for the JPSS-1 satellite. However, the program has experienced delays on the ground system schedules for the JPSS-1 satellite. Moreover, the program is revising its scope and objectives to reduce costs and prioritize NOAA’s weather mission. The JPSS program has made progress on S-NPP since its launch. For example, in November 2012 the office completed an interim backup command and control facility that could protect the health and safety of the satellite if unexpected issues occurred at the primary mission operations facility. Also, since completing satellite activation and commissioning activities in March 2012, the JPSS program has been working to calibrate and validate S-NPP products in order to make them precise enough for use in weather-related operations by October 2013. While the program office plans to have 18 products validated for operational use by the end of September 2013, it is behind schedule for other products. Specifically, the program expects to complete validating 35 S-NPP products by the end of September 2014 and one other product by the end of September 2015, almost one and two years later, respectively, than originally planned. In order to sustain polar-orbiting earth observation capabilities beyond S-NPP, the program is working to complete development of the JPSS-1 systems in preparation for a March 2017 launch date. To manage this initiative, the program office organized its responsibilities into two separate projects: (1) the flight project, which includes sensors, spacecraft, and launch vehicles, and (2) the ground project, which includes ground-based data processing and command and control systems. JPSS projects and components are at various stages of system development. The flight project has nearly completed instrument hardware development for the JPSS-1 satellite and has begun testing certain instruments. Key testing milestones and delivery dates for the instruments and spacecraft have generally held constant since the last key decision point in July 2012, and both the instruments and the spacecraft are generally meeting expected technical performance. All instruments are scheduled to be delivered to the spacecraft by 2014. Also, the flight project completed a major design review for the JPSS-1 satellite’s spacecraft. The JPSS ground project has also made progress in developing the ground system components. However, the ground project experienced delays in its planned schedule due to issues with the availability of facilities required for hardware installation, software development, and testing. Consequently, the program has replanned the ground project schedule and is merging the next two major software releases. As a result, any complications in the merged ground system upgrades could affect the system’s readiness to support the JPSS-1 launch date.
While NOAA is moving forward to complete product development on the S-NPP satellite and system development on the JPSS-1 satellite, the agency recently made major revisions to the program’s scope and planned capabilities and is moving to implement other scope changes as it finalizes its plans pending coordination with congressional committees. We previously reported that, as part of its fiscal year 2013 budget process, NOAA was considering removing selected elements of the program in order to reduce total program costs from $14.6 billion to $12.9 billion. By October 2012, NOAA had reduced the program’s scope by, among other things, reducing the previously planned network of 15 ground-based receptor stations to two receptor sites at the north pole and two sites at the south pole and increasing the time it takes to obtain satellite data and deliver it to the end user on JPSS-2 from 30 minutes to 80 minutes. More recently, as proposed by the administration, NOAA began implementing additional changes in the program’s scope and objectives in order to meet the agency’s highest-priority needs for weather forecasting and reduce program costs from $12.9 billion to $11.3 billion. In this latest round of revisions, NOAA revised the program’s scope by, among other things, transferring requirements for certain climate sensors to NASA, creating a new Polar Free Flyer program within NOAA that would be responsible for missions supporting continued solar measurements and user service systems, and reducing the JPSS program’s mission life cycle by 3 years—from 2028 to 2025. The changes NOAA implemented over the last 2 years will have an impact on those who rely on polar satellite data. Specifically, satellite data products will be delivered more slowly than anticipated because of the reduction in the number of ground stations, and military users may not obtain the variety of products once anticipated, or at the rates once anticipated, because of the removal of their ground-based processing subsystems. As NOAA moves to implement these program changes, it will be important to assess and understand the impact the changes will have on satellite data users. According to our draft guidance on best practices in scheduling, the success of a program depends, in part, on having an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others. The JPSS program office provided a preliminary integrated master schedule in June 2013, but this schedule is incomplete. The schedule contains the scope of work for key program components, such as the JPSS-1 and JPSS-2 satellites and the ground system, and cites linkages to more detailed component schedules. However, significant weaknesses exist in the program’s schedule. Specifically, about one-third of the schedule is missing logical relationships called dependencies that are needed to depict the sequence in which activities occur. Complete network logic between all activities is essential if the schedule is to correctly forecast the start and end dates of activities within the plan. Program documentation acknowledges that this schedule is not yet complete and the program office plans to refine it over time. Until the program office completes its integrated schedule and includes logically linked sequences of activities, it will lack the information it needs to effectively monitor development progress, manage dependencies, and forecast the JPSS-1 satellite’s completion and launch.
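To illustrate why missing network logic matters, the sketch below runs a standard critical-path forward and backward pass over a small, entirely hypothetical activity network; the names, durations, and links are invented, not drawn from the JPSS schedule. Dropping a single dependency silently produces an optimistic completion forecast, which is exactly the risk described above:

```python
# Hypothetical activities: name -> (duration in days, predecessors),
# listed in topological order (each activity appears after its predecessors).
acts = {
    "instrument_test":   (60,  []),
    "spacecraft_integ":  (90,  ["instrument_test"]),
    "ground_sw_release": (200, []),
    "end_to_end_test":   (30,  ["spacecraft_integ", "ground_sw_release"]),
    "launch_prep":       (45,  ["end_to_end_test"]),
}

def analyze(activities):
    # Forward pass: earliest start/finish for each activity.
    es, ef = {}, {}
    for name, (dur, preds) in activities.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    end = max(ef.values())
    # Backward pass: latest start/finish; total float = latest - earliest start.
    ls, lf = {}, {}
    for name in reversed(list(activities)):
        succs = [s for s, (_, preds) in activities.items() if name in preds]
        lf[name] = min((ls[s] for s in succs), default=end)
        ls[name] = lf[name] - activities[name][0]
    return end, {name: ls[name] - es[name] for name in activities}

end, floats = analyze(acts)
print(end, floats)  # 275 days; the zero-float activities form the critical path

# Omit one logical link -- the kind of gap found in about a third of the
# integrated master schedule -- and the forecast quietly improves by 50 days:
broken = dict(acts)
broken["end_to_end_test"] = (30, ["spacecraft_integ"])  # ground link dropped
print(analyze(broken)[0])  # 225 days: too optimistic
```

The same machinery underlies the total float and critical path findings in the GOES-R discussion later in this statement.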
While the program plans to refine its integrated master schedule, three component schedules supporting the JPSS-1 mission—the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument, the spacecraft, and the ground system—varied in their implementation of characteristics of high-quality, reliable schedules. Each schedule had strengths and weaknesses with respect to sound scheduling practices, but VIIRS was a stronger schedule with fewer weaknesses compared to the ground system and spacecraft schedules. The following table identifies the quality of each of the selected JPSS-1 component schedules based on the extent to which they met 10 best practices of high-quality and reliable schedules. The inconsistency in quality among the three schedules has multiple causes, including the lack of documented explanations for certain practices and schedule management and reporting requirements that varied across contractors. Since the reliability of an integrated schedule depends in part on the reliability of its subordinate schedules, schedule quality weaknesses in these schedules will transfer to an integrated master schedule derived from them. Consequently, the extent to which there are quality weaknesses in JPSS-1 support schedules further constrains the program’s ability to monitor progress, manage key dependencies, and forecast completion dates. Until the program office addresses the scheduling shortfalls in its component schedules, it will lack the information it needs to effectively monitor development progress, manage dependencies, and forecast the JPSS-1 satellite’s completion and launch. The JPSS program office used data from flight project component schedules as inputs when it recently conducted a schedule risk analysis on the JPSS-1 mission schedule (and launch date) through NASA’s joint cost and schedule confidence level (JCL) process. The JCL implemented by the JPSS program office represents a best practice in schedule management for establishing a credible schedule and reflects a robust schedule risk analysis conducted on key JPSS-1 schedule components. Based on the results of the JCL, the program office reports that its level of confidence in the JPSS-1 schedule is 70 percent and that it has sufficient schedule reserve to maintain a launch date of no later than March 2017. However, the program office’s level of confidence in the JPSS-1 schedule may be overly optimistic for two key reasons. First, the model that the program office used was based on flight project activities rather than an integrated schedule consisting of flight, ground, program office, and other activities relevant to the development and launch of JPSS-1. As a result, the JPSS program office’s confidence level projections do not factor in the ongoing scheduling issues that are impacting the ground project. Second, there are concerns regarding the spacecraft schedule’s quality, as identified above. Factoring in these concerns, the confidence level of the JPSS-1 satellite’s schedule and projected launch date would be lower. Until the program office conducts a schedule risk analysis on an integrated schedule that includes the entire scope of effort and addresses quality shortfalls of relevant component schedules, it will have less assurance of meeting the planned March 2017 launch date for JPSS-1.
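The JCL figure quoted above is, at its core, the output of a schedule risk simulation: sample uncertain activity durations many times and report the fraction of trials that finish by the target date. The sketch below illustrates the idea with invented serial activities and duration ranges; NASA's actual JCL process is considerably richer, treating cost and schedule jointly and modeling correlated risks.

```python
import random

random.seed(2013)

# Hypothetical remaining work, as (optimistic, most likely, pessimistic) months.
activities = [
    ("instrument deliveries",         (10, 12, 18)),
    ("spacecraft integration & test", (8, 10, 16)),
    ("ground system readiness",       (6, 8, 14)),
    ("launch campaign",               (2, 3, 5)),
]
DEADLINE_MONTHS = 40  # months from now until the planned launch date

def one_trial():
    # Triangular distributions are a common simple choice for duration risk.
    return sum(random.triangular(lo, hi, mode) for _, (lo, mode, hi) in activities)

TRIALS = 100_000
on_time = sum(one_trial() <= DEADLINE_MONTHS for _ in range(TRIALS))
print(f"Schedule confidence level: {on_time / TRIALS:.0%}")  # roughly 80% here
```

The report's caveat is visible in this framing: if whole classes of work (such as the ground project) are left out of the model, or if the input schedules are unreliable, the printed confidence level will overstate the real chance of making the date.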
In recent years, NOAA officials have communicated publicly and often about the risk of a polar satellite data gap. Currently, the program estimates that there will be a gap of about a year and a half between the time the current Suomi NPP satellite reaches the end of its expected lifespan and the time the JPSS-1 satellite will be in orbit and operational. Satellite data gaps in the morning or afternoon polar orbits would lead to less accurate and timely weather forecasting; as a result, advanced warning of extreme events—such as hurricanes, storm surges, and floods—would be affected. See figure 1 for a depiction of a potential gap in the afternoon orbit lasting 17 months. Government and industry best practices call for the development of contingency plans to maintain an organization’s essential functions in the case of an adverse event and to reduce or control negative impacts from such risks. In October 2012, in response to our earlier recommendations to establish mitigation plans, NOAA established a mitigation plan to address the impact of potential gaps in polar afternoon satellite data. This plan identifies alternatives for mitigating the risk of a 14- to 18-month gap in the afternoon orbit beginning in March 2016, between the current polar satellite and the JPSS-1 satellite. However, NOAA did not implement the actions identified in its mitigation plan and decided to identify additional alternatives. In October 2012, at the direction of the Under Secretary of Commerce for Oceans and Atmosphere (who is also the Administrator of NOAA), NOAA contracted for a detailed technical assessment of alternatives to mitigate the degradation of products caused by a gap in satellite data in the afternoon polar orbit. This assessment solicited input from experts within and outside of NOAA and resulted in a range of alternatives that included relying on existing polar satellites, making improvements to the forecast models, and relying on the use of a foreign satellite. By documenting its mitigation plan and conducting a study on additional alternatives, NOAA has taken positive steps towards establishing a contingency plan for handling the potential impact of satellite data gaps in the afternoon polar orbit. However, NOAA does not yet have a comprehensive contingency plan because it has not yet selected the strategies to be implemented or established procedures and actions to implement the selected strategies. In addition, there are shortfalls in the agency’s current plans as compared to government and industry best practices, such as not always identifying specific actions with defined roles and responsibilities, timelines, and triggers. Moreover, multiple steps remain in testing, validating, and implementing the contingency plan. NOAA officials stated that the agency is continuing to work on refinements to its gap mitigation plan and that they anticipate issuing an updated plan in fall 2013 that will reflect the additional alternatives. While NOAA expects to update its plan, the agency does not yet have a schedule for adding key elements—such as specific actions, roles and responsibilities, timelines, and triggers—for each alternative. Until NOAA establishes a comprehensive contingency plan that integrates its strategies and addresses the elements identified above, it may not be sufficiently prepared to mitigate potential gaps in polar satellite coverage.
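The 14- to 18-month range in NOAA's mitigation plan follows from simple date arithmetic over two assumptions: when S-NPP data end (March 2016 in the plan) and how long JPSS-1 needs on orbit after its planned March 2017 launch before its data are usable. In the sketch below, the 2-to-6-month checkout span is an assumption chosen to reproduce the plan's range:

```python
from datetime import date

def months_between(start, end):
    return (end.year - start.year) * 12 + (end.month - start.month)

gap_start = date(2016, 3, 1)  # assumed end of S-NPP data, per NOAA's plan
launch = date(2017, 3, 1)     # planned JPSS-1 launch

for checkout_months in (2, 6):  # assumed on-orbit checkout before data flow
    m = launch.month - 1 + checkout_months
    operational = date(launch.year + m // 12, m % 12 + 1, 1)
    print(f"{checkout_months}-month checkout -> "
          f"{months_between(gap_start, operational)}-month gap")
# 2-month checkout -> 14-month gap; 6-month checkout -> 18-month gap
```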
The GOES-R program has completed its design and made progress in building flight and ground components. Specifically, the program completed critical design reviews for the flight and ground projects and for the overall program between April and November 2012. The GOES-R flight components are in various stages leading up to the system integration review, with five of six completing a key environmental testing review. In addition, the program began building the spacecraft in February 2013. On the GOES-R core ground system, a prototype for the operations module was delivered in late 2012 and is now being used for initial testing and training. The program has also installed antenna dishes at NOAA’s primary satellite communications site, and completed two key reviews of antennas at the GOES remote backup site. After the completion of design, and as the spacecraft and instruments are developed, NASA plans to conduct several interim reviews and tests before proceeding to the next major program-level review, the system integration review. However, the program has delayed several key milestones. Over the past 12 to 18 months, both the flight and ground segments experienced delays in planned dates for programwide milestones. More recently, in August 2013, the program announced that it would delay the launch of the first two satellites in the program. Specifically, the launch of the GOES-R satellite would be delayed from October 2015 to the quarter ending March 2016, and the expected GOES-S satellite launch date would be delayed from February 2017 to the quarter ending June 2017. The GOES-R program is also experiencing technical issues on the flight and ground projects that could cause further schedule delays. For example, the electronics unit of the Geostationary Lightning Mapper flight instrument experienced problems during testing, which led the program office to delay the tests. The program is considering several options to address this issue, including using the electronics unit being developed for a later GOES-R satellite to allow key components to proceed with testing. If the issue cannot be resolved, it would affect the instrument’s performance. As a result, the program is also considering excluding the Geostationary Lightning Mapper from the first GOES-R satellite. It plans to make its decision on whether or not to include the instrument in late 2013. The removal of this instrument would cause a significant reduction in the satellite’s functionality. The program has reported that it is on track to stay within its $10.9 billion life cycle cost estimate. However, program officials reported that, while the program is currently operating without cost overruns on any of its main components, program life cycle costs may increase by $150 to $300 million if full funding in the current fiscal year is not received. While some improvements have been made, the GOES-R program continues to demonstrate weaknesses in the development of component schedules, which have the potential to cause further delays in meeting milestone timelines. Since our previous examination of program schedules in June 2012, the program has improved selected practices in its spacecraft and core ground schedules. For example, NOAA has since included all subcontractor activities in the core ground schedule and allocated a higher percentage of activities to resources in its schedules. As a result of these improvements, the program has increased the reliability of its schedules and decreased the risk of further delaying satellite launch dates due to incorrect schedule data.
However, the program’s performance on other scheduling best practices stayed the same or worsened. For example, both the spacecraft and core ground schedules have issues with sequencing remaining activities and integration between activities. Without the right linkages, activities that slip early in the schedule do not transmit delays to activities that should depend on them. Both schedules also have a very high average of total float time for detailed activities. Such high values of total float time can falsely depict true project status, making it difficult to determine which activities drive key milestone dates. Finally, the project’s critical path does not match up with activities that make up the driving path on the core ground schedule. Without a valid critical path to the end of the schedule, management cannot focus on activities that will have a detrimental effect on the key project milestones and deliveries if they slip. Taken together, delays in key milestones, technical issues, and weaknesses in schedule practices could lead to further delays in the launch date of the first GOES-R satellite, currently planned to occur by March 2016. Launch delays such as the one recently experienced by the GOES-R program also increase the time that NOAA is without an on-orbit backup satellite. This is significant because, in April 2015, NOAA expects to retire one of its operational satellites and move its back-up satellite into operations. The recent delay in the expected launch of the first GOES-R satellite from October 2015 to as late as March 2016 increases the projected gap in backup coverage to just short of two years. Also, the first satellite is now expected to complete its post-launch testing by September 2016, only five months before NOAA expects to retire the GOES-15 satellite. If the launch of the first satellite were to slip by more than five additional months, a gap in satellite coverage could occur. Figure 2 shows current anticipated operational and test periods for the two most recent series of GOES satellites. Because of the expected imminent use of the current on-orbit back-up satellite, a launch delay to GOES-R would also increase the potential for a gap in GOES satellite coverage should one of the two operational satellites (GOES-14 or -15) fail prematurely (see fig. 2)—a scenario given a 36 percent likelihood of occurring by an independent review team. Without a full complement of operational GOES satellites, the nation’s ability to maintain the continuity of data required for effective weather forecasting could be compromised. This, in turn, could put the public, property, and the economy at risk. The impact of a gap in satellite coverage may also increase based on issues with NOAA’s current contingency plans. Government and industry best practices call for the development of contingency plans to maintain an organization’s essential functions in the case of an adverse event. These practices include key elements such as identifying and selecting strategies to address failure scenarios, developing procedures to implement selected strategies, and involving affected stakeholders. NOAA has established contingency plans for the loss of its GOES satellites and ground systems that are generally in accordance with best practices. Specifically, NOAA identified failure scenarios, recovery priorities, and minimum levels of acceptable performance. NOAA provided a final version of its satellite plan in December 2012 that included scenarios for three, two, and one operational satellites.
It also established contingency plans that identify solutions and high-level activities and triggers to implement the solutions. However, these plans are missing key elements. For example, NOAA has not demonstrated that the contingency strategies for both its satellite and ground systems are based on an assessment of costs, benefits, and impact on users. Furthermore, NOAA did not work with the user community to address potential reductions in capabilities under contingency scenarios or identify alternative solutions for preventing a delay in the GOES-R launch date. In addition, while NOAA’s failure scenarios for its satellite system are based on the number of available satellites—and the loss of a backup satellite caused by a delayed GOES-R launch would fit into these scenarios—the agency did not identify alternative solutions or time lines for preventing a GOES-R launch delay. Until NOAA addresses the shortfalls in its contingency plans and procedures, the plans may not work as intended in an emergency and satellite data users may not obtain the information they need to perform their missions. Both the JPSS and GOES-R programs continue to carry risks of future launch delays and potential gaps in satellite coverage; implementing the recommendations in our accompanying reports should help mitigate those risks. In the JPSS report being released today, we recommend, among other things, that NOAA establish a complete JPSS program integrated master schedule that includes a logically linked sequence of activities; address the shortfalls in the ground system and spacecraft component schedules outlined in our report; after completing the integrated master schedule and addressing shortfalls in component schedules, update the joint cost and schedule confidence level for JPSS-1, if warranted and justified; and establish a comprehensive contingency plan for potential satellite data gaps in the polar orbit that is consistent with contingency planning best practices identified in our report. The plan should include, for example, specific contingency actions with defined roles and responsibilities, timelines, and triggers; analysis of the impact of lost data from the morning orbits; and identification of opportunities to accelerate the calibration and validation phase of JPSS-1. In the GOES-R report being released today, we recommend, among other things, that NOAA, given the likely gap in availability of an on-orbit GOES backup satellite in 2015 and 2016, (1) address the weaknesses identified in our report in the core ground system and spacecraft schedules, including, but not limited to, sequencing all activities, ensuring there are adequate resources for the activities, and conducting a schedule risk analysis; and (2) revise the satellite and ground system contingency plans to address weaknesses identified in our report, including providing more information on the potential impact of a satellite failure, identifying alternative solutions for preventing a delay in the GOES-R launch as well as time lines for implementing those solutions, and coordinating with key external stakeholders on contingency strategies. On both reports, NOAA agreed with our recommendations and identified steps it is taking to implement them. In summary, NOAA has made progress on both the JPSS and GOES-R programs, but key challenges remain to ensure that potential gaps in satellite data are minimized or mitigated.
On the JPSS program, NOAA has made noteworthy progress in using S-NPP data in weather forecasts and developing the JPSS-1 satellite. However, NOAA does not expect to validate key S-NPP products until nearly 3 years after the satellite’s launch, and there are remaining issues with the JPSS schedule that decrease the confidence that JPSS-1 will launch by March 2017 as planned. On the GOES-R program, progress in completing the system’s design has been accompanied by continuing milestone delays, including delays in the launch dates for both the GOES-R and GOES-S satellites. The potential for further milestone delays also exists due to remaining weaknesses in developing and maintaining key program schedules. Faced with an anticipated gap in the polar satellite program and a potential gap in the geostationary satellite program, NOAA has taken steps to study alternatives and establish mitigation plans. However, the agency does not yet have comprehensive contingency plans that identify specific actions with defined timelines and triggers. Until NOAA establishes comprehensive contingency plans that address these shortfalls, its plans for mitigating potential gaps may not be effective in avoiding significant impacts to its weather mission. Chairman Broun, Chairman Stewart, Ranking Member Maffei, Ranking Member Bonamici, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other key contributors include Colleen Phillips (assistant director), Shaun Byrnes, Lynn Espedido, Nancy Glover, Franklin Jackson, Joshua Leiling, and Meredith Raymond.
As requested, this statement summarizes two reports being released today on (1) the JPSS program's status and plans, schedule quality, and gap mitigation strategies, and (2) the GOES-R series program's status, requirements management, and contingency planning.
Among mandatory spending programs—and indeed tax expenditures—the health area is especially important because the long-term fiscal challenge is largely a health care challenge. Contrary to public perceptions, health care is the biggest driver of the long-term fiscal challenge. While Social Security is important because of its size, health care spending is both large and projected to grow much more rapidly. Our most recent simulation results illustrate the importance of health care in the long-term fiscal outlook as well as the imperative to take action. Simply put, our nation’s fiscal policy is on an imprudent and unsustainable course. These long-term budget simulations show, as do those published last December by the Congressional Budget Office (CBO), that over the long term we face a large and growing structural deficit due primarily to known demographic trends, rising health care costs, and lower federal revenues as a percentage of the economy. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path also will increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by future generations. Figures 3 and 4 present our long-term simulations under two different sets of assumptions. In figure 3, we start with CBO’s 10-year baseline—constructed according to the statutory requirements for that baseline. Consistent with these requirements, discretionary spending is assumed to grow with inflation for the first 10 years and tax cuts scheduled to expire are assumed to expire. After 2016, discretionary spending is assumed to grow with the economy, and revenue is held constant as a share of gross domestic product (GDP) at the 2016 level. In figure 4, two assumptions are changed: (1) discretionary spending is assumed to grow with the economy after 2006 rather than merely with inflation, and (2) all expiring tax provisions are extended. For both simulations, Social Security and Medicare spending is based on the 2005 Trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. Medicaid spending is based on CBO’s December 2005 long-term projections under mid-range assumptions. As these simulations illustrate, absent significant policy changes on the spending and/or revenue side of the budget, the growth in mandatory spending on federal retirement and especially health entitlements will encumber an escalating share of the government’s resources. Indeed, when we assume that all the temporary tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay only some Social Security benefits and interest on the federal debt. Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. Although revenues will be part of the debate about our fiscal future, assuming no changes to Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require at least a doubling of taxes—and that seems highly implausible. Economic growth is essential, but we will not be able to simply grow our way out of the problem. The numbers speak loudly: our projected fiscal gap is simply too great.
Closing the current long-term fiscal gap would require sustained economic growth far beyond that experienced in U.S. economic history since World War II. Tough choices are inevitable, and the sooner we act the better. Accordingly, substantive reform of the major health programs and Social Security is critical to recapturing our future fiscal flexibility. Ultimately, the nation will have to decide what level of federal benefits and spending it wants and how it will pay for these benefits. Our current fiscal path will increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by future generations. Continuing on this path will mean escalating and ultimately unsustainable federal deficits and debt that will serve to threaten our future national security as well as the standard of living for the American people. The aging population and rising health care spending will have significant implications not only for the budget, but also the economy as a whole. Figure 5 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2005 Trustees’ intermediate estimates and CBO’s 2005 long-term Medicaid estimates under mid-range assumptions, spending for these entitlement programs combined will grow to 15.7 percent of GDP in 2030 from today’s 8.4 percent. It is clear that, taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. Furthermore, most of the long-term growth is in health care. While Social Security in its current form will grow from 4.3 percent of GDP today to 6.4 percent in 2080, Medicare’s burden on the economy will quintuple—from 2.7 percent to 13.8 percent of the economy—and these projections assume a growth rate for Medicare spending that is below historical experience! As figure 5 shows, unlike Social Security, which grows larger as a share of the economy and then levels off, within this projection period we do not see Medicare growth abating.
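The quintupling is worth restating as an annual rate. Using only the shares cited in this statement, and treating today as 2006, the implied growth differential is a back-of-the-envelope calculation:

```python
# Shares of GDP cited in this statement.
medicare_now, medicare_2080 = 0.027, 0.138
years = 2080 - 2006

excess_growth = (medicare_2080 / medicare_now) ** (1 / years) - 1
print(f"Medicare's GDP share grows ~{excess_growth:.1%} per year")
# ~2.2% per year faster than the economy, compounded over 74 years,
# is enough to quintuple the program's claim on GDP.
```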
Whether or not the President’s Budget proposals on Medicare are adopted, they should serve to raise public awareness of the importance of health care costs to both today’s budget and tomorrow’s. This could serve to jump start a discussion about appropriate ways to control the major driver of our long-term fiscal outlook—health care spending. As noted, unlike Social Security, Medicare spending growth rates reflect not only a burgeoning beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. The growth of medical technology has contributed to increases in the number and quality of health care services. Moreover, the actual costs of health care consumption are not transparent. Consumers are largely insulated by third-party payers from the cost of health care decisions. The health care spending problem is particularly vexing for the federal budget, affecting not only Medicare and Medicaid but also other important federal health programs, such as those for our military personnel and veterans. For example, Department of Defense health care spending rose from about $12 billion in 1990 to about $30.4 billion in 2004—in part to meet additional demand resulting from program eligibility expansions for military retirees, reservists, and the dependents of those two groups and for the increased needs of active duty personnel involved in conflicts in Iraq, Bosnia, and Afghanistan. Expenditures by the Department of Veterans Affairs have also grown—from about $12 billion in 1990 to about $26.8 billion in 2004—as an increasing number of veterans look to federal programs to supply their health care needs. The challenge to rein in health care spending is not limited to public payers, however, as the phenomenon of rising health care costs associated with new technology exists system-wide. This means that addressing the unsustainability of health care costs is also a major competitiveness and societal challenge that calls for us as a nation to fundamentally rethink how we define, deliver, and finance health care in both the public and the private sectors. A major difficulty is that our current system does little to encourage informed discussions and decisions about the costs and value of various health care services. These decisions are very important when it comes to cutting-edge drugs and medical technologies, which can be incredibly expensive but only marginally better than other alternatives. As a nation, we are going to need to weigh unlimited individual wants against broader societal needs and decide how responsibility for financing health care should be divided among employers, individuals, and government. Ultimately, we may need to define a set of basic and essential health care services to which every American is ensured access. Individuals wanting additional services, and insurance coverage to pay for them, might be required to allocate their own resources. Clearly, such a dramatic change would require a long transition period—all the more reason to act sooner rather than later. In recent years, policy analysts have discussed a number of incremental reforms that take aim at moderating health care spending, in part by unmasking health care’s true costs. (See fig. 6 for a list of selected reforms.) Among these reforms is to devise additional cost-sharing provisions to make health care costs more transparent to patients. Currently, many insured individuals pay relatively little out of pocket for care at the point of delivery because of comprehensive health care coverage—precluding the opportunity to sensitize these patients to the cost of their care. Other reforms listed in figure 6 include the following:
Develop a set of national practice standards to help avoid unnecessary care, improve outcomes, and reduce litigation.
Encourage case management approaches for people with expensive acute and chronic conditions to improve the quality and efficiency of care delivered and avoid inappropriate care.
Foster the use of information technology to increase consistency, transparency, and accountability in health care.
Emphasize prevention and wellness care, including nutrition.
Leverage the government’s purchasing power to control costs for prescription drugs and other health care services.
Revise certain federal tax preferences for health care to encourage the more efficient use of appropriate care.
Create an insurance market that adequately pools risk and offers alternative levels of coverage.
Develop a core set of basic and essential services, with supplemental coverage being available as an option but at a cost.
Use the Federal Employees Health Benefits Program (FEHBP) model as a possible means to experiment and see the way forward.
Limit spending growth for government-sponsored health care programs (e.g., as a percentage of the budget and/or the economy).
Other steps include reforming the policies that give tax preferences to insured individuals and their employers. These policies permit the value of employees’ health insurance premiums to be excluded from the calculation of their taxable earnings and exclude the value of the premium from the employers’ calculation of payroll taxes for both themselves and employees. Tax preferences also exist for health savings accounts and other consumer-directed plans. These tax exclusions represent a significant source of forgone federal revenue and work at cross-purposes to the goal of moderating health care spending. As figure 7 shows, in 2005 the tax expenditure responsible for the greatest revenue loss was that for the exclusion of employer contributions for employees’ insurance premiums and medical care.
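The revenue at stake in that exclusion can be illustrated with a hypothetical worker; the premium and tax rates below are invented for the example, not drawn from figure 7:

```python
# Hypothetical worker with an employer-paid premium excluded from both the
# income and payroll tax bases; all figures are illustrative assumptions.
premium = 12_000            # annual employer-sponsored premium, dollars
marginal_income_tax = 0.25  # worker's federal marginal income tax rate
payroll_tax = 0.153         # combined employer and employee payroll tax rate

forgone_revenue = premium * (marginal_income_tax + payroll_tax)
print(f"Forgone federal revenue for this worker: ${forgone_revenue:,.0f}")
# ~$4,836 per year; because coverage is bought pre-tax, the worker also has
# an incentive to choose more comprehensive insurance than otherwise.
```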
Another area conducive to incremental change involves provider payment reforms. These reforms are intended to induce physicians, hospitals, and other health care providers to improve on quality and efficiency. For example, studies of Medicare patients in different geographic areas have found that despite receiving a greater volume of care, patients in higher use areas did not have better health outcomes or experience greater satisfaction with care than those living in lower use areas. Public and private payers are experimenting with payment reforms designed to foster the delivery of care that is proven to be both clinically and cost effective. Ideally, identifying and rewarding efficient providers and encouraging inefficient providers to emulate best practices will result in better value for the dollars spent on care. The development of uniform standards of practice could help ensure that people with chronic illnesses, a small but expensive population, receive more cost-effective and patient-centered care while reducing unwarranted medical malpractice litigation. The problem of escalating health care costs is complex because addressing federal programs such as Medicare and the federal-state Medicaid program will need to involve change in the health care system of which they are a part—not just within federal programs. This will be a major societal challenge that will affect all age groups. Because our health care system is complex, with multiple interrelated pieces, solutions to health care cost growth are likely to be incremental and require a number of extensive efforts over many years. In my view, taking steps to address the health care cost dilemma system-wide puts us on the right path for correcting the long-term fiscal problems posed by the nation’s health care entitlements. I have focused today on health care because it is a driver of our fiscal outlook. Indeed, health care is already putting a squeeze on the federal budget. Health care is the dominant but not the only driver of our long-term fiscal challenge. Today it is hard to think of our fiscal imbalances as a big problem: the economy is healthy and interest rates seem low. We, however, have an obligation to look beyond today. Budgets, deficits, and long-term fiscal and economic outlooks are not just about numbers: they are also about values. It is time for all of us to recognize our stewardship obligation for the future. We should act sooner rather than later. We all must make choices that may be difficult and unpleasant today to avoid passing an even greater burden on to future generations. Let us not be the generation who sent the bill for its consumption to its children and grandchildren. Thank you, Mr. Chairman, Mr. Spratt, and members of the Committee for having me today.
We at GAO, of course, stand ready to assist you and your colleagues as you tackle these important challenges.
This testimony discusses entitlement and other mandatory spending programs in light of our nation's long-term fiscal outlook and the challenges it poses for the budget and oversight processes. In our report entitled 21st Century Challenges: Reexamining the Base of the Federal Government, we presented illustrative questions for policy makers to consider as they carry out their responsibilities. These questions look across major areas of the budget and federal operations, including discretionary and mandatory spending, and tax policies and programs. We hope that this report, among other things, will be used by various congressional committees as they consider which areas of government need particular attention and reconsideration. Congress will also receive more specific proposals, some of which will be presented within comprehensive agendas. Our report provides examples of the kinds of difficult choices the nation faces with regard to discretionary spending; mandatory spending, including entitlements; as well as tax policies and compliance activities. Mandatory spending programs--like tax expenditures--are governed by eligibility rules and benefit formulas, which means that funds are spent as required to provide benefits to those who are eligible and wish to participate. Since Congress and the President must change substantive law to change the cost of these programs, they are relatively uncontrollable on an annual basis. Moreover, as we reported in a 1994 analysis, their cost cannot be controlled by the same "spending cap" mechanism used for discretionary spending. By their very nature, mandatories limit budget flexibility. Mandatory spending has grown as a share of the total federal budget. Under both the Congressional Budget Office baseline estimates and the President's Budget, this spending would grow further. While the long-term fiscal outlook is driven by Medicare, Medicaid, and Social Security, it does not mean that all other mandatory programs should be "given a pass." As we have noted elsewhere, reexamination of the "fit" between government programs and the needs and priorities of the nation should be an accepted practice. So in terms of budget flexibility--the freedom of each Congress and President to allocate public resources--we cannot ignore mandatory spending programs even if they do not drive the aggregate. While some might suggest that mandatory programs could be controlled by being converted to discretionary or annually appropriated programs, that seems unlikely to happen. If we look across the range of mandatories, we see many programs have objectives and missions that contribute to the achievement of a range of broad-based and important public policy goals, such as providing a floor of income security in retirement, fighting hunger, fostering higher education, and providing access to affordable health care. To these ends, these programs--and tax expenditures--were designed to provide benefits automatically to those who take the desired action or meet the specified eligibility criteria, without subjecting them to an annual decision regarding spending or the delay in the provision of benefits such a process might entail. Although mandatory spending is not amenable to "caps," that does not mean that mandatory programs should be permitted to be on autopilot and grow to an unlimited extent.
Since the spending for any given entitlement or other mandatory program is a function of the interaction between the eligibility rules and the benefit formula--either or both of which may incorporate exogenous factors such as economic downturns--the way to change the path of spending for any of these programs is to change those rules or formulas. We recently issued a report on "triggers"--some measure which, when reached or exceeded, would prompt a response connected to that program. By identifying significant increases in the spending path of a mandatory program relatively early and acting to constrain it, Congress may avert much larger and potentially disruptive financial challenges and program changes in the future.
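A trigger of the kind described above is conceptually simple: compare a program's projected spending path with a predefined threshold and prompt action on a breach. The sketch below uses hypothetical outlay shares and an illustrative threshold; an actual trigger design would fix the measure, the level, and the required response in law:

```python
# Hypothetical projected outlays for a mandatory program, as shares of GDP.
projection = {2007: 0.031, 2008: 0.033, 2009: 0.036, 2010: 0.040}
TRIGGER = 0.035  # illustrative trigger level set in advance

for year in sorted(projection):
    if projection[year] > TRIGGER:
        print(f"{year}: projected {projection[year]:.1%} of GDP exceeds the "
              f"{TRIGGER:.1%} trigger; review and corrective action prompted")
        break
else:
    print("No breach within the projection window")
```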
When USDA was established in 1862, more than half of the American workforce was engaged in farming. The Department’s objectives, as outlined by the first Commissioner of Agriculture, were to (1) collect, arrange, and publish statistical and other useful agricultural information; (2) introduce valuable plants and animals; (3) answer farmers’ inquiries on agriculture; (4) test agricultural implements; (5) conduct chemical analyses of soils, grains, fruits, plants, vegetables, and manures; (6) establish a professorship of botany and entomology; and (7) establish an agricultural library and museum. Since then, new needs and problems have caused USDA’s responsibilities to expand greatly. USDA’s current departmental mission is to enhance the quality of life for the American people by (1) supporting production agriculture; (2) ensuring a safe, affordable, nutritious, and accessible food supply; (3) caring for agricultural, forest, and range lands; (4) supporting the sound development of rural communities; (5) providing economic opportunities for farm and rural residents; (6) expanding global markets for U.S. agricultural and forest products and services; and (7) working to reduce hunger in America and throughout the world. To accomplish this overall mission, USDA has organized its agencies into seven mission areas: Farm and Foreign Agricultural Services; Food, Nutrition, and Consumer Services; Marketing and Regulatory Programs; Food Safety; Natural Resources and Environment; Research, Education, and Economics; and Rural Development. Appendixes I through VII describe USDA’s seven mission areas in more detail, including a description of each agency’s mission and activities, similarities to other federal agencies, and prior GAO reports discussing these similarities. Figure 1 shows, by mission area, how USDA funds were obligated in fiscal year 1997. Concerning organizational structure, we have reported that the number and diversity of USDA’s responsibilities create fundamental management problems for the Department. These include difficulties in the following areas: establishing a meaningful set of overarching departmentwide objectives because several of USDA’s current responsibilities are not related to one another or may conflict; managing a conglomerate of many independent agencies and offices; and effectively carrying out responsibilities, such as those in the food safety and food assistance areas, that are part of broader federal efforts shared among several federal agencies. We identified a number of similar activities performed by both USDA and other agencies through our analysis of USDA and other agencies’ budget functions, mission statements, strategic and annual performance plans, and other agency documents, as well as the U.S. Government Manual and past GAO reports. For example, food inspection services are provided by both USDA’s Food Safety and Inspection Service and the Department of Health and Human Services’ (HHS) Food and Drug Administration; land management activities are carried out by the Forest Service and three agencies within the Department of the Interior; and statistical activities are carried out by USDA’s National Agricultural Statistics and Economic Research Services and at least nine other federal agencies. These apparent similarities and others related to international trade, economic development, rural housing, and nutrition are discussed in greater detail below and in appendixes I through VII. USDA’s Food Safety and Inspection Service—in USDA’s food safety mission area (see app.
I)—regulates the safety, wholesomeness, and proper labeling of most domestic and imported meat and poultry sold for human consumption. The Food and Drug Administration, through its inspection activities, is similar to the Food Safety and Inspection Service (FSIS) in the way it carries out its responsibilities for ensuring that domestic and imported food products—except for most meats and poultry—are safe, sanitary, nutritious, and wholesome and are honestly labeled. We have reported that this division of responsibility is ineffective and inefficient and have recommended the formation of a single food safety agency. On August 25, 1998, the President issued an executive order establishing the President’s Council on Food Safety to develop a comprehensive strategic plan for federal food safety activities, including a coordinated food safety budget. USDA’s Forest Service—part of USDA’s Natural Resources and Environment mission area (see app. II)—is responsible for sustaining the health, productivity, and diversity of the nation’s forests and rangelands. At least three other federal agencies—the Bureau of Land Management, the Fish and Wildlife Service, and the National Park Service within the Department of the Interior—perform some similar land management activities. We have reported that the responsibilities of these four major federal land management agencies have grown more alike over time. Because these agencies perform numerous similar activities and have complex and sometimes conflicting laws governing their land management activities, we have concluded that these activities could be carried out more efficiently and effectively either by combining the agencies or by streamlining the existing structure through the coordination and integration of functions, activities, and field locations. USDA’s Foreign Agricultural Service—part of USDA’s Farm and Foreign Agricultural Services mission area (see app. III)—serves U.S. agriculture’s international interests by expanding export opportunities for U.S. agricultural, fish, and forest products. At least two other federal agencies are also involved in international trade. The Department of Commerce’s International Trade Administration promotes U.S. exports. The U.S. Trade and Development Agency—an independent federal agency—helps U.S. companies, including those involved in agriculture, pursue overseas business opportunities. We have reported that federal export activities are fragmented among several agencies and could better serve the nation’s business interests through closer cooperation. Currently, USDA is part of an interagency Trade Promotion Coordinating Committee, along with the Departments of State and Commerce, that has been charged with developing a governmentwide strategic plan for strengthening federal export promotion services. According to USDA officials, improvements in coordination have been made with the other agencies. USDA’s Rural Business-Cooperative Service (RBS)—part of the Department’s Rural Development mission area (see app. IV)—provides loans and grants for economic and business development in rural communities. At least four other agencies—the Department of Commerce’s Economic Development Administration, the Department of Housing and Urban Development (HUD), the Small Business Administration, and the Appalachian Regional Commission—provide similar services. All of these agencies provide loans and/or grants for the economic development of communities throughout the nation.
However, while the activities of some of these agencies, such as RBS, are national in scope, others have a more narrowly focused clientele. For example, the Appalachian Regional Commission supports economic development only in Appalachia. USDA's Rural Housing Service—part of USDA's Rural Development mission area (see app. IV)—provides direct and guaranteed housing loans to borrowers in rural communities. HUD and the Department of Veterans Affairs perform similar activities, but their clienteles are somewhat different. We have reported that although a number of other federal programs share in HUD's mission to assist households that may be underserved by the private market, none reach as many households as does HUD's Federal Housing Administration (FHA). USDA's National Agricultural Statistics Service (NASS)—in USDA's Research, Education, and Economics mission area (see app. V)—is responsible for serving agriculture and its rural communities by providing objective statistical information and services. There are 11 principal federal statistical agencies, including NASS and USDA's Economic Research Service. We have reported that while this decentralized system contributes to inefficiency, consolidating this function could result in diminished responsiveness to some customers and possible objections to the concentration of data in a single agency. USDA's Food and Nutrition Service—part of the Department's Food, Nutrition, and Consumer Services mission area (see app. VI)—provides children and needy families with access to a more healthful diet through its food assistance programs and nutrition education efforts. HHS performs some similar food assistance and nutrition education activities. For example, HHS' Maternal and Child Health Bureau provides nutrition education activities that are similar to those of FNS' Special Supplemental Nutrition Program for Women, Infants, and Children. Both agencies provide funding to the states to meet the nutritional and developmental needs of mothers and children. In addition, both HHS and FNS conduct similar activities to improve the nutrition of the elderly. We have reported that one alternative for reducing costs and streamlining operations in USDA would be to consolidate the meal programs for the elderly in HHS, thereby placing funding responsibility with the agency that already provides most of the funding and has overall oversight responsibility for meal programs for the elderly. The Government Performance and Results Act of 1993 seeks to focus government decision-making and accountability on the results of activities. The act requires federal agencies to prepare annual performance plans, including an explanation of how similar activities will be coordinated with other agencies. As we reported in June 1998, while the plans of most of USDA's component agencies at least partially discussed the need to coordinate with agencies having related strategic or performance goals, many of these fiscal year 1999 annual performance plans did not explain how this coordination would be accomplished. For example, although the Forest Service's performance plan emphasized efforts to ensure sustainable ecosystems, it did not discuss how the Service would coordinate its efforts with those of other agencies having a similar goal, including the Natural Resources Conservation Service; the Environmental Protection Agency (EPA); the Department of the Interior; state conservation agencies; or environmental, timber, and industrial organizations.
We used several methods to identify similar activities at USDA and other federal agencies. We compared USDA’s expenditures by budget function and subfunction with those of other federal agencies. We also compared USDA’s missions, objectives, and goals with those of other departments, as identified in the departments’ strategic and performance plans. We reviewed the U.S. Government Manual, agency documents, and other pertinent documents to determine other activities agencies conduct. Finally, we reviewed prior GAO reports that dealt with these particular agencies and issues. Individually, these methods have some limitations, as discussed below. However, collectively, these methods allowed us to identify most of the more significant similarities between USDA and other agencies. Budget function and subfunction classifications are intended to provide a means of identifying budget data according to the major purpose served. Since 1979, the Office of Management and Budget has tried to use subfunctions to more discretely portray the missions of the federal government. However, in some cases, this process aggregates very different activities. For example, USDA’s Food Safety and Inspection Service is categorized under Consumer and Occupational Health and Safety along with other agencies, such as the Consumer Product Safety Commission and the Department of Labor’s Mine Safety and Health Administration, which have no activities related to food safety. This process also leaves out agencies that previous GAO reports have identified as conducting activities concerning food safety, such as the Centers for Disease Control and Prevention and EPA. We also reviewed USDA’s and some of the other agencies’ strategic and performance plans to identify similar activities. While we found similar missions and objectives, this review did not produce information on whether the activities these agencies performed were similar. For example, while the Forest Service and the Bureau of Land Management have a very similar mission, only some of the activities conducted by these agencies are similar. Finally, we reviewed the U.S. Government Manual, agency documents, other relevant documents, and prior GAO reports to supplement the information we found from our analysis of budgets and strategic plans. We have written a number of reports on selected aspects of the responsibilities and performances of USDA’s agencies. We discuss many of these reports’ findings on similarities in agencies’ activities in appendixes I through VII. Our analysis highlights many of the activities that are apparently similar but does not determine all of the similar activities nor the extent of any overlap. To make such determinations would require a substantially more detailed analysis, which was beyond the scope of our review. We conducted our work from June 1998 through December 1998 in accordance with generally accepted government auditing standards. We met with USDA officials, including the Director of Budget and officials from related mission areas. USDA generally agreed with our presentation of the agencies’ activities. However, the officials expressed concern that the report could be somewhat misleading. They believed that USDA’s activities were different from the activities of other federal agencies in terms of the clientele served and the precise services or assistance provided. 
They suggested that the report's presentation could be improved by clarifying the definition of similar activities, the extent to which these similar activities were part of the agencies' overall mission, and the differences in the clientele served by the agencies. They also suggested some technical changes. We made modifications to the report as appropriate to reflect these concerns and suggestions, including clarifying that the activities we classify as similar may not be directed at the same clientele and may also be only a part of the overall mission of the other federal agencies. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will provide copies of this report to the Senate Committee on Agriculture, Nutrition, and Forestry and the House Committee on Agriculture; other interested congressional committees; the Secretary of Agriculture; and the Director, Office of Management and Budget. We will also make copies available to others on request. Please call me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix VIII. The Food Safety Mission Area includes the Food Safety and Inspection Service (FSIS). This mission area represented 1 percent of USDA's fiscal year 1997 budget. FSIS' mission is to ensure that meat, poultry, and egg products are wholesome, unadulterated, and properly labeled and packaged. FSIS conducts inspections at meat, poultry, and other processing plants. The U.S. Department of Agriculture (USDA) has been involved in food safety since the late 1800s, when it began investigating food adulteration. Many FSIS activities originated with the Meat Inspection Act of 1906, which was passed in response to unsanitary conditions in meat-packing houses. The Food and Drug Administration (FDA) also conducts inspections at food processing plants—except for most meat and poultry plants—to ensure that food products are safe, sanitary, nutritious, and wholesome and are honestly labeled. We have reported that this division of responsibility is ineffective and inefficient and have recommended the formation of a single food safety agency. On August 25, 1998, the President issued an executive order establishing the President's Council on Food Safety to develop a comprehensive strategic plan for federal food safety activities, including a coordinated food safety budget. For further information, see Food Safety: Weak and Inconsistently Applied Controls Allow Unsafe Imported Food to Enter U.S. Commerce (GAO/T-RCED-98-271, Sept. 10, 1998); Food Safety: Opportunities to Redirect Federal Resources and Funds Can Enhance Effectiveness (GAO/RCED-98-224, Aug. 6, 1998); Food Safety: Federal Efforts to Ensure Imported Food Safety Are Inconsistent and Unreliable (GAO/T-RCED-98-191, May 14, 1998); Food Safety: Federal Efforts to Ensure the Safety of Imported Foods Are Inconsistent and Unreliable (GAO/RCED-98-103, Apr. 30, 1998); and Food Safety: Fundamental Changes Needed to Improve the Nation's Food Safety System (GAO/T-RCED-98-24, Oct. 8, 1997). The Natural Resources and Environment Mission Area is composed of the Forest Service (FS) and the Natural Resources Conservation Service (NRCS). This mission area represented 6 percent of USDA's fiscal year 1997 budget. FS' mission is to sustain the health, productivity, and diversity of the nation's forests and rangelands.
FS uses multiple-use management of these lands to produce sustained yields for renewable resources such as wood, water, forage, and wildlife, and to provide recreation to meet the diverse needs of people. FS also conducts research, provides assistance to state and private landowners, assesses the nation's natural resources, and provides international assistance and scientific exchanges. FS was formed in 1905 when jurisdiction over the national forests was transferred from the Department of the Interior to USDA. The Bureau of Land Management, the Fish and Wildlife Service, and the National Park Service—all within the Department of the Interior—perform some land management activities that have become similar over time to those conducted by FS, as we have reported. Because the agencies perform numerous similar activities and have complex and sometimes conflicting laws governing their land management activities, we have concluded that these activities could be carried out more efficiently and effectively either by combining the agencies or by streamlining the existing structure through the coordination and integration of functions, activities, and field locations. For further information, see Forest Service Decision-Making: A Framework for Improving Performance (GAO/RCED-97-71, Apr. 29, 1997); Federal Land Management: Streamlining and Reorganization Issues (GAO/T-RCED-96-209, June 27, 1996); and Ecosystem Management: Additional Actions Needed to Adequately Test a Promising Approach (GAO/RCED-94-111, Aug. 16, 1994). NRCS' mission is to assist farmers and ranchers in protecting soil, water, and related resources while sustaining the profitable production of food and fiber. The activities of NRCS, formerly the Soil Conservation Service, include providing technical assistance to individuals; communities; watershed groups; tribal governments; federal, state, and local agencies; and others. NRCS also develops conservation standards, which are specifications and guidelines to ensure that the conservation systems recommended to landowners and communities nationwide are technically sound. The Department of the Interior's Bureau of Reclamation performs some activities similar to NRCS'. The Bureau's activities include the management, development, and protection of water and related resources. In addition, we have reported that 72 federal programs directly or indirectly support water quality protection, including the Environmental Quality Incentives Program, administered by NRCS, which provides cost-share payments to landowners for, among other things, the protection of water and related resources. For further information, see Water Quality: A Catalog of Related Federal Programs (GAO/RCED-96-173, June 19, 1996). The Farm and Foreign Agricultural Services Mission Area includes the Foreign Agricultural Service (FAS), the Farm Service Agency (FSA), and the Risk Management Agency (RMA). This mission area represented 27 percent of USDA's fiscal year 1997 budget. FAS' mission is to serve U.S. agriculture's interests by expanding export opportunities for U.S. agricultural, fish, and forest products and promoting world food security. Established as an agency in 1953, FAS administers a variety of export promotion, technical, and food assistance programs around the world in cooperation with other federal, state, and local agencies as well as private sector and international organizations.
FAS also collects, analyzes, and disseminates agricultural information about global supply and demand, trade trends, and emerging market opportunities. The Department of Commerce's International Trade Administration and the U.S. Trade and Development Agency perform some activities similar to FAS'. The International Trade Administration promotes U.S. exports and U.S. businesses' access to foreign markets on behalf of all U.S. business interests, not just agriculture. The Trade and Development Agency assists in creating jobs for Americans by helping U.S. companies, including those involved in agriculture, pursue overseas business opportunities. We have reported that federal export activities are fragmented among several agencies and could better serve the nation's business interests through closer cooperation. USDA is part of an interagency Trade Promotion Coordinating Committee, along with the Departments of State and Commerce, that has been charged with developing a governmentwide strategic plan for strengthening federal services to promote exports. For further information, see Export Promotion: Governmentwide Plan Contributes to Improvements (GAO/T-GGD-94-35, Oct. 26, 1993); Export Promotion: Initial Assessment of Governmentwide Strategic Plan (GAO/T-GGD-93-48, Sept. 29, 1993); and Export Promotion: Governmentwide Strategy Needed for Federal Programs (GAO/T-GGD-93-7, Mar. 25, 1993). FSA's mission is to ensure the well-being of American agriculture and the American public through the administration of programs for farm commodities, farm loans, conservation, emergency assistance, and domestic and international food assistance. A number of these programs can be traced to the Great Depression, when many farmers were struggling to survive financially, in part because high productivity was lowering the prices they received for their crops. These programs were designed to help raise agricultural prices, increase farm income, and improve the quality of life in rural America. While most of FSA's activities are not similar to those of other federal agencies, its farm lending services are in some ways similar to those of the Farm Credit System. The Farm Credit System is a federally chartered network of borrower-owned lending institutions and related service organizations. These lending institutions specialize in providing credit-related services to creditworthy farmers, ranchers, and producers. FSA lends to farmers who do not qualify for loans from the Farm Credit System and other commercial lenders. In addition, the Federal Emergency Management Agency and the Small Business Administration conduct some similar disaster assistance activities, but their clienteles are different. The Federal Emergency Management Agency provides low-interest loans following natural disasters to cover expenses not covered by state or local programs or private insurance. Similarly, the Small Business Administration has several programs to help businesses and homeowners recover from disasters. For example, its Economic Injury Disaster Loans program provides working capital to small businesses and agricultural cooperatives to assist them in recovering from disasters. RMA's mission is to provide and support cost-effective means for managing risk for agricultural producers in order to improve the economic stability of agriculture. RMA provides producers with a variety of crop and revenue insurance programs through the Federal Crop Insurance Corporation (FCIC).
These programs are offered primarily through private companies that contract with and are reinsured by FCIC. Typically, federal crop insurance covers unavoidable production losses resulting from any adverse weather conditions, including drought, excessive rain, hail, wind, hurricanes, tornadoes, and lightning. In some cases, it also covers unavoidable losses as a result of insect infestation, plant disease, floods, fires, and earthquakes. While other federal agencies provide other types of insurance, such as flood insurance, no other federal agencies provide crop insurance. The Rural Development Mission Area consists of the Rural Housing Service (RHS), Rural Business-Cooperative Service (RBS), and the Rural Utilities Service (RUS). This mission area represented 13 percent of USDA's fiscal year 1997 budget. RHS' mission is to enhance the quality of life in rural America and help build competitive, vibrant rural communities through its community facilities and housing programs. RHS administers direct and guaranteed housing loan programs for moderate- and low-income rural residents, as well as grants to public and quasi-public organizations, nonprofit associations, and certain Indian tribes, for essential community facilities, such as health care, public safety, and public service. With the passage of the Housing Act of 1949, USDA was authorized to provide loans to help farmers build or repair houses and other farm buildings. Over time, the act has been amended to authorize housing loans and grants to rural residents in general. The Department of Housing and Urban Development (HUD) and the Department of Veterans Affairs conduct some activities similar to RHS'. While all these agencies provide affordable housing, their clienteles are somewhat different. For example, HUD provides loans primarily to individuals in urban areas, Veterans Affairs to veterans, and RHS to rural communities. We have reported that although a number of other federal programs share HUD's mission to assist households that may be underserved by the private market, none reach as many households as HUD's Federal Housing Administration. For further information, see Rural Housing Programs: Opportunities Exist for Cost Savings and Management Improvement (GAO/RCED-96-11, Nov. 16, 1995); and Homeownership: FHA's Role in Helping People Obtain Home Mortgages (GAO/RCED-96-123, Aug. 13, 1996). RBS' mission is to provide leadership in building competitive businesses and sustainable cooperatives that can prosper in the global marketplace. RBS invests its financial resources and technical assistance in businesses and cooperatives and builds partnerships to leverage public, private, and cooperative resources to create jobs and stimulate rural economic activity. The Department of Commerce's Economic Development Administration, HUD, and several independent agencies, such as the Small Business Administration and the Appalachian Regional Commission, conduct some activities similar to RBS'. All of these agencies provide loans and/or grants for the economic development of communities throughout the nation. However, while the activities of some of these agencies, such as RBS, are national in scope, others have a more narrowly focused clientele. For example, the Appalachian Regional Commission supports economic development only in Appalachia. For further information, see Economic Development Activities: Overview of Eight Federal Programs (GAO/RCED-97-193, Aug.
28, 1997); Economic Development: Limited Information Exists on the Impact of Assistance Provided by Three Agencies (GAO/RCED-96-103, Apr. 3, 1996); Economic Development Programs (GAO/RCED-95-251R, July 28, 1995); Rural Development: Federal Programs That Focus on Rural America and Its Economic Development (GAO/RCED-89-56R, Jan. 19, 1989); and Rural Development: Availability of Capital for Agriculture, Business, and Infrastructure (GAO/RCED-97-109, May 27, 1997). RUS' mission is to play a leading role in improving the quality of life in rural America by administering its electric, telecommunications, and water and waste programs. RUS' activities include providing loans and grants primarily to (1) electric and telephone cooperatives to deliver electric and telecommunications services to rural areas and (2) public bodies and nonprofit associations to provide water and wastewater disposal services. These activities originated in the 1930s when only 13 percent of U.S. farms had electricity, only 34 percent had any form of telephone service, and many rural communities did not have safe drinking water. A number of other federal agencies conduct some similar telecommunications and wastewater activities to support rural communities. The Departments of Commerce, Defense, Education, Health and Human Services (HHS), Justice, and Veterans Affairs, as well as the National Aeronautics and Space Administration (NASA), National Science Foundation, and Appalachian Regional Commission conduct or sponsor telecommunications activities, including distance learning and/or telemedicine initiatives; and EPA, HUD, HHS, and Commerce provide federal funding and technical assistance to help small communities plan, design, and build water and wastewater systems. In addition, although not in the form of federal assistance, the Department of Energy's Power Marketing Administrations—such as the Bonneville Power Administration—and the Tennessee Valley Authority sell electricity to rural communities. We previously reported that in December 1995 at least 28 federal programs administered by 15 federal agencies provided funds that were either specifically designated for telecommunication projects in rural areas or could be used for that purpose. In 1995, we reported that 17 different programs administered by eight federal agencies provided funds that were designed specifically for, or that could be used by, rural areas for constructing, expanding, or repairing water and wastewater facilities. For further information, see Rural Development: Financial Condition of the Rural Utilities Service's Loan Portfolio (GAO/RCED-97-82, Apr. 11, 1997); Rural Utilities Service: Opportunities to Operate Electricity and Telecommunications Loan Programs More Effectively (GAO/RCED-98-42, Jan. 21, 1998); Federal Electricity Activities: The Federal Government's Net Cost and Potential for Future Losses, Volumes 1 and 2 (GAO/AIMD-97-110 and GAO/AIMD-97-110A, Sept. 19, 1997); Rural Development: Steps Towards Realizing the Potential of Telecommunications Technologies (GAO/RCED-96-155, June 14, 1996); Rural Development: Patchwork of Federal Water and Sewer Programs Is Difficult to Use (GAO/RCED-95-160BR, Apr. 13, 1995); and Telemedicine: Federal Strategy Is Needed to Guide Investments (GAO/NSIAD/HEHS-97-67, Feb. 14, 1997).
The Research, Education, and Economics Mission Area includes the National Agricultural Statistics Service (NASS), the Agricultural Research Service (ARS), the Cooperative State Research, Education, and Extension Service (CSREES), and the Economic Research Service (ERS). This mission area represented 3 percent of USDA's fiscal year 1997 budget. NASS' mission is to serve U.S. agriculture and its rural communities by providing objective statistical information and services. NASS collects and disseminates agricultural statistics, including the Census of Agriculture. NASS carries out many of its activities with the support of state departments of agriculture, land-grant universities, and the agricultural industry through cooperative agreements that provide financial support and are also designed to prevent duplication of effort in acquiring data from farmers and in setting estimates of states' agricultural production. At least 10 other agencies in the federal government (including ERS within USDA) conduct some activities related to statistics. We have reported that while this decentralized system contributes to inefficiency, the consolidation of this function could result in diminished responsiveness to some customers and possible objections to the concentration of data in a single agency. For further information, see Statistical Agencies: Consolidation and Quality Issues (GAO/T-GGD-97-78, Apr. 9, 1997). ARS, USDA's principal in-house research agency, has as its primary mission conducting research to develop and transfer solutions to agricultural problems of high national priority. The research is designed to (1) ensure the quality and safety of food and other agricultural products, (2) assess the nutritional needs of Americans, (3) sustain a competitive agricultural economy, (4) enhance the natural resource base and the environment, and (5) provide economic opportunities for rural citizens, communities, and society as a whole. While other federal agencies—the Departments of Commerce, Defense, Energy, HHS, the Interior, and Transportation, as well as EPA and NASA—conduct research activities, none perform similar agricultural research activities. In a 1995 review of federal research laboratories, we found 515 separate federal research and development laboratories, including those operated by contractors, in 17 federal departments and agencies. USDA reported the largest number of laboratories (185). However, laboratories for Defense, Energy, HHS, and NASA accounted for 88 percent of the funding. For further information, see Federal R & D Laboratories (GAO/RCED/NSIAD-96-78R, Feb. 29, 1996). Unlike ARS, which performs research, CSREES administers grants for agricultural research, extension, and higher education at colleges, universities, and other institutions—both public and private—around the nation. CSREES provides funding to scientists to support research on such matters as biological, environmental, physical, and social sciences relevant to agriculture and food and the environment. We have reported that CSREES provides research funds for activities in which other federal agencies are also involved, such as water quality protection and enhancement. For further information, see Water Quality: A Catalog of Related Federal Programs (GAO/RCED-96-173, June 19, 1996). ERS' mission is to provide economic analysis on issues related to agriculture, food, the environment, and rural development to assist public and private decision makers.
ERS’ mission has its antecedents in USDA’s efforts in the early 1900s to examine farm management issues, reflecting a new interest in economic questions relating to agriculture. Other federal agencies conduct economic analysis. However, ERS is the primary agency that analyzes agricultural activities. This mission area includes the Food and Nutrition Service (FNS), which administers 15 domestic food assistance programs, and the Center for Nutrition Policy and Promotion (CNPP), which coordinates nutrition policy in USDA. This mission area represented 47 percent of USDA’s fiscal year 1997 budget. FNS’ mission is to provide children and needy families with access to a more healthful diet through its food assistance programs and nutrition education. To carry out this mission, FNS administers 15 separate domestic food assistance programs—the largest being the Food Stamp Program, which provides employment and training as well as nutrition assistance—in partnership with the states. HHS conducts some similar food assistance and nutrition activities. For example, HHS’ Maternal and Child Health Bureau provides nutrition education activities that are similar to FNS’ program—the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Both agencies provide funding to the states to meet the nutritional and developmental needs of mothers and children. In addition, both HHS and FNS conduct similar activities to improve the nutrition of the elderly. FNS provides subsidies—cash and/or commodity food reimbursements—to nutrition programs that provide meals to the elderly in a group setting or in their home, while HHS, for the most part, administers the program and provides most of the funding for these programs. We previously reported that one alternative to reducing costs and streamlining operations in USDA would be to consolidate the meal programs for the elderly in HHS, thereby giving the funding responsibility to the agency that provides the most funding and has overall oversight responsibilities for meal programs for the elderly. Furthermore, like FNS’ Food Stamp Program, other federal programs provide employment and training programs. These programs include, for example, HHS’ Temporary Assistance for Needy Families, the Department of Labor’s Job Training Partnership Act Program, and HUD’s Family Self-Sufficiency Program. We have reported that one way to reduce the cost of the Food Stamp Program would be to eliminate its employment and training component since the services could be provided by other existing employment and training programs. For further information, see Food Assistance: USDA’s Multiprogram Approach (GAO/RCED-94-33, Nov. 24, 1993); Food Assistance Programs (GAO/RCED-95-115R, Feb. 28, 1995); Multiple Employment Training Programs: Major Overhaul Needed to Create a More Efficient, Customer-Driven System (GAO/T-HEHS-95-70, Feb. 6, 1995). CNPP is responsible for improving the nutritional status of Americans by serving as the focal point within USDA for linking scientific research to the consumer. CNPP develops and coordinates nutrition policy within USDA, assesses the cost-effectiveness of government-sponsored nutrition programs, periodically reports on the cost of family food plans and of raising children, investigates techniques for communicating effectively with Americans about nutrition, and evaluates the nutrient content of the U.S. food supply. 
While other federal agencies conduct, or contract to conduct, nutrition research projects, CNPP actually translates nutrition research into materials for health professionals, corporations, and consumers. For further information, see Food Assistance: Information on USDA's Research Activities (GAO/RCED-98-56R, Jan. 29, 1998). The Marketing and Regulatory Programs Mission Area consists of three agencies—Agricultural Marketing Service (AMS), Animal and Plant Health Inspection Service (APHIS), and Grain Inspection, Packers and Stockyards Administration (GIPSA). This mission area represented about 2 percent of USDA's fiscal year 1997 budget. AMS' mission is to facilitate the strategic marketing of agricultural products in domestic and international markets, ensure fair trading practices, and promote a competitive and efficient marketplace to the benefit of producers, traders, and consumers of U.S. food and fiber products. To carry out its mission, AMS engages in a number of activities, such as collecting and disseminating time-sensitive agricultural market information, grading and certifying the quality of agricultural commodities, overseeing industry-financed research and promotion programs, implementing national organic production and labeling standards, and administering the milk marketing order program. The agency also administers a regulatory program covering dealers in the fruit and vegetable industry to promote fair trading. Three other federal agencies perform some activities similar to AMS'. The National Marine Fisheries Service in the Department of Commerce conducts, on a fee-for-service basis, a voluntary seafood inspection and grading program that focuses on marketing and the quality attributes of U.S. fish and shellfish. The National Institute of Standards and Technology, also in the Department of Commerce, promotes overall U.S. economic growth by working with industry to develop and apply technology, measurements, and standards, although it has no specific responsibilities in the agricultural area, and it does not provide grading services as AMS does. The Federal Trade Commission, an independent agency, also administers regulatory programs to promote fair trading practices, but its programs are aimed at protecting consumers rather than dealers. APHIS' mission is to anticipate and respond to issues involving animal and plant health, conflicts with wildlife, environmental stewardship, and animal well-being. APHIS regulates the import of agricultural products into the United States to reduce the risk posed by exotic pests and diseases; monitors animal and plant health to detect endemic and exotic diseases and pests; conducts regulatory activities to ensure the humane care of animals used in research, exhibition, or the wholesale pet trade; provides federal leadership in managing problems caused by animal pests and diseases and wildlife; and ensures that veterinary biological products are safe, pure, potent, and effective. Its core functions and activities originated in the 1880s after outbreaks of contagious animal diseases led to the barring of U.S. meat from some European markets. FDA's Center for Veterinary Medicine evaluates and approves animal drug products to protect animal and human health.
GIPSA, which is made up of the former Federal Grain Inspection Service and the former Packers and Stockyards Administration, has as its mission facilitating the marketing of livestock, poultry, meat, cereals, oilseeds, and related agricultural products and promoting fair and competitive trading practices for the overall benefit of consumers and American agriculture. GIPSA sets quality standards, provides inspection and weighing services, and enforces the Packers and Stockyards Act. This act protects members of the livestock, poultry, and meat industries against unfair or monopolistic practices. It also protects consumers against unfair business practices in the marketing of meats and poultry. Two other federal agencies perform some activities similar to GIPSA's. As discussed earlier, the Department of Commerce's National Institute of Standards and Technology promotes overall U.S. economic growth by working with industry to develop and apply technology, measurements, and standards, but it has no specific responsibilities in the agricultural area and does not carry out actual weighing and grading activities as does GIPSA. The Federal Trade Commission also enforces laws to prevent fraud, deception, and unfair business practices and to prevent anticompetitive mergers and other anticompetitive business practices in the marketplace, activities that are similar to the packers and stockyards activities performed by GIPSA. Major contributors to this report: Ronald E. Maxon, Jr., Assistant Director; Fred Light; Renee McGhee-Lenart; Paul Pansini; Carol Herrnstadt Shulman; and Janice Turner.
Pursuant to a congressional request, GAO provided information on the Department of Agriculture (USDA) activities that are similar to the activities conducted by other federal agencies and discussed USDA's efforts to comply with the requirements of the Government Performance and Results Act. GAO noted that: (1) many of USDA's activities appear to be similar to those of other federal agencies; (2) for example, food inspection services are provided by both USDA's Food Safety and Inspection Service and the Department of Health and Human Services' Food and Drug Administration; (3) GAO has reported on the fundamental management problems some of these similarities create for USDA and has, in some cases, recommended organizational changes; (4) for example, some of the land management activities of USDA's Forest Service and of the Department of the Interior's Bureau of Land Management, National Park Service, and Fish and Wildlife Service are similar; (5) GAO has reported that land management activities could be carried out more efficiently and effectively either by combining these agencies or by coordinating and integrating their functions, activities, and field locations; (6) the Results Act was designed in part to help address apparent similarities in agencies' activities by requiring federal agencies to prepare annual performance plans; and (7) however, as GAO reported in June 1998, while most of USDA's component agencies' plans at least partially discussed the need to coordinate with the agencies having related strategic or performance goals, the Department's fiscal year 1999 annual performance plan did not explain how USDA agencies are coordinating crosscutting issues both within and outside the Department.
VA pays monthly disability compensation to veterans with service-connected disabilities (i.e., injuries or diseases incurred or aggravated while on active military duty) according to the severity of the disability. VA also pays additional compensation for some dependents—spouses, children, and parents—of veterans. In addition, VA's pension program pays benefits to low-income veterans who either are elderly or have disabilities unrelated to their military service. In fiscal year 2008, the disability compensation program represented 78 percent, or $30.7 billion, of the cash benefits paid through VBA's Compensation and Pension Service. VA's disability compensation claims process starts when a veteran submits a claim to VBA (see fig. 1). Upon receiving the claim at 1 of VBA's 57 regional offices, a service representative assists the veteran in gathering the relevant evidence to evaluate the claim. Such evidence includes veterans' military service records, medical examinations, and treatment records from VA medical facilities and private medical service providers. Also, if necessary for reaching a decision on a claim, the regional office arranges for the veteran to receive a medical examination. Once a claim has all of the necessary evidence, a rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If so, the rating specialist assigns a percentage rating. Veterans with multiple disabilities receive a single composite rating. Veterans can reopen claims for additional benefits from VA if, for example, a service-connected disability worsens or arises in the future. If the veteran disagrees with the regional office's decision, he or she may submit a written notice of disagreement to the regional office. In response to such a notice, VBA reviews the case and provides the veteran with further written explanation of the decision if VBA does not grant all appealed issues. If the veteran still disagrees, he or she may appeal to the Board. Before transferring the appeal to the Board, VBA re-reviews the case and, if any new information is obtained, provides a new explanation of the decision to the veteran. The Board, whose members are attorneys experienced in veterans' law and in reviewing benefit claims, conducts a hearing if the veteran requests one, then grants or denies the appeal or returns the case to VBA to obtain additional evidence necessary to decide the veteran's claim. If the veteran is dissatisfied with the Board's decision, he or she may appeal to the U.S. Court of Appeals for Veterans Claims. To improve workload controls and the timeliness and accuracy of its decisions, in fiscal year 2002, VBA organized its claims processing staff by teams that perform distinct phases of the claims and appeals processes (see table 1). In moving toward this organizational structure, VBA sought to reduce the number of tasks a veteran service representative was expected to perform and thereby improve its performance. VA measures its performance related to compensation claims and appeals processing in various ways and considers the timeliness and quality of its decisions as key indicators. One way that VBA and the Board assess the timeliness of their work is by using a joint measure that considers the average time it takes appeals to be resolved, regardless of whether they are resolved by VBA or the Board. In fiscal year 2009, VA's timeliness goal for resolving appeals was 675 days.
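The single composite rating mentioned above is not the simple sum of a veteran's individual percentage ratings. The short sketch below is our own illustration of the combined-ratings method that VA regulations prescribe (38 C.F.R. 4.25), not VA's software; the report itself does not describe this computation, and the function names are hypothetical. Each successive rating applies only to the capacity that remains after the earlier ratings, and the result is rounded to the nearest 10 percent.

    import math

    # Illustrative sketch of the combined-ratings arithmetic in 38 C.F.R. 4.25.
    # This is our own example, not VA's software; the names are our own.
    def combined_rating(ratings):
        remaining = 100.0  # begin with a fully efficient, non-disabled person
        for r in sorted(ratings, reverse=True):  # apply the highest rating first
            remaining -= remaining * (r / 100.0)  # each rating reduces what remains
        combined = 100.0 - remaining
        return int(math.floor(combined / 10.0 + 0.5) * 10)  # round to nearest 10

    # Example: ratings of 50 and 30 percent combine to 65, which rounds to 70,
    # not the 80 percent that simple addition would suggest.
    print(combined_rating([50, 30]))  # -> 70

Because each added disability applies only to the capacity the earlier ratings leave, additional ratings increase the composite by progressively smaller amounts.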
In terms of quality, VBA and the Board each assess the accuracy of their decisions by reviewing randomly selected cases to determine the proportion that contain errors that could affect the benefits paid to the veteran. In fiscal year 2009, VBA and the Board had an accuracy rate goal of 98 percent and 94 percent, respectively. Over the past several years, the number of disability compensation claims has increased, and VA's performance in processing such claims has improved in some areas and worsened in others. During this time, VA has reduced the number of pending appeals and improved the accuracy of some appellate work, but in recent years, the time that it takes to resolve appeals has increased. From fiscal years 2000 to 2008, the number of claims VA completed annually increased, but not by enough to keep pace with the growing number of compensation claims received, and, as a result, the number of pending claims grew. VA has substantially increased the number of claims it completes annually in recent years. In fiscal year 2008, VA completed about 729,000 claims, which was nearly 66 percent more than it completed in fiscal year 2000 (see fig. 2). However, VA has also received significantly more claims in recent years. In fiscal year 2008, VA received about 719,000 compensation claims, which was about 71 percent more than it received in fiscal year 2000. By the end of fiscal year 2008, pending claims—those awaiting a decision—had increased 83 percent over fiscal year 2000 levels, from about 188,000 to about 343,000 (see fig. 3). Moreover, the number of claims awaiting a decision longer than 6 months increased about 50 percent, from about 52,000 to about 78,000. VA has also experienced mixed results in improving the timeliness of its claims decisions. Overall, the average days that claims were pending declined, but the average processing time needed to complete a claim did not improve. From fiscal years 2000 to 2008, the average number of days that claims were pending fluctuated. In fiscal year 2008, compensation claims had been pending an average of 123 days, 23 days less than the 146-day average in fiscal year 2000 (see fig. 4). While fiscal year 2008's average number of days pending was slightly longer than the average 115 days experienced in fiscal year 2003, it is a marked improvement over the 188 days that claims were pending in fiscal year 2001. VA has also reduced the percentage of claims that took more than 1 year to complete, from 22 percent in fiscal year 2002 to 10 percent in fiscal year 2008. However, VA has made little progress in reducing average processing times. The average time that VA took to complete a claim fluctuated between fiscal years 2000 and 2008, from a high of 246 days in fiscal year 2002 to a low of 181 days in fiscal years 2004 and 2005 (see fig. 5). Since then, this average has increased, and in fiscal year 2008, VA took about the same amount of time—196 days—to complete a claim as it did in fiscal year 2000. In terms of quality, according to VA's assessments, the accuracy of compensation claims processing remained about the same during fiscal years 2003 through 2008. The percentage of compensation claims processed without errors that could affect benefits paid to veterans remained at 85 percent, varying slightly in the intervening years (see fig. 6). One factor that has contributed to VA's lack of significant improvement in claims processing performance is the substantial increase in VA's disability workloads.
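To make the relationships among these rounded figures concrete, the brief sketch below (our own arithmetic check, not drawn from VA's or GAO's workpapers) recomputes the percent changes from the counts cited above; the small differences reflect rounding in the cited figures.

    # Our own arithmetic check of the rounded claim counts cited above;
    # not taken from the report's underlying workpapers.
    def pct_change(old, new):
        return (new - old) / old * 100  # percent change from old to new

    # Pending claims grew from about 188,000 (FY2000) to about 343,000 (FY2008).
    print(round(pct_change(188_000, 343_000)))  # -> 82, close to the cited 83 percent

    # The cited increases imply approximate fiscal year 2000 baselines:
    print(round(729_000 / 1.66))  # about 439,000 claims completed in FY2000
    print(round(719_000 / 1.71))  # about 420,000 claims received in FY2000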
VA attributes the increase in compensation claims to several sources, including the conflicts in Iraq and Afghanistan. According to VA, about 35 percent of veterans from ongoing hostilities file claims. In addition, VA cites the growing number of reopened claims from current disability benefit recipients—many of whom suffer from chronic progressive disabilities, such as diabetes—who submit claims for increased benefits as their conditions worsen or new conditions arise as they age. In fiscal year 2008, VA received about 488,000 reopened claims for disability benefits, up 58 percent from about 309,000 in fiscal year 2000. In addition, VA attributes increased claims receipt to its enhanced outreach to servicemembers and veterans. VA reported that in fiscal year 2007, it provided benefits briefings to about 297,000 separating servicemembers, up from about 210,000 in fiscal year 2003. According to VA officials, federal laws, VA regulations, and court decisions have also adversely affected claims processing timeliness. While these changes enable veterans to obtain the benefits they deserve, they also expand benefit entitlement and add processing requirements that increase VA's workloads. In recent years, court decisions related to a 1991 law have created new presumptions of service-connected disabilities for many Vietnam veterans. In October 2009, VA announced that it was expanding the list of presumptive service-connected disabilities to include Parkinson's disease and two other conditions for Vietnam veterans. VA also anticipates an increase in claims stemming from an October 2008 regulation change that affects how VA rates traumatic brain injuries. According to a VA official, a letter was sent to approximately 32,000 veterans notifying them that their rating for traumatic brain injury could potentially increase, even though their symptoms may not have changed. In addition to expanded benefit entitlement, a number of laws and court decisions related to VA's disability claims process have had implications for timely claims processing. For example, according to VA officials, the Veterans Claims Assistance Act of 2000 added more steps to the claims process, lengthening the time that it takes to develop and decide a claim. Another factor affecting VA's claims processing timeliness is the complexity of claims received. VA notes that it is receiving more claims for complex disabilities related to combat and deployments overseas, including those based on environmental and infectious disease risks and traumatic brain injuries. In addition, veterans cited more disabilities in their claims in recent years than they had in the past. The proportion of compensation claims VA decided that involved eight or more disabilities increased from 11 percent to 16 percent between fiscal years 2006 and 2008. These claims can take longer to complete because each disability must be evaluated separately. Since fiscal year 2000, the number of pending appeals has declined, and the accuracy of appeals processing has improved in some areas. VA has reduced the number of pending appeals by 25 percent, from about 127,000 in fiscal year 2000 to about 95,000 in fiscal year 2008 (see fig. 7). Agency accuracy reviews indicate that 95 percent of the Board's decisions in fiscal year 2008 were processed accurately, compared with 86 percent in fiscal year 2000 (see fig. 8).
Another indicator of the accuracy of appeals processing is the percentage of appeals that are remanded to VBA by the Board due to errors that could have been avoided. Examples of avoidable remands include VBA's failure to obtain identified private treatment records or to send letters to claimants indicating what evidence is necessary to substantiate the claim. One of VA's goals is to eliminate avoidable remands. Although VBA recently expanded accuracy reviews and the Board has provided training to VBA staff based on remand reason trends, the percentage of appeals with avoidable remands remained about 25 percent from fiscal years 2006 to 2008. Despite improvements in some aspects of appeals processing, the average time needed to resolve appeals has worsened in recent years, reversing prior improvements. In fiscal year 2008, the average processing time for compensation appeals was 776 days, or approximately 25 months, up from lows of 656 days in fiscal year 2001 and 680 days in fiscal year 2005 (see fig. 9). The majority of appeal processing time is spent developing the appeal prior to consideration by the Board. For example, appeals resolved in fiscal year 2008 remained at VBA for 502 days before being transferred to the Board. Several factors have contributed to the worsening trend in appeals timeliness. First, the number of appeals that VA has received increased about 50 percent, from approximately 24,000 in fiscal year 2000 to about 36,000 in fiscal year 2008 (see fig. 10). In addition, according to VA officials, each time appellants submit new evidence, VA must review and summarize the case for the appellant again, adding to the time that it takes to resolve the appeal. Furthermore, a veteran may submit multiple claims, and VBA does not forward an appeal to the Board until all of a veteran's pending claims are resolved, regardless of whether they relate to the appeal. This practice follows VBA's interpretation of a court decision to prevent delays in processing undecided claims. Therefore, a veteran's unrelated, pending claim could forestall final resolution of the appeal. Finally, according to VA officials, processing time is lengthened when appeals are remanded to VBA by the Board. While some appeals are remanded due to procedural errors by VBA, many other appeals are remanded because of requirements often driven by recent court decisions or regulatory changes that occur after the appeal is sent to the Board. For example, a court decision in January 2008 (Vazquez-Flores v. Peake, 22 Vet. App. 37 (2008)) required VA to notify veterans seeking increased compensation for worsened conditions of the rating criteria that pertain to the claim. Until this decision was overturned in September 2009 (Vazquez-Flores v. Shinseki, 580 F.3d 1270 (Fed. Cir. 2009)), it required the Board to remand—or VBA to hold back—any appeals until the claimants were notified. VA has taken several steps to improve claims processing, including increasing claims processing staff, redistributing certain workloads, piloting alternative approaches to processing certain claims, and increasingly leveraging information technology to process claims. VA expects these actions to improve decision timeliness, quality, or both. However, the effects of these actions are not yet known, and VA lacks plans to assess certain actions.
VA has taken several actions to improve decision timeliness at both the claim and appellate levels. For example, over the past few years, VA has hired a significant number of disability claims staff to process disability workloads. From fiscal years 2005 to 2009, VA increased VBA's claims processing staff by 57 percent, from 7,550 to 11,868. This increase includes 417 staff that VBA hired in fiscal year 2009 using funds from the American Recovery and Reinvestment Act of 2009 (ARRA). Of the people hired using ARRA funds, about three-fourths are temporary employees who assist in developing disability claims and perform other administrative tasks to free experienced staff to complete more complex claims processing tasks. During the same period, VA increased the Board's staff by 20 percent, from 433 to 519, without using ARRA funds. New claims processing staff take time to become proficient; for some positions, such as rating specialists, becoming proficient often takes longer—about 3 years—because of the complexity of the job, in part given the variety of cases and rating issues. Training new staff also reduces productivity in the near term because experienced staff must take time to train and mentor them and, therefore, may have less time to process their own claim workloads. (As required under section 225 of the Veterans' Benefits Improvement Act of 2008 (Pub. L. No. 110-389), GAO is in the process of evaluating VA's training programs for claims processors. This evaluation builds on prior work in which we found that increased focus on evaluation and accountability would enhance training and performance management for claims processors. See GAO, Veterans' Benefits: Increased Focus on Evaluation and Accountability Would Enhance Training and Performance Management for Claims Processors, GAO-08-561 (Washington, D.C.: May 27, 2008).) According to a VBA training official, VBA has developed curricula that use practical application of key concepts to accelerate the learning curve for new staff. VA expects that the staff hired with ARRA funding will help increase the number of claims processed and reduce average processing times in 2010. However, even though their responsibilities are expected to be limited to less complex claims processing tasks, these additional staff could also pose human capital challenges in the near term while they are being trained and deployed. VBA has also expanded its practice of redistributing regional offices' disability workloads. Although this expansion could improve the timeliness of its decisions, VBA has not collected data to evaluate the effect of this practice. Since 2001, VBA has created 15 resource centers that are staffed exclusively to process claims or appeals from backlogged regional offices at distinct phases in the claims process. From 2001 to 2002, VBA created 9 resource centers that exclusively rate claims from other offices (rating centers). Since 2007, VBA has created 4 additional resource centers that exclusively develop claims for rating (development centers). In 2009, VBA created 2 more resource centers that focus exclusively on processing appealed claims before they are sent to the Board (appeals centers). The development resource centers obtain information necessary for rating claims, while the appeals resource centers review appeals and provide written summaries of cases for the veterans. VBA determines the number of claims redistributed to each of the resource centers on the basis of the regional offices' and resource centers' changing workloads and capacities.
Claims initially had to meet specific criteria to be eligible for redistribution, such as having seven or fewer disabilities. However, VBA relaxed these criteria in May 2008 to allow more claims to be redistributed. The number of claims redistributed for rating has increased from about 88,000 in fiscal year 2006 to about 140,000 in fiscal year 2008. While redistributing workloads is helpful, this practice can pose operational challenges. According to several veterans service organization (VSO) representatives, redistributing claims reduces VSOs' and VA's ability to monitor claims processing. Also, according to some resource center staff we interviewed, workload redistribution sometimes creates inefficiencies. For example, one rating resource center returned about 20 percent of the claims that it received during the first half of fiscal year 2009 to the originating regional offices because the claims required further development before they could be rated. The resource centers provide written explanations for returned claims, so that regional offices can correct the errors and avoid them in the future. Despite such challenges, according to VBA officials, redistributing backlogged claims to resource centers improves average processing times because VBA can better leverage the ever-changing capacities of its offices. Although VBA tracks the number of claims each resource center processes—and recently began monitoring their accuracy—it does not track the average processing times of redistributed workloads. Therefore, VA cannot (1) compare the average processing times of redistributed versus nonredistributed claims and (2) assess the resource centers against key performance goals or determine the overall effect of expanded workload redistribution on claims processing. In addition to increasing staffing and redistributing workloads, VA is piloting several new approaches for processing certain claims to improve timeliness. For example, VA is implementing a pilot with the Department of Defense (DOD) to perform disability evaluations. Begun in November 2007, the joint DOD-VA pilot process applies to servicemembers navigating the military's disability evaluation system, which determines whether servicemembers are fit for duty or should be released from the military. In the pilot, VA completes disability ratings for servicemembers found to be unfit for duty. Key features of the pilot include a single physical examination conducted to VA standards, disability ratings prepared by VA for use by both DOD and VA in determining disability benefits, and additional outreach and case management provided by VA staff at DOD pilot locations to explain VA results and processes to servicemembers. The goals of the pilot are to increase transparency and reduce confusion about the disability evaluations conducted and, if military separation or retirement is necessary, to expedite VA disability compensation benefits upon discharge. If implemented widely, the pilot process could change the way in which many veterans first receive disability benefits from VA. According to DOD, preliminary pilot results suggest that the new process expedites delivery of VA benefits to servicemembers following discharge from the military. However, the number of claims affected by widespread implementation of the DOD-VA pilot process would probably be small compared with the total number of compensation claims processed by VA.
In fiscal year 2008, the military's disability evaluation system caseload was approximately 20,000, while VA processed about 729,000 compensation claims that year.

VA is also piloting another new approach to process certain compensation claims and appeals, but it has not yet established a plan to determine whether the pilot process is worthy of widespread implementation. In February 2009, VA launched a 2-year pilot called Expedited Claims Adjudication (ECA) in 4 regional offices. This pilot, a joint effort between VBA and the Board, is intended to accelerate the processing time of claims and appeals. Claimants who opt into the ECA pilot agree to respond to VA within time frames that are shorter than generally required. For example, participating claimants agree to submit any notice of disagreement with VBA's decision within 60 days as opposed to within 365 days under VA's normal requirements. In return, the expectation is that claimants will receive decisions from VBA—and from the Board if the claimant appeals the decision—more quickly. VA is collecting data on the timeliness of ECA processing compared with that of non-ECA processing, but complete data are not yet available. VA officials said they intend to evaluate ECA before expanding the expedited process within the agency. However, it is unclear when and how VA will conduct such an evaluation because it has not yet established an evaluation plan with specific criteria and methods for assessing ECA's impact on non-ECA claims and appeals processing and for determining whether ECA is worthy of expansion. For example, it is unclear which timeliness metrics VA will use to help assess ECA and which performance goals the new process must meet before being expanded.

As required under the Veterans' Benefits Improvement Act of 2008, VA is also piloting an expedited claims process for claimants who submit "fully developed claims" and affirm that they do not intend to submit additional information to support their claims. In return, VA's goal is to process such claims within 90 days of receipt of the claim. VA is piloting this alternative process at 10 regional offices for at least 1 year, and the agency has hired a contractor to help assess the feasibility and advisability of continuing the pilot and possibly deploying the process nationwide. Because certain types of claims—such as those from newer veterans—may naturally lend themselves to being fully developed and therefore may not be representative of all claims, the contractor will not merely compare the average processing times for fully developed claims with those of other claims. Instead, the contractor is working with VA to identify a sound and feasible methodology for evaluating this alternative claims process and is scheduled to provide VA with an evaluation of the pilot at the end of May 2010.

VA has taken several additional steps that could improve the quality and timeliness of its decisions for compensation claims. For example, in July 2009, VA began piloting at one regional office a reorganization of its claims processors into groups that are collectively responsible for gathering the evidence for a claim, rating the claim, and processing the decision. This structure is different from the current organization, which has distinct teams for each phase of the claims process. This reorganization is based on a recent recommendation from a consulting firm that studied VA's rating-related claim development process.
In addition to reducing claim folder movement and thus potentially reducing the average processing time, the reorganization is intended to increase claims processing staff's appreciation for how their work quality affects other aspects of the process. Although some VA officials expressed skepticism that this reorganization would significantly improve the agency's performance in processing compensation claims, they also acknowledged its potential benefits. According to VA officials, VA plans to evaluate the pilot in May 2010, but it has not yet established specific criteria for expanding the reorganization to other locations. As with the ECA pilot, VA has not yet specified which metrics it will use to help assess the pilot or the goals that the new process must meet before being expanded.

VA has also expanded its capacity to measure claims and appeals processing quality, which it uses to help monitor performance and identify training opportunities for staff. For example, in fiscal year 2008, VA doubled the number of staff working in VBA's quality measurement group from about 10 to 20 staff to improve its ability to assess the accuracy of claim decisions and appellate work. In fiscal year 2008, this group more than doubled the number of claims it reviews for accuracy, from 10 to 21 cases per month, per regional office. In addition, in fiscal year 2009, based in part on a VA inspector general recommendation, VBA began monitoring the accuracy of claims decided by rating resource centers as it does for regional offices. Moreover, starting in fiscal year 2008, based in part on our prior recommendation, VBA's quality measurement group began conducting studies to monitor the extent to which veterans with a similar disability receive consistent ratings across regional offices. According to VA officials, VBA's quality measurement group conducted four consistency studies in fiscal year 2008. VBA used these studies to identify training needs—such as how to verify a stressor for post-traumatic stress disorder—at specific regional offices. The group had planned to conduct additional consistency studies the following year, but because it doubled the number of case reviews and conducted ad hoc, focused reviews (e.g., of appellate work), it was not able to conduct further consistency studies. However, in fiscal year 2008, VBA's quality measurement group began testing the consistency of decisions made by claims processing staff at different locations on a hypothetical claim. The group conducted two of these consistency tests in fiscal year 2008 and six tests in fiscal year 2009. VBA has used the results of these tests to help identify training needs related to rating certain disabilities, such as cardiovascular conditions.

VA has also leveraged technology in recent years to improve claims processing. For example, VA has upgraded its claims processing software in phases to enhance its ability to track information about claims and reduce the need for duplicative data entry that could introduce errors. According to VA, a software upgrade in October 2007 improved staff's ability to manage their workloads and more easily identify priority cases, such as those for veterans returning from the current conflicts in Iraq and Afghanistan, by electronically filtering and sorting pending claims. Other claims processing software upgrades have allowed VA to capture management information that is essential to conducting more robust analyses on claims processing performance.
For example, the prior software system did not allow VA to electronically capture more than six conditions per claim. With its current claims processing software, VA captures information on the actual number of claimed conditions, which in turn allows VA to analyze claim development time by condition.

Finally, VA has also begun processing certain compensation claims with less reliance on paper claim files, but widespread paperless processing remains elusive, in part because of technical challenges. As of October 2008, claims processing staff at two regional offices review scanned versions of all compensation claims filed by servicemembers 60 to 180 days before leaving the military, known as Benefits Delivery at Discharge claims. According to VA officials, this process is currently as efficient as paper-based processing, but may eventually be more efficient and enable further redistribution of case processing as regional offices' changing capacities and workloads require. In addition, in the spring of 2009, VA designated one of its regional offices to test emerging technologies and processes in a real setting to gauge their potential impact on the agency and its employees. For example, VA recently used this office to test the impact of claims processing staff using only electronic information as opposed to hard-copy reference materials to process claims. VA hopes to further test paperless claims processing. However, officials said that the current system's infrastructure cannot sustain the high volume of data needed to process paperless claims on a widespread basis. Even in processing Benefits Delivery at Discharge claims—which comprise a small fraction of total compensation claims—the system infrastructure used to process such claims occasionally malfunctions. Although VA has taken some steps to strengthen its claims processing system's infrastructure, technical challenges persist, especially given the volume of evidence generally received for claims and the piecemeal, paper-based fashion in which VA often receives the information. These factors challenge VA as it works toward having a fully paperless claims processing system by the end of 2012.

For years, VA's disability claims and appeals processes have received considerable attention as VA has struggled to process an increasing number of claims from both veterans of recent conflicts and aging veterans from prior conflicts. Although VA workload and performance data indicate that VA has made progress in improving some aspects of its disability claims and appeals processing over the past decade, VA continues to wrestle with ongoing challenges that may not be resolved in the near future. Specifically, significant increases in claims workloads, complicated by more conditions per claim, and human capital challenges associated with training and integrating VA's large influx of new staff continue to contribute to lengthy processing times and a large pending claims inventory. VA has little or no control over some contributors to its increasing workload, but it has taken steps to address some internal inefficiencies and challenges that persist within its disability claims and appeals processes. Some of VA's key actions, including its expansion of workload redistribution to resource centers and separate pilots aimed at reducing processing times, have the potential to improve the claims and appeals processes.
However, without fully evaluating these actions, VA will not have the necessary information to determine their effectiveness and whether VA should continue to invest its limited resources in them. For example, workload redistribution to resource centers has the potential to improve services to veterans, but without tracking the timeliness and accuracy of the decisions processed by these centers, VA will not be able to fully monitor the centers' performance and will lack key inputs for determining whether they yield positive returns on investment. As a result, VA could miss out on opportunities either to increase efficiencies by adding more resource centers or to scale back workload redistribution if it is not having the desired effect. In addition, absent an evaluation plan or specific criteria for measuring the effect of its ECA and reorganization pilots, VA may not be able to determine whether they are successful or to make well-informed decisions about expanding them. Considering the challenges VA faces and will likely face in the future, it is important that VA make effective long-term decisions based on solid data to improve benefit delivery for veterans.

We recommend that the Secretary of Veterans Affairs direct:

1. VBA to collect data on redistributed claims for development, rating, and appellate work to help assess the timeliness and accuracy of resource centers' output and the effectiveness of workload redistribution.

2. VBA and the Board to establish an evaluation plan for assessing the ECA pilot process and guiding any expansion decisions. Such a plan should include criteria for determining how much improvement should be achieved under the pilot on specific performance measures—such as average VBA and Board processing times—and include methods for how VBA and the Board will consider ECA's impact on non-ECA claims and appeals processing before implementing the process widely.

3. VBA to establish a plan to evaluate its claims processing reorganization pilot and guide any expansion decisions. Such a plan should include criteria for determining how much improvement should be achieved in the pilot on specific performance measures—such as decision timeliness and accuracy—before the process is implemented throughout VBA.

We provided a draft of this report to VA for review and comment. VA generally agreed with our conclusions and concurred with our recommendations. Its written comments are reproduced in appendix II. VA agreed with our recommendation that VBA collect data on redistributed claims and appellate work to help assess the timeliness and accuracy of resource centers' output and the effectiveness of workload redistribution. VA stated that, by March 2010, VBA plans to change a primary workload management tool to help collect timeliness data of redistributed work. Analyzing such timeliness data along with other factors, such as quality and cost, will be helpful in evaluating the effectiveness of workload redistribution. VA also agreed with our recommendation that VBA and the Board establish an evaluation plan for assessing the ECA pilot process, and stated that the Board will work with VBA to establish evaluation criteria and explore the potential impact of ECA on non-ECA claims and appeals processing. VA stated that the Board hopes to complete an evaluation of ECA and make recommendations regarding potentially expanding the pilot process or permanently incorporating successful aspects of it by the end of fiscal year 2010.
We applaud VA's intent to evaluate the pilot and encourage VA to take steps to ensure that the evaluation design and criteria yield valid information for making decisions regarding expansion. Finally, VA agreed with our recommendation that VBA establish a plan to evaluate its claims processing reorganization pilot, and provided critical factors that VBA and a private consulting firm established to help assess and report on the pilot. Identifying these factors is an important start; however, we believe that VBA should also establish, for each factor, the minimum level of performance improvement that should be achieved before the pilot process is considered successful and worthy of expansion.

We are sending copies of this report to the relevant congressional committees, the Secretary of Veterans Affairs, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III.

The objectives of our review were to examine (1) trends in the Department of Veterans Affairs' (VA) disability compensation claims processing at the claims and appellate levels and (2) actions that VA has taken to improve its disability claims process. For both objectives, we focused our analysis on VA's processing of disability compensation for veterans as opposed to other types of benefits, such as pensions. To examine workload and performance trends, we analyzed compensation claims processing data from VA's Veterans Benefits Administration (VBA) and Board of Veterans' Appeals (Board). In addition, we interviewed VA officials familiar with the claims process and reviewed VA annual performance reports and other documents to understand data trends and related VA challenges and to corroborate our findings. Further information about our analysis of VA workload and performance data is provided in the following text. To identify actions that VA has taken to improve its disability compensation claims and appeals processing, we reviewed relevant VA testimony and key documents, such as VA strategic plans, and interviewed VA officials responsible for compensation claims and appeals processing. We focused on VA actions that are ongoing or those that VA completed after fiscal year 2005. To examine these actions, we analyzed VBA and Board staffing data; reviewed VA's budget submissions, internal processing guidance, and other documents such as external studies and VA's Office of Inspector General reports; and interviewed VA officials and veteran service organization representatives. In addition, we visited four VBA regional offices and the Board to learn about ongoing initiatives. In selecting the regional offices—Chicago, Illinois; Seattle, Washington; Togus, Maine; and Winston-Salem, North Carolina—we considered regional offices that would provide (1) insights about ongoing initiatives, such as pilots; (2) a mix of offices located in different geographic settings (e.g., urban and rural); and (3) a mix of offices that are above and below VBA's averages for select claims processing measures. We also reviewed relevant federal laws, regulations, and court decisions.
We conducted this review from November 2008 to January 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

To analyze VA disability compensation claim workloads and processing timeliness, we obtained nationwide, summary-level workload and performance data by fiscal year from VA's Benefits Delivery Network and its accompanying Distribution of Operational Resources (known as "DOOR") reports and from the Veterans Services Network (VETSNET) system and its accompanying VETSNET Operations Reports (known as "VOR"), and we spoke with VA officials about these data and sources. We limited our analysis to the following three types of disability claims: (1) initial compensation claims with seven or fewer disabilities, (2) initial compensation claims with eight or more disabilities, and (3) reopened compensation claims. We analyzed data for fiscal years 2000 to 2008. To analyze pending claims trends, we considered the number of claims that were awaiting a decision on the last day of each fiscal year. To analyze and report other claim processing trends besides those for receipts—which are designated by fiscal year according to when VA received the claim—we designated claims by fiscal year according to when the decisions occurred. To verify the reliability of summary-level workload and timeliness data from these systems, we obtained and analyzed record-level data from VA and spoke with VA officials about how the data are input. We were able to replicate all of the summary-level workload and timeliness data that VA provided. However, we questioned VA's method for calculating claim receipts. Therefore, we attempted to replicate claim receipt data using VA's method and our method. VA calculates monthly claim receipts by counting the total number of pending claim records at the end of a month; subtracting the number of pending claim records from the end of the previous month; and adding the number of completed claims during the month, regardless of when they originated. To calculate the annual number of claims received, VA then adds the monthly claim receipt counts. Our method for calculating annual claim receipts was to count the number of claims whose claim date was in a given year. We compared the results and found that the annual claim receipts data using our method were about 2 to 3 percent lower than the data replicated using VA's method. Ultimately, we decided to use the summary-level receipts data that VA provided because they were materially close to our counts and because we were able to replicate VA's summary-level data using its method.
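To make the difference between the two counting methods concrete, the following is a minimal sketch in Python. The claim records, field names, and month-end handling are simplified illustrations, not VA's actual data structures, and the sketch ignores fiscal-year boundaries.

```python
import calendar
from datetime import date

# Hypothetical claim records: each has a receipt (claim) date and, if the
# claim has been decided, a completion date. Field names are illustrative.
claims = [
    {"claim_date": date(2007, 9, 20), "completed": date(2007, 10, 15)},
    {"claim_date": date(2007, 10, 5), "completed": None},
    {"claim_date": date(2007, 10, 28), "completed": date(2007, 11, 2)},
]

def month_end(year, month):
    return date(year, month, calendar.monthrange(year, month)[1])

def pending_at(year, month):
    """Claims received on or before month-end and not yet completed."""
    end = month_end(year, month)
    return sum(1 for c in claims
               if c["claim_date"] <= end
               and (c["completed"] is None or c["completed"] > end))

def completed_in(year, month):
    """Claims completed during the month, regardless of when they originated."""
    return sum(1 for c in claims
               if c["completed"] is not None
               and (c["completed"].year, c["completed"].month) == (year, month))

# VA's method: monthly receipts are derived from the change in the pending
# inventory plus completions; annual receipts sum the monthly figures.
def receipts_va_method(year, month, prior_year, prior_month):
    return (pending_at(year, month) - pending_at(prior_year, prior_month)
            + completed_in(year, month))

# GAO's method: directly count claims whose claim date falls in a given year.
def receipts_gao_method(year):
    return sum(1 for c in claims if c["claim_date"].year == year)

print(receipts_va_method(2007, 10, 2007, 9))  # derived receipts for October
print(receipts_gao_method(2007))              # direct count by claim date
```

The sketch illustrates why the two methods can diverge in practice: VA's figure is derived from inventory changes, so a record whose status or dates are revised after the fact can shift the derived count, while a direct count by claim date is unaffected by such revisions.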
To analyze the quality of VA's disability compensation claims processing, we obtained annual, nationwide data from its Systematic Technical Accuracy Review (STAR) program and verified the reliability of the data. The STAR program audits a randomly selected sample of VBA's completed claims for accuracy. We limited our analysis to the same three types of disability claims: (1) initial compensation claims with seven or fewer disabilities, (2) initial compensation claims with eight or more disabilities, and (3) reopened compensation claims. To report consistent data, we analyzed fiscal years 2003 to 2008 because the STAR program changed its audit methodology in fiscal year 2002. To verify the reliability of STAR data, we spoke with VA officials responsible for overseeing the STAR system. We also relied on prior verification of STAR data. Consistent with this prior verification, we found that the STAR data were reliable for reporting nationwide trends.

To analyze VA's disability compensation appellate workloads and processing performance, we obtained record-level appeals data extracted on April 2, 2009, from the Veterans Appeals Control and Locator System (VACOLS). We limited our analysis to rating-related disability compensation appeals, which we identified by speaking with Board officials about how rating-related disability compensation appeals are classified in VACOLS, then limiting the data accordingly. We further limited our analysis to original appeals as opposed to appeals that, for example, had been previously remanded by the Board. Using the record-level appeals data, we generated nationwide annual data for fiscal years 2000 to 2008. To analyze pending appeals trends, we considered the number of appeals that were awaiting a decision on the last day of each fiscal year. To analyze other appeals processing trends besides those for receipts—which we designated by fiscal year according to when VA received the appeal—we designated appeals by fiscal year according to when their resolution occurred. Our reporting of avoidable remands—which are appeals that the Board does not consider because of claims processing errors that occurred before VBA certified transferring the appeal to the Board—varies from calculations we received from VBA. For fiscal years 2006 to 2008, we calculated avoidable remand rates of 24.3 percent, 25.4 percent, and 24.7 percent, respectively; whereas VBA reported avoidable remand rates of 23.7 percent, 17.9 percent, and 17.7 percent, respectively. Our analysis was limited to compensation appeals, whereas VBA included noncompensation-related appeals. In addition, the calculation methods differed. We calculated the avoidable remand rate as the number of avoidable remands on original appeals—which excludes appeals that were previously remanded by the Board—divided by the number of original appeals decided by the Board. VBA calculated the rate as the number of avoidable remands on original appeals, divided by the total number of appeals decided by the Board. We believe that VBA's method is misleading because appeals in the denominator are not restricted as they are in the numerator.
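Stated as formulas, the two calculations share a numerator but differ in the denominator (the labels here are descriptive shorthand, not VA's terminology):

$$ \text{GAO rate} = \frac{\text{avoidable remands on original appeals}}{\text{original appeals decided by the Board}} \qquad \text{VBA rate} = \frac{\text{avoidable remands on original appeals}}{\text{all appeals decided by the Board}} $$

Because the VBA denominator also counts previously remanded appeals, which by definition cannot appear in the numerator, the VBA rate is mechanically lower than the GAO rate for the same underlying data; the difference in scope (compensation-only versus all appeals) compounds the difference.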
To assess the reliability of record-level appeals data, we (1) interviewed Board officials about program and technical operations and (2) performed electronic testing to identify missing and potentially invalid data and to identify internal inconsistencies. We found that the data were reliable for our reporting purposes.

Shelia Drake, Assistant Director; Joel Green; Lisa McMillen; and Bryan Rogowski made significant contributions to this report. In addition, Walter Vance provided guidance on research methodology; Cynthia Grant and Christine San provided assistance with data analysis; Roger Thomas provided legal counsel; Jessica Orr helped with report preparation; and James Bennett provided assistance with graphics.

Veterans' Disability Benefits: Preliminary Findings on Claims Processing Trends and Improvement Efforts. GAO-09-910T. Washington, D.C.: July 29, 2009.

Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process. GAO-08-1137. Washington, D.C.: September 24, 2008.

Veterans' Benefits: Increased Focus on Evaluation and Accountability Would Enhance Training and Performance Management for Claims Processors. GAO-08-561. Washington, D.C.: May 27, 2008.

Veterans Benefits Administration: Progress Made in Long-Term Effort to Replace Benefits Payment System, but Challenges Persist. GAO-07-614. Washington, D.C.: April 27, 2007.

Veterans' Disability Benefits: VA Can Improve Its Procedures for Obtaining Military Service Records. GAO-07-98. Washington, D.C.: December 12, 2006.

Veterans' Benefits: Further Changes in VBA's Field Office Structure Could Help Improve Disability Claims Processing. GAO-06-149. Washington, D.C.: December 9, 2005.

Veterans' Disability Benefits: Claims Processing Challenges and Opportunities for Improvements. GAO-06-283T. Washington, D.C.: December 7, 2005.

VA Disability Benefits: Board of Veterans' Appeals Has Made Improvements in Quality Assurance, but Challenges Remain for VA in Assuring Consistency. GAO-05-655T. Washington, D.C.: May 5, 2005.

Veterans' Benefits: More Transparency Needed to Improve Oversight of VBA's Compensation and Pension Staffing Levels. GAO-05-47. Washington, D.C.: November 15, 2004.

Veterans' Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003.

High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003.

Veterans' Benefits: Claims Processing Timeliness Performance Measures Could Be Improved. GAO-03-282. Washington, D.C.: December 19, 2002.

Veterans' Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002.

Veterans' Benefits: VBA's Efforts to Implement the Veterans Claims Assistance Act Need Further Monitoring. GAO-02-412. Washington, D.C.: July 1, 2002.

Veterans Benefits Administration: Problems and Challenges Facing Disability Claims Processing. GAO/T-HEHS/AIMD-00-146. Washington, D.C.: May 18, 2000.
For years, the disability compensation claims process has been the subject of concern and attention by the Department of Veterans Affairs (VA), Congress, and veteran service organizations (VSO), due in part to long waits for decisions and the large number of claims pending a decision. As GAO and other organizations have reported over the last decade, VA has also faced challenges in improving the accuracy and consistency of disability decisions. GAO was asked to examine (1) trends in VA's disability compensation claims processing at the initial claims and appeals levels and (2) actions that VA has taken to improve its disability claims process. To do this, GAO reviewed and analyzed VA performance data, budget submissions, program documents, and external studies and interviewed VA officials and VSO representatives.

VA's disability claims and appeals processing has improved in some aspects and worsened in others. In recent years, the number of claims completed annually by VA has increased but not by enough to keep pace with the increasing number of compensation claims received, resulting in more claims awaiting a decision. In addition, the average number of days that VA took to complete a claim--196 days in fiscal year 2008--has varied over time, but was about the same in fiscal years 2000 and 2008. Several factors have challenged claims processing improvements, such as increases in the number and complexity of claims submitted to VA and changes in laws and regulations. VA has reduced the number of pending appeals and improved the accuracy of some appellate work, but the time that it takes to resolve appeals has worsened in recent years. For example, in fiscal year 2008, VA took on average 776 days to process appeals, 78 days longer than in fiscal year 2004. One factor that has contributed to worsening appeals timeliness is the increase in the number of appeals received by VA.

VA has taken several steps to improve claims and appeals processing, but their impact is not yet known. VA has hired a significant number of disability claims staff to process disability workloads. VA's Veterans Benefits Administration (VBA) has also expanded its practice of workload redistribution, which could improve the timeliness and quality of its decisions. VA is also testing new claims processing approaches, such as shortening response periods for certain claims and appeals through Expedited Claims Adjudication (ECA) and reorganizing its claims processing units. However, VBA has not established plans to evaluate the effect of some initiatives. In addition, VA has taken other steps to improve claims and appeals processing, such as expanding its quality assurance program; upgrading claims processing software; and moving toward paperless processing, which remains elusive in part due to technical challenges.
To address Medicare's vulnerability to fraud, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established the Medicare Integrity Program (MIP). In particular, HIPAA required the Secretary of HHS to enter into contracts to promote the integrity of the Medicare program. In exercising its authority to identify and combat improper payments, CMS created 18 Program Safeguard Contractors (PSC) to identify and investigate potential fraud in specific parts of Medicare, such as Part A, in particular states or regions. In 2008, as part of the implementation of broader agency contracting reform, CMS began replacing PSCs with ZPICs, reducing the total number of contractors and giving additional responsibilities to ZPICs to investigate potential fraud across the Medicare fee-for-service program. In September 2008, CMS awarded the first two ZPIC contracts for Zones 4 and 7. As of September 2013, all but one of the ZPICs—Zone 6—was in operation. PSCs continue to operate in Zone 6 because of protest-related delays with respect to the Zone 6 ZPIC contract. The term of each ZPIC contract is generally for a 1-year base period followed by 4 option years, enabling CMS to extend each contract through 5 years of performance. (See table 1 for contract performance timelines and fig. 1 for a map of the seven ZPIC zones.)

In 2010, CMS established the Center for Program Integrity (CPI), which oversees the agency's program integrity efforts, including ZPICs. CPI's stated mission is to ensure that correct payments are made to legitimate providers for covered, appropriate, and reasonable services for eligible beneficiaries. CPI has worked to move beyond the "pay and chase" approach—which focused on the recovery of funds lost due to payments of fraudulent claims—to focusing on fraud prevention. To enhance these efforts, the Small Business Jobs Act of 2010 appropriated funds for and required CMS to implement predictive analytics technologies, which are automated systems and tools that can help identify patterns of potentially fraudulent claims before they are paid. In turn, CMS developed the Fraud Prevention System (FPS), an electronic system in which Medicare claims data are compared against models of potentially fraudulent behavior to identify and prioritize for investigation providers with aberrant billing patterns. As part of implementing FPS, CPI modified ZPICs' work. They are to continue to investigate and quickly initiate actions to protect Medicare, but are also charged with investigating certain referrals from FPS.

To detect and investigate potential fraud within each zone, ZPICs develop leads, investigate them, and initiate appropriate actions against suspect providers, suppliers, and others. ZPICs do this with teams of investigators, data analysts, and medical reviewers. Investigators perform a range of actions to examine potential fraud, including conducting provider audits, making site visits to suspect providers' offices, and interviewing Medicare beneficiaries. Data analysts, including statisticians, examine Medicare claims and other data to support investigations and search for potential fraud and new schemes. Medical reviewers, primarily nurses, provide clinical knowledge to support the work of investigators and data analysts. ZPICs identify potential targets for fraud investigations using three categories of sources:
1. Reactive sources. Reactive sources are notifications of potential fraud submitted to ZPICs, which may result in a ZPIC conducting an investigation. A number of entities refer potential fraud to ZPICs for investigation. These entities include Medicare Administrative Contractors (MAC), which examine their contacts with beneficiaries for indications of potential fraud and may forward the contacts to ZPICs for additional scrutiny. In addition, HHS OIG operates a fraud hotline and may refer calls from it to the MACs for initial screening and then to the ZPICs for further investigation. Other sources include investigations ZPICs receive directly from CMS.

2. Proactive sources. ZPICs are required to maintain at least 3 years of Medicare claims data for analysts to examine for potential fraud using a variety of analytic tools and methods. For example, analysts examining these data may identify providers that, compared with their peers, have aberrant billing patterns, which can indicate potentially fraudulent behavior. If analysts identify such patterns, those findings may result in a ZPIC investigation.

3. FPS. FPS identifies providers for ZPICs to investigate, with the goal of identifying aberrant billing patterns early so that ZPICs can investigate suspect providers before they generate large amounts of potentially fraudulent claims.

ZPICs prioritize their investigations according to CMS guidance, which states that ZPICs should give priority to investigations with the greatest program impact and/or urgency. CMS's Program Integrity Manual defines such investigations as those involving patient abuse or harm, multistate fraud, high dollar amounts of potential overpayments, likely increase in the amount of fraud or enlarged pattern of fraud, and complaints made by Medicare supplemental insurers. In addition, with the implementation of FPS in July 2011, CMS directed the ZPICs to investigate certain high-risk leads from that system.

As part of their investigations, ZPICs initiate administrative actions against Medicare providers or suppliers, coordinating with CMS and MACs to carry out those actions, which may result in Medicare savings. (See table 2 for the administrative actions ZPICs may initiate as part of their investigations.) For example, ZPICs may initiate payment suspensions that allow CMS to stop payment on suspect claims and prevent the payment of future claims until an investigation is resolved. In addition, a ZPIC may recommend to CMS that the agency revoke a provider's Medicare billing privileges and will coordinate with a MAC to implement that action following CMS approval. In addition to administrative actions, ZPICs may forward vulnerabilities identified during an investigation to CMS for consideration as possible local or national prepayment edits. Also, if a ZPIC investigation uncovers suspected instances of fraud, the ZPIC must refer the investigation to HHS OIG for further examination and, if HHS OIG declines to investigate, the ZPIC may refer the issue to the FBI or any other interested law enforcement entity, such as a U.S. Attorney's Office. A ZPIC investigation that is referred to and accepted by law enforcement for further exploration and potential prosecution is then called a case. As long as law enforcement entities have not closed a case, it is considered open by both law enforcement and ZPICs. CMS also requires ZPICs to support HHS OIG, the Department of Justice (DOJ), and other law enforcement entities with their Medicare fraud investigations.
This support can be for these entities' own, independently initiated cases or for those that ZPICs initiated and then referred to law enforcement. ZPICs provide support on ZPIC-initiated and non-ZPIC-initiated cases by responding to law enforcement requests for information. These requests may be for data analysis; provider enrollment records, which ZPICs obtain from MACs; medical review; or other investigative support.

ZPIC contracts cover three areas of work:

1. Fee-for-service program integrity work. ZPICs are to identify and investigate potential fraud in Medicare fee-for-service. The contracts for this work outline four categories of investigations: Part A, Part B, durable medical equipment (DME), and home health and hospice. Although DME suppliers and home health and hospice providers furnish services covered under Medicare Parts A and B, the ZPIC contracts identify them separately, and ZPICs track their fee-for-service program integrity work based on these four categories.

2. Medicare-Medicaid Data Match Program (Medi-Medi). Medi-Medi is a joint effort between CMS and states to identify providers with aberrant Medicare and Medicaid billing patterns through analyses of claims for individuals with both Medicare and Medicaid coverage. States participate voluntarily, and ZPIC Medi-Medi work and funding are dependent on the number of states, if any, actively participating in each zone.

3. Special projects. CMS may also fund ZPICs for special zone- or fraud-specific projects. Special projects can vary in duration and can be as short as several months or run for multiple years.

The total award amount for the six operating ZPIC contracts through all option years is more than $600 million. Of that amount, $411 million is for fee-for-service program integrity work, $169 million is for Medi-Medi, and $62 million is for special zone- and fraud-specific projects over the life of the contracts. (See fig. 2.) The contract award amounts for the six operating ZPIC contracts (inclusive of option years) range from $67 million to $182 million, which reflects variations between the zones in terms of their size, exposure to fraud risk, and receipt of special projects. For example, Zone 7 covers a geographically small area comprising one state and one territory, but is an area CMS considers to be at high risk for fraud. In comparison, Zone 2 is a geographically large but predominantly rural area comprising 14 states and including areas that may be at a lower risk of fraud. In addition, although not all ZPICs currently receive funding for a special project, all six operating ZPICs have at some time received such funding. For example, one ZPIC was awarded almost $50 million for an ongoing state-specific fraud hotline and another received almost $3 million for a completed project specifically examining potential fraud among home health providers.

CMS primarily oversees ZPICs through the coordinated efforts of CPI and the Office of Acquisition and Grants Management (OAGM). The ZPIC Contracting Officer in OAGM is responsible for ensuring effective contracting, and the Contracting Officer's Representatives (COR) are in CPI. Each ZPIC is assigned a different COR, who helps oversee ZPIC contractor compliance through ongoing reviews. Among other things, the CORs use CMS's Analysis, Reporting, and Tracking System (CMS ARTS) to review their ZPICs' monthly invoices and aggregate workload, such as the total number of new investigations, administrative actions, and dollar amounts recouped in a month.
Each ZPIC contract includes award fee provisions, which give contractors the opportunity to earn all or some of the award fee allowed under their contracts, depending on their level of performance. CMS evaluates each ZPIC's performance annually and determines how much of its award fees it will receive. CMS first evaluates whether a ZPIC is eligible for an award fee. For these reviews, CMS instructs its CORs on how to assess specific areas of their ZPICs' performance by interviewing ZPIC and other staff; reviewing a sample of open and closed investigations and cases, as well as other documents; reviewing data in CMS ARTS, the Fraud Investigation Database (FID), and other systems; and making observations during ZPIC site visits. If in this review CMS finds that a ZPIC meets certain performance thresholds, the CORs move to the second step: using their annual review findings to recommend the amount of award fees a ZPIC should receive. The ZPICs' contracts specify through Award Fee Plans the criteria against which CMS will measure ZPICs' performance to earn their fees. These criteria fall into two overarching areas: (1) quality of service measures that apply to all ZPICs, worth 60 percent of the award fee, and (2) ZPIC-specific plans drafted in the prior year by each ZPIC and approved by CMS on how the ZPIC will improve its administrative actions—Award Fee Administrative Action Plans—worth 40 percent. ZPICs can receive all or part of their proposed award fees based on how well they perform in each of the elements within the two areas.

CMS paid the six operating ZPICs about $108 million in calendar year 2012, including a total of about $1.3 million in award fees for the ZPICs' most recent contract year evaluations. CMS's payments were primarily to reimburse contractors for fee-for-service work, comprising $77 million of the $108 million paid. ZPICs reported spending most of their fee-for-service funding in 2012 on fraud case development, primarily for investigative staff. (See fig. 3 for the breakdown of ZPIC fee-for-service spending.) According to CMS officials, fraud case development costs are those related to identifying and investigating potential Medicare fraud. These costs include those associated with developing proactive sources and addressing potential fraud identified by FPS. Personnel account for most of these costs, with ZPICs reporting that half their fraud case development staff are investigators and the other half are split between medical reviewers and data analysts. ZPIC officials told us that identifying and investigating potential Medicare fraud can be labor intensive, which is why the largest direct cost was for personnel. In 2012, ZPICs reported that their investigations included 3,600 beneficiary interviews, 777 onsite inspections, prepayment reviews of 190,000 suspended claims, and postpayment reviews of 32,000 paid claims. Additionally, ZPICs added more than 1,100 providers to prepayment review and almost 300 providers to postpayment review.

In calendar year 2012, ZPICs reported more than $250 million in savings to Medicare by stopping payment on suspect claims and recouping money from overpayments. However, it is unclear whether ZPICs could save more money by taking swifter actions, since CMS lacks information on the speed of those actions. ZPICs took these actions based primarily on reactive sources, such as tips and complaints.
ZPICs reported initiating administrative actions that led to more than $250 million in savings or money recovered to Medicare in calendar year 2012 (see table 3). These savings represent nearly $100 million in claims flagged for review and then denied before payment; almost $100 million in auto-denial edits for suspect providers, suppliers, and beneficiaries; and almost $60 million recouped by MACs at the request of ZPICs. In addition, ZPICs placed more than $14 million in suspense accounts while the claims for that money were reviewed.

ZPICs also reported taking actions that could result in savings that may not be easily quantifiable. For example, in 2012 ZPICs reported implementing more than 160 revocations and deactivations. Although these actions represent no direct savings, CMS has reported that revocations are the most effective fraud prevention tool because they prevent providers from submitting additional potentially fraudulent claims. As an example of investigations resulting in a revocation, one ZPIC described that investigations involving "false fronts"—meaning there is no provider at the designated address—allow it to quickly initiate revocations of those providers' billing privileges. (See app. II for more information on ZPICs' actions, including by provider type.)

ZPICs coordinate with law enforcement entities on ZPIC-initiated and other investigations, resulting in additional savings to Medicare and other results. In 2012, ZPICs reported that law enforcement entities accepted more than 130 new cases from them, with HHS OIG as the primary entity accepting the cases, followed by the FBI. In addition, ZPICs reported completing almost 1,800 requests for information for cases initiated by law enforcement and almost 700 for cases that had been initiated by ZPICs, primarily for data analysis. ZPICs also reported that, as a result of their cases being accepted and prosecuted by law enforcement, convicted providers were ordered to pay almost $80 million in court-determined fines, settlements, and/or restitutions. Cases can also result in prison sentences and other actions, though CMS does not consistently track those outcomes. ZPICs are to track information on the results of their cases in FID, but the system contains few outcomes. CMS officials said that they are aware of this issue and have taken steps to both improve ZPICs' use of FID and integrate the system with CMS ARTS and other systems to improve the data in FID. As of August 2013, CMS officials reported that the agency was testing the integration of the systems and expected the integration to be completed by late 2013.

According to CMS, ZPICs are to take immediate action to protect Medicare funds, but CMS may be missing opportunities for additional savings to Medicare because the agency lacks information on the timeliness of certain ZPIC actions. ZPIC officials reported taking actions and preventing potentially fraudulent payments before they were made, in line with CMS fraud strategies, and CMS ARTS data show ZPICs implementing some aspects of these strategies. For example, ZPIC officials reported focusing on prepayment reviews of claims—preventing potentially fraudulent payments—and 2012 CMS ARTS data showed that, of the providers whom ZPICs reviewed in 2012, almost five times as many had their claims reviewed on a prepayment basis rather than a postpayment basis.
However, CMS does not track information on the swiftness of these actions, such as the length of time between a ZPIC’s receipt of a complaint about a suspect provider and the ZPIC’s visit to that provider, or between identifying a potentially fraudulent provider and initiating an administrative action. Federal internal control standards state that agencies’ management should have information on performance relative to established objectives so that actual performance can be continually compared against goals and differences can be analyzed. Because CMS does not have information on ZPICs’ timeliness for these types of activities, the agency cannot benchmark any changes in timeliness or measure the effectiveness of its strategies, such as whether ZPICs are limiting unnecessary losses to Medicare from suspect providers continuing to receive potentially fraudulent Medicare payments while awaiting investigative or administrative actions. Reactive sources—primarily complaints—were the major source of new ZPIC investigations in 2012, accounting for almost 90 percent of the almost 5,000 new investigations that year. (See fig. 4 for the sources of ZPIC investigations.) In 2012, ZPICs received almost 5,000 complaints, 45 percent of which were from MACs and over 50 percent from other sources, primarily the HHS OIG hotline, as reported by ZPIC officials. Proactive projects and FPS each accounted for less than 10 percent of investigations. Examples of proactive projects include analyzing data to identify spikes—large, rapid increases—in providers’ billing patterns; aberrant providers, such as those with unusual billing patterns; and schemes related to stolen beneficiary identities. ZPIC officials reported that their proactive data analysis projects are valuable because they find zone-specific fraud or new fraud schemes that reactive sources or FPS may not identify. For example, one ZPIC that covers multiple frontier states conducted a proactive project related to critical access hospitals, 40 percent of which are in that ZPIC’s geographic zone. ZPIC officials reported that as a result of this project, they identified overpayments to several hospitals that had opened new psychiatric units, as well as opportunities for education to improve patient care. (See app. II for more information on the sources of ZPIC investigations.) Although ZPIC officials previously reported issues with the quality of leads from FPS, as well as a decline in the number of proactive projects as a result of increased work to address FPS leads, officials have since reported improvements in FPS and their ability to address leads from the system. For example, officials from one ZPIC reported that the leads from FPS have improved and that the zone developed a new process for investigating those leads, thereby improving results. CPI officials reported that they will continue to direct ZPICs to investigate leads from proactive and reactive sources, as well as FPS, noting that the most successful ZPICs are those that can effectively address leads from all three categories. 
Based on CMS’s annual reviews, five of the six operating ZPICs were eligible for some portion of their contracts’ available award fees, and ZPICs received almost 70 percent of all fees for their most recent periods of performance.the annual reviews—elements that measure aspects of quality of service, cost control, business relations, and timeliness of certain activities— ranged from satisfactory to exceptional, meeting the award fee eligibility requirement of at least a satisfactory rating in all four of these elements. CMS awarded the five eligible ZPICs about two-thirds of the available award fees—$1.3 million out of $1.9 million—in the ZPICs’ most recent contract years based on ZPIC performance both on quality-of-service measures in the annual reviews and achievement of their Award Fee Administrative Action Plan goals. CMS officials reported that they assigned the majority of available award fee amounts—60 percent—to the quality-of-service measures in the annual evaluations because quality is the most important element of ZPICs’ work. CMS apportions the 60 percent of award fee amounts for quality of service across multiple The five ZPICs’ ratings for the elements considered in assessment criteria. Table 4 lists the quality-of-service measures for which ZPICs could earn award fees. Among the highest-value elements are how well ZPICs prioritize and document investigations, conduct medical reviews, and analyze data. ZPICs’ Award Fee Administrative Action Plan goals varied by ZPIC, and included goals such as developing a project to identify and prevent phantom provider schemes and improving the timeliness of initiating and implementing payment suspensions. CMS officials said that this portion of the award fee is intended to encourage ZPICs to develop more innovative ways to take administrative actions. CMS follows some best practices for its oversight of ZPICs, but does not clearly link ZPIC performance to agency performance measures and goals. The award fee evaluations allow CMS to assess key elements of ZPICs’ work, which follows federal best practices. Federal standards state that performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), or the results of those products and services (outcomes). CMS’s measures evaluate ZPICs’ processes and outputs, but not their outcomes. Moreover, these performance measures do not connect ZPIC work to agency performance measures that are linked to its goals, which is another best practice. One way that agencies examine the effectiveness of their programs is by measuring performance as required by the Government Performance and Results Act of 1993 (GPRA), as amended by the GPRA Modernization Act of 2010. One of CMS’s GPRA goals is to fight fraud and work to eliminate improper payments. Within that goal are two Medicare fee-for-service performance measures for determining progress toward that goal, and CMS officials reported that ZPICs are the primary actors for one of the measures: increasing the percentage of providers who are identified as high risk against whom CMS takes administrative actions. CMS’s fiscal year 2014 target for this performance measure is to increase the percentage of administrative actions taken for these high-risk providers from 27 percent to 36 percent. 
CMS follows some best practices for its oversight of ZPICs, but it does not clearly link ZPIC performance to agency performance measures and goals. The award fee evaluations allow CMS to assess key elements of ZPICs' work, which follows federal best practices. Federal standards state that performance measures may address the type or level of program activities conducted (process), the direct products and services delivered by a program (outputs), or the results of those products and services (outcomes). CMS's measures evaluate ZPICs' processes and outputs, but not their outcomes. Moreover, these performance measures do not connect ZPIC work to agency performance measures that are linked to its goals, which is another best practice. One way that agencies examine the effectiveness of their programs is by measuring performance as required by the Government Performance and Results Act of 1993 (GPRA), as amended by the GPRA Modernization Act of 2010. One of CMS's GPRA goals is to fight fraud and work to eliminate improper payments. Within that goal are two Medicare fee-for-service performance measures for determining progress toward that goal, and CMS officials reported that ZPICs are the primary actors for one of the measures: increasing the percentage of high-risk providers against whom CMS takes administrative actions. CMS's fiscal year 2014 target for this performance measure is to increase the percentage of administrative actions taken for these high-risk providers from 27 percent to 36 percent.

Federal standards state that entities should link performance measurements to goals and objectives, and previous GAO work found that leading organizations try to link the goals and performance measures for each organizational level to successive levels and ultimately to the organization's strategic goals. However, none of the ZPICs' performance measures link to the agency's measure of increasing the percentage of administrative actions taken against high-risk providers, or to the other Medicare fee-for-service program integrity performance measure of reducing improper payments. Some ZPICs had goals in their Award Fee Administrative Action Plans related to the agency's performance measures—for example, one ZPIC set a goal of increasing the value of its referrals of overpayments, which could reduce improper payments—but these were zone-specific and do not allow CMS to evaluate the overall impact of ZPICs on agency measures and, ultimately, goals. CMS officials told us in April 2013 that they are revising the ZPIC Award Fee Plans, but based on a draft of the revisions and discussions with CMS officials, the revised plans will continue to lack measures related to outcomes and will not tie performance to agency program integrity measures or goals. Although measuring outcomes can be difficult and setting targets can be problematic, CMS could explicitly link ZPICs' work to the agency's progress toward meeting its performance measures and goals. Specifically, CMS officials reported that they are using FPS to identify and track high-risk providers for the performance measure of increasing the number of administrative actions taken against those providers. Although ZPICs are the primary users of FPS and have primary responsibility for initiating administrative actions, CMS does not link ZPICs' use of FPS to that measure, hindering the agency's ability to effectively oversee its progress toward meeting its goal of fighting fraud and working to eliminate improper payments.

Given the vulnerability of the Medicare program to fraud and the lack of reliable estimates of the extent of fraud in the program, determining how well CMS is carrying out its fraud prevention strategy is a vital, if challenging, task. ZPICs, which are central to that strategy, reported that their efforts have yielded positive results, such as savings greater than their contract costs and multiple other actions that helped protect Medicare from potentially fraudulent providers, such as referring suspect providers to law enforcement. Yet little is known about how expeditiously ZPICs take action to save Medicare funds—an important consideration given that the longer a fraud scheme operates, the greater the potential financial losses. As a result, CMS would benefit from enhancing its collection and evaluation of information on the timeliness of ZPICs' actions, including information on whether new tools or strategies have increased the speed with which ZPICs investigate potentially fraudulent providers or initiate administrative actions. In addition, as CMS attempts to achieve its agencywide program integrity goal of fighting fraud and eliminating improper payments in the Medicare program, it would benefit from knowing how ZPICs are contributing to efforts to achieve this goal.
By linking the evaluation of ZPICs' work to the agency's program integrity performance measures—in particular the performance measure focused on administrative actions, which are a significant portion of ZPICs' work—CMS would have greater assurance that its ZPIC activities are appropriately supporting CMS fraud prevention efforts.

To help ensure that CMS's fraud prevention activities are effective and that CMS is comprehensively assessing ZPIC performance, the Administrator of CMS should take the following two actions:

1. Collect and evaluate information on the timeliness of ZPICs' investigative and administrative actions, such as how soon investigations are initiated after ZPICs identify potential fraud and how swiftly ZPICs initiate administrative actions after identifying potentially fraudulent providers.

2. Develop ZPIC performance measures that explicitly link their work to the agency's Medicare fee-for-service program integrity performance measures and targets for its GPRA goal of fighting fraud and working to eliminate improper payments.

We requested comments from HHS, but none were provided. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To determine Zone Program Integrity Contractors' (ZPIC) contract costs and how ZPICs use those funds, we examined data from CMS's Analysis, Reporting, and Tracking System (ARTS), an online system ZPICs use to submit invoices and report workload statistics and which CMS uses to track and analyze ZPIC workload, performance, and production. Specifically, we examined aggregated ZPIC invoices and workload statistics that specified how ZPICs allocate their funds, and interviewed ZPIC officials to confirm these data. We also reviewed the task orders outlining the scope of each zone's work and obtained data from CMS on ZPIC contract amounts. To describe the results of ZPIC Medicare fee-for-service investigations, we examined data from CMS ARTS and the Fraud Investigation Database (FID), a secure system that contains details related to Medicare fraud and abuse investigations. We analyzed calendar year 2012 data for the six operating ZPICs on the sources of their investigations, the numbers of administrative actions taken, and dollar values of relevant actions. We reviewed CMS guidance on how ZPICs should prioritize their work and how to conduct investigations. We interviewed officials from the CMS Center for Program Integrity about how they review and track ZPIC administrative actions and their process of approval for actions, such as revocations. We interviewed officials from all six ZPICs to learn about their internal guidance on prioritizing and conducting their work, how they determine when to take administrative actions, and how they decide to refer a case to law enforcement.
To examine the results of CMS’s evaluation of ZPICs’ performance and aspects of CMS’s evaluation practices, we reviewed the following: each ZPIC’s most recent Contractor Performance Assessment Report; each ZPIC’s most recently completed Award Fee Administrative Action Plan, which describes the ZPIC’s plans to improve administrative actions and how it will earn its award fee; and data from CMS on the percentage and amount of each zone’s award fee. We reviewed internal CMS guidance on how to evaluate ZPICs’ performance, as well as federal standards and best practices for measuring performance. We also interviewed CMS contracting and other officials to learn about the review process and how such guidance is applied, and to discuss changes to ZPIC evaluations and performance measures. We also interviewed ZPIC officials to learn more about how ZPICs determine their Award Fee Administrative Action Plan goals and how they evaluate themselves on these goals and other work. We assessed the reliability of the data we obtained from CMS ARTS and FID through interviews with agency officials and users, system demonstrations, and, in the case of CMS ARTS, direct use of the system. We shared with CMS and the relevant ZPIC any errors we identified through reviews of the data and comparisons with other sources to obtain corrected information. We found the data sufficiently reliable for the purposes of this review. We conducted this performance audit from October 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table shows selected ZPIC activities and results reported by ZPICs in CMS ARTS. Durable medical equipment (DME) is covered under Medicare Part B, and home health and hospice are covered under Part A, but ZPICs report data in CMS ARTS as monthly aggregates by Part A, Part B, DME, and home health and hospice. CMS ARTS data do not allow us to identify particular provider types, such as whether a Part B provider was a family physician or podiatrist. In addition to the contact named above, Karen Doran, Assistant Director; Matthew Gever; Elizabeth Morrison; Eden Savino; Kristin Van Wychen; and Jennifer Whitworth made key contributions to this report.
GAO has designated Medicare as a high-risk program, in part because its size and complexity make it particularly vulnerable to fraud. To help detect and prevent potential Medicare fraud, CMS--the agency within the Department of Health and Human Services (HHS) that administers the Medicare program--contracts with ZPICs. These contractors are to identify potential fraud, investigate it thoroughly and in a timely manner, and take swift action, such as working to revoke suspect providers' Medicare billing privileges and referring potentially fraudulent providers to law enforcement. GAO examined (1) ZPIC contract costs and how ZPICs use those funds, (2) the results of ZPICs' work, and (3) the results of CMS's evaluations of ZPICs' performance and aspects of CMS's evaluation practices. To do this, GAO examined ZPIC funding, contracts, and related documents; data on ZPICs' workloads, investigations, and results; and CMS evaluations of ZPICs as well as federal standards for performance measurement. GAO also interviewed CMS and ZPIC officials. The Centers for Medicare and Medicaid Services (CMS) paid its Zone Program Integrity Contractors (ZPIC) about $108 million in 2012. ZPICs reported spending most of this funding on fraud case development, primarily for investigative staff, who in 2012 reported conducting about 3,600 beneficiary interviews, almost 780 onsite inspections, and reviews of more than 200,000 Medicare claims. ZPICs reported that their actions resulted in more than $250 million in savings to Medicare in calendar year 2012 from actions such as stopping payment on suspect claims. ZPICs also reported taking other actions to protect Medicare funds, including having more than 130 of their investigations accepted by law enforcement for potential prosecution, and working to stop more than 160 providers from receiving additional Medicare payments in 2012. However, CMS lacks information on the timeliness of ZPICs' actions--such as the time it takes between identifying a suspect provider and taking actions to stop that provider from receiving potentially fraudulent Medicare payments--and would benefit from knowing if ZPICs could save more money by acting more quickly. ZPICs generally received good ratings in annual reviews, with five of six eligible for incentive awards. CMS follows some best practices for ZPICs' oversight, but the agency does not clearly link ZPIC performance to agency program integrity goals. The majority of the measures CMS uses to evaluate ZPICs relate to the quality of their work because, according to CMS officials, quality is the most important element. However, evaluation of such measures, while a best practice, does not connect ZPIC work to agency performance measures. For example, CMS aims to increase the percentage of actions taken against certain high-risk Medicare providers--work central to ZPICs--but does not explicitly link ZPICs' work to the agency's progress toward that goal, another best practice that would allow the agency to better assess the ZPICs' support of CMS's fraud prevention efforts. GAO recommends that CMS collect and evaluate information on the timeliness of ZPICs' investigative and administrative actions, and develop ZPIC performance measures that explicitly link ZPICs' work to Medicare program integrity performance measures and goals. GAO requested comments from HHS on the draft report, but none were provided.
Coastal properties in the United States that lie on the Atlantic Ocean and the Gulf of Mexico are at risk of both flood and wind damage from hurricanes. One study put the estimated insured value of coastal property in states on these coasts at $7.2 trillion as of December 2004, and populations in these areas are growing. Property owners can obtain insurance against losses from wind damage through private insurance markets or, in high-risk coastal areas in some states, through state wind insurance programs. Flood insurance is generally excluded from such coverage, but property owners can obtain insurance against losses from flood damage through NFIP, which was established by the National Flood Insurance Act of 1968. As we have reported, insurance coverage gaps and claims uncertainties can arise when coverage for hurricane damage is divided among multiple policies because the extent of coverage under each policy depends on the cause of the damages, as determined through the claims adjustment process and the policy terms that cover a particular type of damage. This adjustment process is complicated when a damaged property has been subjected to a combination of high winds and flooding and evidence at the damage scene is limited. Other claims concerns can arise on such properties when the same insurer serves as both the NFIP’s Write Your Own (WYO) insurer and the property-casualty (wind) insurer. In such cases, the same company is responsible for determining damages and losses to itself and to the NFIP, creating an inherent conflict of interest. H.R. 3121, the Flood Insurance Reform and Modernization Act of 2007, set an effective date for its proposed flood and wind insurance program of June 28, 2008. A version of this bill, S. 2284, was introduced in the Senate in November of 2007, but this version did not include provisions that would establish a federal flood and wind program. As of March 2008, no additional action had been taken on S. 2284. In a September 26, 2007, Statement of Administration Policy regarding H.R. 3121, the Executive Office of the President stated that the Administration strongly opposes the expansion of NFIP to include coverage for windstorm damage. H.R. 3121’s provisions include the following:

In order for individual property owners to be eligible to purchase federal flood and wind coverage, their communities must have adopted adequate mitigation measures that the Director of FEMA finds are consistent with the International Code Council’s building codes for wind mitigation.

The Director of FEMA is expected to carry out studies and investigations to determine appropriate wind hazard prevention measures, including laws and regulations relating to land use and zoning; establish criteria based on this work to encourage adoption of adequate state and local measures to help reduce wind damage; and work closely with and provide any technical assistance to state and local governmental agencies to encourage the application of these criteria and the adoption and enforcement of these measures.

Property owners who purchase a combined federal flood and wind insurance policy cannot also purchase an NFIP flood insurance policy.

Federal flood and wind insurance will cover losses only from physical damage from flood and windstorm (including hurricanes, tornadoes, and other wind events), but no distinction between flood and wind damage need be made in order for claims to be paid.
Premium rates are to be based on risk levels and accepted actuarial principles and will include all operating costs and administrative expenses.

Residential property owners can obtain up to $500,000 in coverage for damages to any single-family structure and up to $150,000 in coverage for damage to contents and any necessary increases in living expenses incurred when losses from flooding or windstorm make the residence unfit to live in.

Nonresidential property owners can obtain up to $1,000,000 in coverage for damages to any single structure and up to $750,000 in coverage for damage to contents and for losses resulting from an interruption of business operations caused by damage to, or loss of, the property from flooding or windstorm.

If at any time FEMA borrows funds from the Treasury to pay claims under the federal flood and wind program, until those funds are repaid the program may not sell any new policies or renew any existing policies.

Over 20,000 communities across the United States and its territories participate in the NFIP by adopting and agreeing to enforce state and community floodplain management regulations to reduce future flood damage. In exchange, the NFIP makes federally backed flood insurance available to homeowners and other property owners in these communities. Homeowners with mortgages from federally regulated lenders on property in communities identified to be in special high-risk flood hazard areas are required to purchase flood insurance on their dwellings. Optional, lower-cost coverage is also available under the NFIP to protect homes in areas of low to moderate risk. Premium amounts vary according to the amount of coverage purchased and the location and characteristics of the property to be insured. When the NFIP was created, Congress mandated that it was to be implemented using “workable methods of pooling risks, minimizing costs, and distributing burdens equitably” among policyholders and taxpayers in general. The program aims to make reasonably priced coverage available to those who need it. The NFIP attempts to strike a balance between the scope of the coverage provided and the premium amounts required to provide that coverage and, to the extent possible, the program is designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than tax dollars. However, as we have reported before, the program, by design, is not actuarially sound because Congress authorized subsidized insurance rates for some policies to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build reserves to meet the long-term future expected flood losses. FEMA has statutory authority to borrow funds from the Treasury to keep the NFIP solvent. In 2005, Hurricanes Katrina, Rita, and Wilma had a far-reaching impact on NFIP’s financial solvency. Legislation incrementally increased FEMA’s borrowing authority from a total of $1.5 billion prior to Hurricane Katrina to $20.8 billion by March 2006, and as of December 2007, FEMA’s outstanding debt to the Treasury was $17.3 billion. As we have reported, it is unlikely that FEMA can repay a debt of this size and pay future claims in a program that generates premium income of about $2 billion per year. To implement a combined federal flood and wind insurance program, FEMA would need to complete a number of steps, similar to those undertaken to establish the NFIP, which would require the agency to address several challenges.
First, FEMA would need to undertake studies in order to determine appropriate building codes that communities would be required to adopt in order to participate in the combined program. Second, FEMA would need to adapt existing processes under the NFIP flood program to accommodate the addition of wind coverage. For example, FEMA could leverage current processes under the WYO program and the Direct Service program to perform the administrative functions of selling and servicing the combined federal flood and wind insurance policy. Third, to set wind rates, FEMA would have to create a rate-setting structure, which would require contractor support. Fourth, promoting the combined federal flood and wind insurance program in communities would require that FEMA staff raise awareness of the combined program’s availability and coordinate enforcement of the new building codes. Finally, FEMA is already facing a $17.3 billion deficit and several management and oversight challenges associated with the NFIP; balancing those demands while expanding staffing capacity, adjusting existing administrative, operational, monitoring, and oversight processes, and establishing new ones to accommodate wind coverage could further strain FEMA’s ability to effectively manage the NFIP. H.R. 3121 would require FEMA to determine appropriate wind mitigation measures that communities would be required to adopt in order to participate in the combined flood and wind program. For several reasons, this could be a challenging process. First, FEMA would have to determine how to most effectively integrate a new federal wind mitigation standard with existing building codes for wind resistance. As we discussed in a previous report, as of January 2007, the majority of states had adopted some version of a model building code for commercial and residential structures. However, some local jurisdictions within states had not adopted a statewide model code and had modified the codes to reflect local hazards. Standards determined by FEMA to be appropriate for participation in the combined federal flood and wind program could conflict with those currently used by some states and local jurisdictions, and resolving any such differences could be challenging. Second, as it did with the NFIP, FEMA would have to address constitutional issues related to federal regulation of state and local code enforcement. Further, FEMA would need to establish regulations similar to those governing the flood program to allow for appeals by local jurisdictions, a process that could be time-intensive. Third, as we have noted in a previous report, reaching agreement with communities on appropriate mitigation measures can be challenging, as communities often resist changes to building standards and zoning regulations because of the potential impact on economic development. For example, community goals such as housing and promoting economic development may be higher priorities for the community than formulating mitigation regulations that may include more rigorous development regulations and building codes. Fourth, according to FEMA officials, the agency would have to resolve potentially conflicting wind and flood standards. For example, they told us that flood building standards require some homes to be raised off the ground, but doing so can increase a building’s susceptibility to wind damage because the buildings are then at a higher elevation.
While some of the NFIP’s current processes could be leveraged to implement a combined federal flood and wind program, they would need to be revised, an action that could pose further challenges for FEMA. According to FEMA officials, both the NFIP’s WYO and Direct Service programs could be used, with some revisions, to sell and underwrite the combined federal flood and wind insurance policy. The provision within H.R. 3121 that prevents FEMA from selling new policies or renewing existing policies if it borrows funds to pay claims would necessitate that the agency segregate funds collected from premiums under the new combined program and the flood program to ensure that it has sufficient funds to cover all future costs without borrowing, especially in catastrophic loss years. While the NFIP Community Rating System (CRS), a program that uses insurance premium discounts to incentivize flood damage mitigation activities by participating communities, could be adapted for combined federal flood and wind insurance coverage, it would not be required for the new program to begin operations because community participation in CRS is voluntary. As part of the WYO program, private property-casualty insurers are responsible for selling and servicing NFIP policies, including performing the claims adjustment activities to assess the cause and extent of damages. FEMA is responsible for managing the program, including establishing and updating NFIP regulations, analyzing data to determine flood insurance rates, and offering training to insurance agents and adjusters. In addition, FEMA and its program contractor are responsible for monitoring and overseeing the quality of the performance of the WYO insurance companies to ensure that NFIP is administered properly. These duties under the WYO program would be amplified with the addition of wind coverage and, according to FEMA officials, would require FEMA to expand the staffing capacity to include those with wind peril insurance experience. In addition, FEMA would need to determine whether existing data systems would be adequate to manage an increased number of policies and track losses for the new program. FEMA could face several challenges in expanding the WYO program. First, program staff would need to determine how to manage and mitigate the potential conflict of interest for those companies in the WYO program that could be selling both their own wind coverage and the combined federal flood and wind coverage. Current WYO arrangements with the NFIP prevent WYO insurers from offering flood-only coverage of their own unless it supplements NFIP coverage limits or is part of a larger policy in which flooding is one of several perils covered. H.R. 3121, however, does not appear to prevent companies that might sell a combined federal flood and wind policy from also selling wind coverage, which may be part of a homeowners policy. Without this restriction, a conflict of interest could develop because insurers would have an incentive to sell the combined federal policy to their highest-risk customers and their own policies to lower-risk customers. FEMA officials agreed that this would be an inherent conflict and noted that it would be difficult to prevent this from occurring without precluding the WYO insurers from selling their wind policies.
Moreover, according to a WYO insurer with whom we spoke, attempting to eliminate the conflict by either restricting a WYO insurer from selling its own wind coverage or requiring it to sell both flood-only and the combined policy could discourage participation in the WYO program. As noted in a previous report, private sector WYO program managers have said that while NFIP has many positive aspects, working with it is complex for policyholders, agents, and adjusters. According to another WYO insurer we spoke with, adding wind coverage could increase these complexities. FEMA officials told us that the agency could also sell and service the combined flood and wind insurance policies through its Direct Service program, which is designed for agents who do not have agreements or contracts with insurance companies that are part of the WYO program. According to FEMA officials, the Direct Service program of NFIP currently writes about 3 percent of the more than 5.5 million NFIP policies sold. Further, as with the WYO program, FEMA may have to contend with an inherent conflict of interest and expand staffing capacity in the Direct Service program, including adding staff with wind peril insurance expertise, to administer, monitor, and oversee the sale of the new product. H.R. 3121 calls for FEMA to establish comprehensive criteria designed to encourage communities to participate in wind mitigation activities. As previously noted, the CRS program would be an important means of incentivizing wind mitigation activities in communities, but would not be necessary for the combined federal flood and wind insurance program to operate. According to FEMA, while the CRS process could be adapted for wind coverage, the agency would have to assess current practices, evaluate standards, and devise an appropriate rating system, a developmental process similar to the one that occurred for the NFIP. FEMA officials told us that it took approximately 5 years to develop the program, during which time extensive evaluation, research, and concept testing occurred. They estimate that replicating a similar approach for wind hazard would require at least the same number of years if not more, recognizing the complexities of current insurance industry experience associated with the wind peril and the complexities involved with evaluating current building code practices related to wind and other wind mitigation techniques. Establishing a new rate-setting structure for a combined federal flood and wind insurance program could pose another challenge for FEMA. According to several insurers and modeling consultants, wind modeling is the accepted method of determining wind-related premium rates, and FEMA does not have the in-house wind-modeling and actuarial expertise needed to develop and interpret wind models and translate the model’s output into premium rates. They told us that modeling has several advantages in rate setting over methods that place greater emphasis on loss data from past catastrophic events, such as the method used by NFIP to determine flood insurance premium rates. For example, modeling uses wind speed maps and other data to account for the probability that properties in a certain geographic area might experience losses in the future, regardless of whether those properties have experienced losses in the past.
In addition, according to a modeling expert, wind modeling incorporates mitigation efforts at the property level because it can estimate the potential reductions in damage without waiting to see how the efforts actually affect losses during a storm or other event. While several modeling companies already provide wind modeling to private sector insurers and state wind insurance programs, it is not clear how much such services would cost FEMA. And while FEMA officials told us that the agency would have to contract out for wind-modeling services because it lacks the necessary wind and actuarial expertise, the agency could benefit from at least some in-house expertise in these areas in order to oversee the contractors that would provide these services. FEMA would also need to determine to what extent it might need to use wind speed maps in its rate determination process. Flood maps are currently used in the NFIP to identify areas that are at risk of flooding and thus the areas where property owners would benefit from purchasing flood insurance. If FEMA determined that wind maps were necessary, it would then need to determine whether the agency could develop such maps on its own or whether contracting with wind-modeling experts would be required, and what the cost of these efforts might be. Implementing the combined program would require FEMA to promote participation among communities and coordinate enforcement, a task that could be challenging for FEMA for two reasons. First, FEMA would need to manage community and state eligibility to participate in the program. The proposal calls for FEMA to work closely with and provide any necessary technical assistance to state, interstate, and local governmental agencies to encourage the adoption of windstorm damage mitigation measures by local communities and ensure proper enforcement. While communities themselves are responsible for enforcing windstorm mitigation measures, FEMA officials told us they would have to coordinate with existing code groups to provide technical assistance, training, and guidance to local officials, and establish a wind mitigation code enforcement compliance program that would monitor, track, and verify community compliance with wind mitigation codes. According to an official at an organization representing flood hazard specialists, some communities are very good at ensuring compliance, while others are not. For example, in some larger communities, a city or county may have experts with vast experience in enforcing building codes and land use standards, but in other communities, a local clerk or city manager with little or no experience may be responsible for compliance. According to FEMA, the effectiveness of mitigation measures is entirely dependent on enforcement at the local level. Proper enforcement would require that resources be in place to pay for and train qualified inspectors and building department staff. Second, FEMA would need to generate public awareness of the availability of wind insurance through the NFIP. Efforts to adopt new mitigation activities and strategies have been constrained by the general public’s lack of awareness and understanding about the risk from natural hazards. To address this issue in NFIP, FEMA launched an integrated mass marketing campaign called FloodSmart to educate the public about the risks of flooding and to encourage the purchase of flood insurance.
As we have noted in a previous report, according to FEMA officials, in the little more than 2 years after the contract began in October 2003, net policy growth was a little more than 7 percent and policy retention improved from 88 percent to 91 percent. Educating the public on a new combined federal flood and wind insurance program and promoting community participation could demand a similar level of effort by FEMA. Implementing a combined flood and wind insurance program and overseeing the requisite contractor-supported services could place additional strain on FEMA, which is already faced with NFIP management and oversight challenges and a $17.3 billion deficit that it is unlikely to be able to repay. In March 2006, we placed the NFIP on our high-risk list because of its fiscal and management challenges. In addition to the agency’s current debt owed to the Treasury, FEMA is challenged with providing effective oversight of contractors. For example, as previously reported, FEMA faces challenges in providing effective oversight of the insurance companies and thousands of insurance agents and claims adjusters that are primarily responsible for the day-to-day process of selling and servicing flood insurance policies through the WYO program. In FEMA’s claims adjustment oversight, the agency cannot be certain of the quality of NFIP claims adjustments that allocate damage to flooding in cases involving damage caused by a combination of wind and flooding. Expanding the WYO program to include combined flood and wind policies could increase the NFIP’s oversight responsibilities as well as make resolving existing management challenges more difficult. In addition, FEMA faces ongoing challenges in working with contractors and state and local partners—all with varying technical capabilities and resources—in its map modernization efforts, which are designed to produce accurate digital flood maps. Ensuring that map standards are consistently applied across communities once the maps are created will also be a challenge. To the extent that FEMA uses wind speed maps under the combined program, the agency could face challenges similar to those currently faced by the NFIP’s flood-mapping program. New management challenges created by implementing a combined federal flood and wind program could make addressing these existing challenges even more difficult. According to FEMA officials, implementing a new flood and wind program is a process that would likely take several years and would require a doubling of current staff levels. Determining appropriate wind mitigation measures, adapting existing WYO and Direct Service processes for wind coverage, establishing a new rate-setting process, promoting community participation, and overseeing the combined program would all require additional staff and contractor services with the appropriate wind expertise. While the total cost of adding staff and hiring contractors with wind expertise is not clear, FEMA’s 2007 budget for NFIP salaries and expenses was about $38.2 million. Setting premium rates that would adequately reflect all expected costs without borrowing from the Treasury would require FEMA to make a number of sophisticated determinations. To begin with, FEMA would need to determine what those future costs are likely to be, a process that can be particularly difficult with respect to catastrophic losses.
Once FEMA has determined the expected future costs of the program, it would need to determine premium rates adequate to cover those costs, a challenging process in itself for several reasons. First, the rate would need to be sufficient to pay claims in years with catastrophic losses without borrowing funds from the Treasury. This determination could be particularly difficult because it is unclear whether the program might be able to purchase reinsurance, and because attempting to build up a sufficient surplus to pay for catastrophic losses would require high premium rates compared to the size of expected claims and an unknown number of years without larger-than-average losses, over which FEMA has no control. Second, rate setting would have to account for two factors: adverse selection, or the likelihood that the program would insure only the highest-risk properties, and potentially limited participation because of comparatively low coverage limits. Both of these factors would necessitate higher premium rates, which could make rate setting very difficult. Finally, although no distinction between flood and wind damage would be necessary for property owners to receive payment on claims, such a distinction would still be necessary for rate-setting purposes. The proposed flood and wind program would be required, by statute, to charge premium rates that were actuarially sound—that is, that were adequate to pay all future costs. As a result, FEMA would need to determine how much the program could be required to pay, including in years with catastrophic losses, and use this amount in setting rates, as is done by private sector insurers. H.R. 3121 does not specify how a federal flood and wind program would pay for catastrophic losses beyond charging an adequate premium rate. According to insurers and industry consultants we spoke with, making such determinations can be difficult and involve balancing the ability to pay extreme losses with the ability to sell policies at prices people will pay. For example, insurers could charge rates that would allow them to pay claims on the type of event they would expect to occur only very rarely, but the resulting rates could be prohibitively expensive. On the other hand, charging premium rates that would enable an insurer to pay losses on events of limited severity could allow them to sell policies at a lower price, but could also result in insufficient funds to pay losses if a larger loss were to occur. Insurers can come to different conclusions about the appropriate level of catastrophic losses on which to base their premium rates. For example, one state regulator said that some private sector insurers in his state used an event he believes has about a 0.4 percent chance of occurring in a given year, but that the state wind insurance program based its rates on events he believes have about a 1 percent chance of occurring. For comparison, one consultant we spoke with believed that an event of the severity of Hurricane Katrina had about a 7 percent chance of occurring in a given year.
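Annual occurrence probabilities like those quoted above are often expressed as return periods, and even a low annual probability implies substantial odds over a multiyear horizon. The sketch below makes the conversion explicit; the 30-year horizon and the year-to-year independence assumption are illustrative simplifications, not part of the regulator's or consultant's analysis.

```python
# Converts the annual occurrence probabilities cited above into return
# periods and multiyear odds. The 30-year horizon and the assumption of
# year-to-year independence are illustrative simplifications.

def return_period(annual_prob: float) -> float:
    """Return period in years for an event with the given annual probability."""
    return 1.0 / annual_prob

def prob_within_horizon(annual_prob: float, years: int) -> float:
    """Chance of at least one occurrence over `years` independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

for p in (0.004, 0.01, 0.07):  # 0.4, 1, and 7 percent, as quoted in the text
    print(f"{p:.1%} annual chance: ~1-in-{return_period(p):.0f}-year event; "
          f"{prob_within_horizon(p, 30):.0%} chance of occurring within 30 years")
```

Under these assumptions, a 1 percent annual-chance event, for example, has roughly a 26 percent chance of occurring at least once over 30 years.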
Determining the losses the program might be required to pay, especially in the event of a catastrophe, could be especially important for FEMA. This is because if an event occurs that generates losses beyond an amount the program is prepared to pay, the program would be forced to borrow funds to pay those losses, triggering a borrowing restriction that would force it to stop renewing or selling new policies, effectively ending the program. On the other hand, premium rates high enough to pay losses resulting from the most severe catastrophic events might make the program prohibitively expensive for property owners. Determining expected losses for the first year of the program would be complicated by the fact that FEMA would not know what type of properties would be insured. Private sector insurers set their premium rates using models that take into account several variables, including the number of properties to be insured, the risks associated with the properties’ location, and the characteristics of the properties themselves. This information is used in the wind-modeling process to create a variety of scenarios that result in losses of differing severity that can then be used to create possible premium rates. Existing insurers have established portfolios of policies and can use data from these portfolios in the modeling process. A new combined federal flood and wind insurance program, according to wind-modeling companies we spoke with, would need to develop a hypothetical portfolio, making assumptions about how many policies it might sell and where, as well as the characteristics of the properties that might be insured. Such assumptions can be challenging because the number and type of properties insured will, in turn, be affected by the price of coverage. Once FEMA determines the severity of catastrophic losses a federal program would be required to pay, the agency would need to determine a premium rate that is adequate to pay such losses. This determination could be particularly difficult with regard to paying catastrophic losses—something that could occur in any year given the volatility of wind and flood losses—because of the borrowing restriction in H.R. 3121. Because it would be difficult, if not impossible, to repay any borrowed funds without the premium income from new or existing policies, this restriction, if invoked, could end the program. This would effectively require the program to charge premium rates sufficient to pay catastrophic losses without borrowing. Private sector insurers generally ensure their ability to pay catastrophic losses by purchasing reinsurance, and include the cost of this coverage in the premium rate they charge. However, reinsurance may not be an option for FEMA. Some reinsurance industry officials we spoke with said that the potential for the program to insure a large number of only high-risk properties could create a risk of high losses that could make reinsurers reluctant to offer coverage. Another option would be to charge a premium rate high enough to build up a surplus adequate to pay for catastrophic losses. However, such a rate would likely be high, and it would require an unknown number of years of operations with lower-than-average losses to build up a sufficient surplus, over which FEMA has no control. For example, a loss that exceeds the program’s surplus could occur in the early years, or even the first year, of the program’s operations, potentially forcing the program to borrow funds to pay losses and effectively ending the program.
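The sketch below illustrates, in deliberately simplified form, the rate-setting logic described above: simulate annual losses over a hypothetical portfolio, derive an indicated premium from the expected loss plus a loading, and check how often a single year's losses would exceed premium income. Every figure in it (the portfolio size, the toy event catalog, and the 40 percent loading) is an invented assumption, not output from an actual catastrophe model.

```python
# A deliberately simplified sketch of portfolio-based rate setting. The
# portfolio size, event catalog, and loading are invented assumptions; real
# catastrophe models use detailed event sets, wind speed data, and
# property-level characteristics.
import random

random.seed(1)

N_PROPERTIES = 10_000        # hypothetical portfolio (assumption)
AVG_INSURED_VALUE = 650_000  # H.R. 3121 residential limit, used as a proxy
N_YEARS = 10_000             # simulated years

# Toy event catalog: (annual probability, fraction of portfolio value lost).
EVENT_CATALOG = [(0.10, 0.002), (0.05, 0.006), (0.01, 0.02)]

portfolio_value = N_PROPERTIES * AVG_INSURED_VALUE
annual_losses = [
    sum(portfolio_value * frac
        for prob, frac in EVENT_CATALOG if random.random() < prob)
    for _ in range(N_YEARS)
]

expected_annual_loss = sum(annual_losses) / N_YEARS
LOADING = 1.4  # assumed 40 percent loading for expenses and risk margin
premium_per_policy = expected_annual_loss * LOADING / N_PROPERTIES
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Indicated premium per policy: ${premium_per_policy:,.0f}")

# How often a single year's losses exceed the premium collected -- the
# situation that, absent a surplus or reinsurance, would force borrowing
# and trigger H.R. 3121's restriction on selling or renewing policies.
premium_income = premium_per_policy * N_PROPERTIES
shortfall = sum(loss > premium_income for loss in annual_losses) / N_YEARS
print(f"Share of years with losses exceeding premium income: {shortfall:.1%}")
```

With these toy numbers, losses exceed premium income in roughly one year in seven even though the premium carries a 40 percent loading, illustrating why lumpy catastrophe losses make a no-borrowing constraint so demanding.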
In determining a premium rate for a federal flood and wind program that was adequate to pay all future costs, FEMA would also need to take into account the adverse selection—the tendency to insure primarily the highest risks—and limited participation the program would likely experience. These factors can make rate setting difficult because they can both lead to increased premium rates, which can, in turn, lead to further adverse selection, limited participation, and the need for additional rate increases. For several reasons, a federal flood and wind program would probably insure mostly high-risk properties. First, a policy that combines flood and wind insurance would likely be of interest only to property owners who perceived themselves to be at significant risk of both flood and wind damage. Because consumers tend to underestimate their risk of catastrophic loss, those property owners who saw the need for a combined flood and wind policy would likely be those who knew they faced a high risk of loss. In addition, because the policy would include coverage for damage from flooding, those buying it would probably already have flood insurance, which is currently purchased almost exclusively in high-risk areas where lenders require it. As shown in figure 1, areas where there have been multiple floods as well as hurricanes and where consumers are most likely to see a need for both flood and wind coverage are primarily limited to the eastern and Gulf coasts. Second, a combined federal flood and wind insurance policy is likely to be of interest only in areas where state insurance regulators have allowed insurers to exclude coverage for wind damage from homeowners policies that they sell. According to several insurance industry officials we spoke with, in order to help protect consumers, state insurance regulators generally prohibit insurers from excluding wind damage from homeowners policies. According to insurers we spoke with, insurers can profitably write homeowners policies that include wind coverage in most areas. Only in the coastal areas that are at the highest risk of hurricane damage have insurers asked for and received permission from state regulators to sell homeowners policies that exclude wind coverage. Property owners who already have wind coverage through their homeowners policies—generally those living outside the highest-risk coastal areas—would not be interested in a combined federal flood and wind insurance policy because they would already have wind coverage. Once again, then, property owners in high-risk coastal areas would be the ones most interested in purchasing a federal policy. A federal flood and wind insurance program would find itself in the same situation as state wind insurance programs that generally sell wind coverage only in areas where insurers are allowed to exclude it from homeowners policies. According to officials from the state wind programs we spoke with, their programs generally insure only the highest-risk properties. For several reasons, participation in a federal flood and wind program would probably be limited. First, a federal flood and wind insurance policy would likely cost more than purchasing a combination of flood insurance through the NFIP and wind insurance through a state wind insurance program, potentially limiting participation in the program. With respect to coverage for damages from flooding, while an estimated 24 percent of NFIP policyholders receive subsidized premium rates—with average subsidies of up to 60 percent—H.R. 3121 would require the new program to charge rates adequate to cover all future costs, potentially precluding any subsidies.
As a result, the flood-related portion of a federal flood and wind policy would cost more than an NFIP flood policy for any property owners currently receiving subsidized NFIP flood rates. With respect to the wind portion of the coverage, a number of state wind insurance programs typically do not charge rates that are adequate to cover all costs, so a policy from a federal program that did charge adequate rates would likely cost more than a state wind program policy. Property owners who are receiving subsidized NFIP rates and relatively low state wind insurance rates are unlikely to be willing to move to a new program that would be more expensive. Second, a federal flood and wind policy would have lower coverage limits than the flood and wind coverage currently available in high-risk coastal areas, further limiting participation. Currently, property owners in coastal areas subject to both flood and wind damage can purchase flood insurance through the NFIP and, in some areas, wind insurance through a state wind insurance program. Table 1 compares the policy limits for a federal flood and wind policy, as proposed in H.R. 3121, with a combination of policy limits from state wind insurance programs and NFIP policies. While the federal flood and wind policy would cover a maximum of $650,000 in damage for a residential property, a combination of NFIP and state wind program policies would provide, on average, around $1.7 million in coverage, or about 166 percent more coverage, depending on the state. For commercial properties, the federal flood and wind policy would offer up to $1.75 million in coverage, but combined NFIP and state wind program policies would offer, on average, almost $4 million, or 126 percent more coverage.

[Table 1. Comparison of combined NFIP flood and state wind program policy limits with H.R. 3121 flood and wind policy limits, by state; detailed figures omitted.]

Adverse selection and limited participation could, in turn, force FEMA to raise rates still higher for the proposed program, leading to escalating premiums. This possibility further complicates the rate-setting process. In general, having only a small pool of very high-risk insureds requires insurers to charge premium rates at levels above what could be charged if the risk were spread among a larger pool of insureds of varying risk levels. As we have discussed, high premium rates can, in turn, further reduce the number of property owners who are able and willing to pay for coverage and force insurers to raise rates yet higher. This cycle, referred to as an adverse selection spiral, can make it very difficult for insurers to find a premium rate that is adequate to cover losses.
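The spiral can be illustrated with a stylized calculation: start with a pool of insureds whose expected losses differ, set the break-even premium at the pool average, drop anyone unwilling to pay that premium, and repeat. The expected losses and the willingness-to-pay multiplier below are invented for illustration.

```python
# Stylized illustration of an adverse selection spiral. Ten hypothetical
# property owners have differing expected annual losses; each is assumed
# willing to pay up to 1.5 times his or her own expected loss.
expected_losses = [200, 400, 600, 900, 1_300, 1_800, 2_500, 3_500, 5_000, 8_000]
WILLINGNESS_MULTIPLIER = 1.5  # assumption

pool = list(expected_losses)
round_num = 0
while pool:
    round_num += 1
    premium = sum(pool) / len(pool)  # break-even premium for whoever remains
    stayers = [loss for loss in pool if loss * WILLINGNESS_MULTIPLIER >= premium]
    print(f"Round {round_num}: premium ${premium:,.0f}, "
          f"{len(stayers)} of {len(pool)} insureds stay")
    if len(stayers) == len(pool):
        break  # the premium is sustainable for everyone still in the pool
    pool = stayers
```

In this toy run, the break-even premium rises from about $2,400 to $6,500 as the pool shrinks from ten insureds to only the two highest risks.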
Finally, although H.R. 3121 stipulates that a distinction between flood and wind damage would not be required for a policyholder’s claim to be paid by a federal flood and wind program, a determination of the cause of damage would likely still be necessary for rate-setting purposes. According to several insurance industry officials we spoke with, separate determinations would be required because data on the losses associated with each type of damage are used to help determine future rates. For example, data on wind losses would be used to validate the losses predicted by wind models. While the officials said that such determinations would not need to be as accurate as when the distinction between flood and wind damage would determine under which policy a claim was covered, they would still need to be made. As a result, FEMA would need to determine whether and how such a determination might be made by FEMA staff, or if it would need to establish another process for doing so. While a combined federal flood and wind program would entail costs, it could benefit some property owners and market participants. First, property owners could benefit from reduced delays in payments and assured coverage in high-risk areas. Second, taxpayers in some states could benefit to the extent that the exposure to loss of state wind insurance programs is reduced. At the same time, these benefits could be limited by a borrowing restriction that could terminate the program after a catastrophic event, and comparatively low coverage limits could leave some property owners underinsured. Third, private sector insurers could also benefit if high-risk properties moved to a federal program, reducing the companies’ risk of loss. But this shift would further limit private sector participation. Finally, while H.R. 3121 would require premium rates that were adequate to cover all future costs, actual losses can significantly exceed even the most carefully calculated loss estimates, as we learned from the 2005 hurricanes, potentially leaving the federal government with exposure to new and significant losses. Although a combined flood and wind program could provide benefits to some property owners, states, and insurers, it could also increase the federal government’s exposure to loss. While the actual exposure that a federal flood and wind program might create is unclear, the likelihood that the program would insure primarily high-risk properties could create a large exposure to loss. As of 2007, wind programs in eight coastal states—programs that insure primarily high-risk coastal properties—had a total loss exposure of nearly $600 billion. While it is unclear how much of this exposure would be assumed by the federal program, a risk management consulting firm developed another estimate of potential wind-related losses that took into account the federal program’s likely adverse selection. Assuming that the program experienced just a moderate amount of adverse selection, and that the program would write coverage for around 20 percent of the current market for wind coverage, the firm used wind-modeling technology to estimate the potential wind-related losses. The estimates ranged from around $6.5 billion in losses for the type of catastrophe that has a 10 percent chance of occurring each year, to $11.4 billion for one that has a 5 percent chance of occurring each year, to around $32.7 billion for the type that has a 1 percent chance of occurring each year. The same firm that did the modeling for this estimate considered Hurricane Katrina to be the type of event that has a 6.6 percent chance of occurring in any year. For purposes of comparison, NFIP flood losses from Hurricane Katrina alone totaled around $16 billion, and according to the Insurance Services Office, losses paid by private sector insurers—most of which were wind-related—totaled around $41 billion.
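As a rough illustration of how such exceedance estimates translate into an expected annual cost, the sketch below applies the trapezoid rule to the three quoted probability-and-loss points. The zero-loss anchor at a 15 percent annual probability and the truncation of losses beyond the 1 percent point are assumptions of this sketch, not part of the firm's analysis, so the result understates the true average.

```python
# Rough, illustrative translation of the three quoted loss estimates into an
# average annual loss by integrating the loss exceedance curve over
# probability. The zero-loss anchor at a 15 percent annual probability is an
# assumption, and losses beyond the 1 percent point are ignored, so the
# result understates the true average; this is not the firm's methodology.
points = [          # (annual exceedance probability, loss in $ billions)
    (0.15, 0.0),    # assumed anchor: smaller events ignored
    (0.10, 6.5),
    (0.05, 11.4),
    (0.01, 32.7),
]

aal = sum((p_hi - p_lo) * (loss_lo + loss_hi) / 2  # trapezoid rule
          for (p_hi, loss_lo), (p_lo, loss_hi) in zip(points, points[1:]))
print(f"Rough average annual loss from these segments: ${aal:.1f} billion")
```

Under these assumptions the segments sum to roughly $1.5 billion per year, a figure that could be weighed against the premium income such a program might plausibly collect.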
The potential exposure to the federal government, however, could be reduced by several factors. First, the program could encourage mitigation efforts that would reduce damage from wind. As noted earlier in this report, H.R. 3121 would require communities to adopt mitigation standards approved by the Director of FEMA and consistent with International Code Council building codes related to wind mitigation. In addition, H.R. 3121 would require the Director of FEMA to carry out studies and investigations to determine appropriate wind hazard prevention measures. Further, according to FEMA, the CRS structure could be applied to a federal flood and wind program, reducing premium rates for communities and property owners that implemented wind mitigation measures. Such measures could reduce losses due to wind damage and thus the federal government’s exposure to loss. Second, the federal government’s exposure is potentially limited to the amount FEMA is authorized to borrow from the Treasury, which was raised to $20.8 billion in March of 2006. However, if losses were to exceed this limit, Congress would be faced with either raising the amount FEMA could borrow, thereby increasing the government’s exposure, or not paying policyholders the full amounts specified in their policies. While H.R. 3121 would require a federal flood and wind program to charge premium rates that were adequate to pay all future losses in order not to create additional liability for the federal government, as we have seen, estimating future losses is difficult, and losses can exceed expectations. For example, losses from Hurricane Katrina and other hurricanes were beyond what NFIP could pay with the premiums it had collected. NFIP reported unexpended cash of approximately $1 billion following fiscal year 2004, but as of May 2007 the program had suffered almost $16 billion in losses from Hurricane Katrina. In addition, officials from several wind-modeling companies told us that the severity of Hurricane Katrina was well beyond their previous expectations, and rates that they had believed were actuarially sound turned out to be inadequate. As a result, they have had to revise their models accordingly. If losses for a combined flood and wind program did exceed the premiums collected by the program, FEMA could be forced to borrow from the Treasury to pay those losses. As of December 2007, FEMA still owed approximately $17.3 billion to the Treasury, an amount it is unlikely to be able to repay. In addition, the requirement in H.R. 3121 to stop renewing or selling new policies until such losses are repaid could actually increase the cost to the federal government. This is because the program’s source of revenue, which it could use to pay back the borrowed funds, would be limited to premiums paid by those whose policies had not yet come up for renewal. And once those policies expired, the program would receive no premium income. It is not clear how any debt remaining outstanding at that time would be paid, and the costs could fall to the federal government and, ultimately, taxpayers. We requested comments on a draft from FEMA and NAIC. FEMA provided written comments that are reprinted in appendix II. NAIC orally commented that it generally agreed with our report findings. FEMA also generally agreed with our findings and emphasized the challenges it would face in addressing several key issues. Finally, FEMA provided technical comments, which we incorporated as appropriate. In their comments, FEMA officials stressed their concerns over the effect that the program’s proposed borrowing restriction would have on their ability to set adequate premium rates.
Specifically, they said that:

It would be nearly impossible to set premium rates high enough to eliminate the possibility of borrowing to pay catastrophic losses.

Purchasing enough reinsurance to pay all catastrophic losses without borrowing, even if it were possible, would require premium rates so high as to be unaffordable.

The high variability of combined flood and wind losses means that there is always the possibility of catastrophic losses in any given year regardless of how premiums are designed.

In addition, FEMA officials said that the termination of the program due to the borrowing restriction would create other difficulties. They said that not only could it leave property owners without coverage, but it could also prevent the program from repaying any borrowed funds. As stated in our report, the proposed borrowing restrictions would make rate setting a difficult and challenging process, and could result in high premium rates. In addition, we stated that termination of the program due to the borrowing restriction could potentially leave some property owners uninsured following a catastrophic event and limit FEMA’s ability to repay any borrowed funds. Finally, we acknowledged that the high variability of flood and wind losses would make setting rates adequate to pay losses without borrowing even more challenging, and we clarified language in the report to note that catastrophic losses could occur in any year regardless of how premiums are designed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Ranking Member of the Committee on Financial Services, House of Representatives; the Chairman and Ranking Member of the Committee on Banking, Housing, and Urban Affairs, U.S. Senate; the Chairman and Ranking Member of the Committee on Homeland Security and Governmental Affairs, U.S. Senate; the Chairman and Ranking Member of the Committee on Homeland Security, House of Representatives; the Secretary of Homeland Security; the Executive Vice-President of NAIC; and other interested committees and parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to examine the proposed federal flood and wind insurance program put forth in H.R. 3121, the Flood Insurance Reform and Modernization Act of 2007, in terms of (1) the program’s potential effects on policyholders, insurance market participants, and the federal government; (2) what would be required for the Federal Emergency Management Agency (FEMA) to determine and charge actuarially sound premium rates; and (3) the steps FEMA would have to take to implement the program.
To evaluate the program’s potential effects on policyholders, insurance market participants, and the federal government, we interviewed officials from FEMA, the National Flood Insurance Program (NFIP), state insurance regulators, the National Association of Insurance Commissioners (NAIC), state wind insurance program operators, primary insurers, reinsurers, insurance and reinsurance associations, insurance agent associations, risk-modeling organizations, actuarial consultants, the American Academy of Actuaries (AAA), the Association of State Flood Plain Managers (ASFPM), the National Flood Determination Association (NFDA), and others. We also obtained information on state-sponsored wind insurance programs in three coastal states and one inland state, and discussed them with program officials as well as the insurance regulators within those states. We compared selected wind insurance program policies in force and exposure data from 2004 to the most recent available in eight states: Alabama, Florida, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, and Texas. We also collected and analyzed state wind program data from these eight states and provisions of H.R. 3121 to compare the combination of state wind program and H.R. 3121’s flood insurance policy limits with H.R. 3121’s flood and wind policy limits. To develop our natural hazard risk maps, we used data from FEMA and the National Oceanic and Atmospheric Administration (NOAA). We used historical hazard data from 1980 to 2005 as a representation of current hazard risk for flood, hurricanes, and tornadoes. Finally, to evaluate the federal government’s exposure, we reviewed an estimate of potential wind-related losses for a federal program from an actuarial consulting firm. To examine the challenges FEMA would likely face in determining and charging a premium rate that would cover all expected costs, we spoke with FEMA/NFIP officials, state insurance regulators, NAIC, state wind insurance program operators, primary insurers, reinsurers, insurance and reinsurance associations, insurance agent associations, risk-modeling organizations, actuarial consultants, AAA, ASFPM, NFDA, and others. We also reviewed our previous reports and testimonies, Congressional Budget Office (CBO) reviews, and academic and other studies of coastal wind insurance issues. In addition, we reviewed information provided by professional associations, such as the American Insurance Association, and congressional testimony by knowledgeable individuals from the insurance industry, ASFPM, and NFDA. To examine the challenges FEMA would face in developing and implementing a federal flood and wind insurance program, we discussed the issue with FEMA/NFIP officials, state insurance regulators, NAIC, state wind insurance program operators, primary insurers, reinsurers, insurance and reinsurance associations, insurance agent associations, risk-modeling organizations, actuarial consultants, AAA, ASFPM, NFDA, and others. We also reviewed our previous reports on FEMA’s management and oversight of NFIP. In addition, we reviewed congressional testimony by knowledgeable individuals from the insurance industry, ASFPM, and NFDA. We conducted our work in Washington, D.C., and via telephone from October 2006 to April 2007 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Orice M. Williams, (202) 512-8678 or williamso@gao.gov. In addition to the person named above, Lawrence D. Cluff, Assistant Director; Farah B. Angersola; Joseph A. Applebaum; Tania L. Calhoun; Emily R. Chalmers; William R. Chatlos; Thomas J. McCool; Marc W. Molino; and Patrick A. Ward made key contributions to this report.
Disputes between policyholders and insurers after the 2005 hurricanes highlight the challenges of determining the cause and extent of damages when properties are subject to both high winds and flooding. Additionally, insurers want to reduce their exposure in high-risk areas, and state wind insurance programs have grown significantly. H.R. 3121, the Flood Insurance Reform and Modernization Act of 2007, would create a combined federal insurance program with coverage for both wind and flood damage. GAO was asked to evaluate this potential program in terms of (1) what would be required to implement it; (2) the steps the Federal Emergency Management Agency (FEMA) would need to take to determine premium rates that reflect all future costs; and (3) how it could affect policyholders, insurance market participants, and the federal government. To address these questions, GAO analyzed state and federal programs, examined studies of coastal wind insurance issues, and interviewed federal and state regulatory officials as well as industry participants and analysts. FEMA and the National Association of Insurance Commissioners generally agreed with GAO's report findings. FEMA emphasized the challenges it would face in addressing several key issues. FEMA also provided technical comments, which were incorporated as appropriate. To implement a combined federal flood and wind insurance program, FEMA would need to complete certain challenging steps. First, FEMA would need to determine wind hazard prevention standards that communities would have to adopt in order to receive coverage. Second, FEMA would need to adapt existing programs to accommodate wind coverage--for example, the Write Your Own program. Third, FEMA would need to create a new rate-setting process, as the process for setting flood insurance rates is different from what is needed for wind coverage. Fourth, promoting the new program in communities would require that FEMA staff raise awareness of the combined program's availability and coordinate enforcement of the new building codes. Finally, FEMA would need to put staff and procedures in place to administer and oversee the new program while it faces current management and oversight challenges with the National Flood Insurance Program (NFIP). Setting premium rates adequate to cover all the expected costs of flood and wind damage would require FEMA to make sophisticated determinations. For example, FEMA would need to determine how the program would pay claims in years with catastrophic losses without borrowing from the Department of the Treasury. H.R. 3121 would require the program to stop renewing or selling new policies if it needed to borrow funds, effectively terminating the program. It is also unclear whether the program could obtain reinsurance to cover such losses, and attempting to fund losses by building up a surplus would potentially require high premium rates and an unknown number of years without large losses, something over which FEMA has no control. Further, FEMA would need to account for the likelihood that participation would be limited and only the highest-risk properties would be insured. These factors would further increase premium rates and make it difficult to set rates adequate to cover future costs. A federal flood and wind insurance program could benefit some policyholders and market participants but would also involve trade-offs. 
For example, not requiring adjusters to distinguish between flood and wind damage could reduce both delays in reimbursing participants and the potential for litigation. However, borrowing restrictions could also leave property owners without coverage after a catastrophic event. In addition, the proposed coverage limits are relatively low compared with the coverage that is currently available, potentially leaving some properties underinsured. The program could also reduce the exposure of some insurers by insuring high-risk properties that currently have private sector coverage. However, an unknown portion of the exposure currently held by state wind programs--nearly $600 billion in 2007--could be transferred to the federal government. While H.R. 3121 would require premium rates to be adequate to cover any exposure and restrict borrowing by the program, the potential exists for losses to greatly exceed expectations, as happened with Hurricane Katrina in 2005. This could increase FEMA's total debt, which as of December 2007 was about $17.3 billion.
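To illustrate why combining full-cost premiums with a borrowing prohibition is so demanding, the following simplified simulation shows the effect of high loss variability on rate adequacy. It is purely hypothetical: the loss units, the 2 percent annual catastrophe probability, the lognormal shapes, and the premium multiples are all invented assumptions, not NFIP, FEMA, or H.R. 3121 figures.

    # A simplified Monte Carlo sketch of the rate-setting problem described
    # above. Every number here is an invented assumption; none of it
    # reflects actual NFIP, FEMA, or H.R. 3121 data.
    import random

    random.seed(1)
    YEARS = 100_000  # simulated policy years

    def annual_loss():
        """Routine losses most years, plus an occasional catastrophe
        that is many times larger (a heavy-tailed mixture)."""
        loss = random.lognormvariate(0, 0.5)       # routine claims
        if random.random() < 0.02:                 # catastrophe year
            loss += random.lognormvariate(3, 1.0)  # adds ~33 units on average
        return loss

    losses = [annual_loss() for _ in range(YEARS)]
    expected = sum(losses) / YEARS

    for multiple in (1.0, 1.5, 2.0, 3.0):
        premium = multiple * expected
        shortfall_years = sum(loss > premium for loss in losses)
        print(f"premium at {multiple:.1f}x expected loss: single-year losses "
              f"exceed premiums in {100 * shortfall_years / YEARS:.1f}% of years")

Under these invented assumptions, even a premium set at three times the long-run expected loss is exceeded in roughly 2 percent of simulated years (essentially the catastrophe years), a pattern consistent with FEMA's observation that no premium design can eliminate the possibility of catastrophic losses in a given year.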
FDA, an agency within the Department of Health and Human Services (HHS), is responsible for overseeing the safety and effectiveness of drugs marketed within the United States. These responsibilities begin before a product is brought to the market, and include reviewing drug sponsors’ proposals for conducting clinical trials, providing advice and publishing guidance regarding these trials, as well as reviewing applications for new drugs. Once a drug sponsor identifies a promising chemical compound it believes to be capable of curing or treating diseases, the sponsor may decide to conduct clinical trials on humans to gather the evidence necessary to demonstrate to FDA that the drug is safe and effective for its intended use. Before beginning clinical trials in the United States, a sponsor generally must submit an investigational new drug application to FDA for review. This application provides FDA with extensive information about the drug, including safety and manufacturing information, and outlines the sponsor’s plans for clinical trials, which gradually introduce new drugs to increasingly larger numbers of patients. FDA assesses the information in the application—which is later included as part of the NDA—to ensure that the drug is reasonably safe to begin studying in humans. Sponsors may use these clinical trials to gather evidence of a drug’s safety and effectiveness. In general, FDA requires sponsors to submit the results of more than one clinical trial demonstrating effectiveness in order to provide substantial evidence that a drug is effective for the intended indication and population. FDA has issued regulations and guidance that provide industry with information to properly design, conduct, and interpret these trials. For example, in 1985 FDA substantially revised its regulations, including the provision addressing the characteristics of adequate and well-controlled trials and the types of controlled trials that can be used to gather evidence of a new drug’s effectiveness. Sponsors may use trials of varying designs to obtain evidence of a drug’s effectiveness. One type of clinical trial is a non-inferiority trial. The objective of a non-inferiority trial is to show that any difference in the effectiveness of two drugs is small enough to allow a conclusion that the new drug is also effective, but not substantially less effective than the active control. To conduct a non-inferiority trial, sponsors must make many decisions regarding how the trial will measure the new drug’s effectiveness. For example, they must select the trial’s primary endpoint, the principal measure used to determine a drug’s effectiveness. The primary endpoint may be a clinical endpoint—a direct measure of how a patient feels, functions, or survives—or, in some cases, a surrogate endpoint—a laboratory measure or physical sign used as a substitute for a clinical endpoint that reasonably predicts a clinical benefit. Sponsors must also determine when to measure the trial’s endpoint—for example, whether patients are cured within 7, 14, or 30 days after starting treatment—in addition to determining the number and type of patients to be enrolled in each trial. Sponsors conducting non-inferiority trials must also make decisions to account for the new drug’s comparison to the active control. Sponsors must identify an available treatment for use as an active control in the non-inferiority trial.
They must then use evidence of the active control’s effectiveness as shown in prior clinical trials to estimate the effect that the active control will have in the planned non-inferiority trial, adjusting for any differences between the prior and planned trials. Using this estimate, sponsors determine the trial’s non-inferiority margin—the maximum clinically acceptable extent to which the new drug can be less effective than the active control and still show evidence of an effect. FDA considers the selection of a margin to be the single greatest challenge in designing, conducting, and interpreting non-inferiority trials. Its calculation is not only dependent on a string of other decisions related to the trial—for example, the data collected on the active control’s effectiveness in other trials—but also includes the application of clinical judgment to determine the maximum amount of effectiveness that could be lost while still allowing a conclusion that the new drug is effective. If a non-inferiority margin is incorrectly calculated and is set too large, a drug that is not effective may appear to be effective; if the margin is too small, an effective drug may appear to be ineffective. In a non-inferiority trial, patients are randomly assigned to receive either the new drug or an active control. After the trial, the sponsor identifies the observed effect of each drug in the trial, and calculates the observed difference in the drugs’ effectiveness. The actual difference in the drugs’ effectiveness in the entire population could be greater or less than what is observed in the trial. For that reason, sponsors calculate a confidence interval around the observed difference in effectiveness between the new drug and active control drug. The confidence interval provides a range of values for the difference in effectiveness within which the true difference is likely to be found. The confidence interval around the observed difference in effectiveness is used to determine if the new drug is non-inferior to the active control. It is compared to the non-inferiority margin—the maximum clinically acceptable extent to which the new drug can be less effective than the active control. If the confidence interval is within the non-inferiority margin, and the sponsor provides adequate evidence that the active control demonstrated its expected effect in the trial, the new drug may be deemed non-inferior to the active control. A new drug can be non-inferior to an active control even if the estimated difference in effectiveness and its confidence interval lies entirely below zero, meaning that the active control drug is more effective than the new drug, but by a clinically irrelevant amount. However, if the confidence interval shows that the effect of the drug could be below the margin—even if the observed effect of the drug was within the margin—the drug would not have shown an effect, and is therefore considered inferior. In addition, if the confidence interval lies entirely above zero—demonstrating that the new drug is more effective than the active control—the drug can be considered superior. (See fig. 1.)
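To make the comparison of a confidence interval with a non-inferiority margin concrete, the following sketch applies the decision rule just described to invented trial results. It is a minimal illustration, assuming hypothetical cure rates, sample sizes, and a 10-percentage-point margin (none of which come from any actual NDA), and it uses a standard normal-approximation confidence interval for the difference between two proportions.

    # Hypothetical illustration of the non-inferiority decision rule.
    # All cure rates, sample sizes, and the margin are invented examples.
    import math

    def noninferiority_check(cures_new, n_new, cures_ctrl, n_ctrl,
                             margin=0.10, z=1.96):
        """Compare a 95 percent confidence interval for the difference in
        cure rates (new drug minus active control) with a margin expressed
        in absolute percentage points."""
        p_new = cures_new / n_new
        p_ctrl = cures_ctrl / n_ctrl
        diff = p_new - p_ctrl
        # Standard error of the difference between two independent proportions
        se = math.sqrt(p_new * (1 - p_new) / n_new +
                       p_ctrl * (1 - p_ctrl) / n_ctrl)
        lower, upper = diff - z * se, diff + z * se
        if lower > 0:
            verdict = "superior"        # entire interval above zero
        elif lower > -margin:
            verdict = "non-inferior"    # entire interval above -margin
        else:
            verdict = "non-inferiority not shown"
        return diff, (lower, upper), verdict

    # Invented trial: 425/500 (85 percent) cured on the new drug versus
    # 435/500 (87 percent) on the active control, with a 10-point margin.
    print(noninferiority_check(425, 500, 435, 500))

In this invented example the observed difference is -2 percentage points, with a confidence interval of roughly (-6.3, +2.3); because the entire interval lies above -10 points, the rule would deem the new drug non-inferior even though the point estimate favors the active control.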
Since issuing regulations that address the elements of adequate and well-controlled trials, FDA has also periodically issued guidance documents to provide updates on the agency’s current thinking on a range of topics. Some of these guidance documents encompass broad issues, such as statistical principles for use in clinical trials and how to select an appropriate control, whereas others are more focused and serve to consolidate relevant recommendations on the development of drugs treating a particular indication. In addition to disseminating guidance on non-inferiority trials, FDA provides specific advice regarding the design of clinical trials at the request of sponsors. For example, sponsors may ask FDA to review and provide advice on a trial’s proposed active control, non-inferiority margin, or endpoint before the given trial has begun. After the conclusion of their clinical trials, sponsors may consult with FDA regarding the interpretation of trial results or to discuss the information the agency would expect to see submitted in an NDA. FDA’s advice and recommendations to sponsors are advisory; sponsors are not required to implement any of the agency’s suggestions. If sponsors believe they have successfully demonstrated a new drug’s safety and effectiveness, they may submit an NDA to FDA for review. The NDA contains information about the safety and effectiveness of the drug as demonstrated in clinical trials and other research, such as studies in animals. Once the agency receives an NDA, the application is reviewed by one of FDA’s medical review divisions, depending on the indication the drug has been proposed to treat. If FDA determines that the drug is safe and effective for its intended use—that its clinical benefits outweigh its potential health risks—and that other requirements are met, it will approve the application. After approving a new drug, FDA’s responsibilities continue as it is charged with monitoring the safety, effectiveness, and promotion of approved drugs. FDA executes these responsibilities in the same manner regardless of whether drugs were approved on the basis of evidence from non-inferiority trials. Non-inferiority trials present unique issues in measuring the effectiveness of new drugs. For example, the use of these trials can raise uncertainties about the true effectiveness of new drugs because non-inferiority trials cannot measure this directly. Instead, these trials measure the effectiveness of the new drug relative to the active control, and sponsors must assess whether the active control can be considered to be as effective in the non-inferiority trial as was expected based on past experience. Using data from the non-inferiority trial and from prior trials measuring the effectiveness of the active control, the effectiveness of the new drug is estimated—but never fully known. In addition, non-inferiority trials are more prone to certain biases than superiority trials. For example, if patients in a superiority trial do not take the new drug as directed, this poor compliance will dilute the measured effectiveness of the new drug, making it less likely that the trial will successfully demonstrate superiority. In a non-inferiority trial, however, poor compliance by patients taking the active control drug can have a different effect. It can reduce the difference in the measured effectiveness between the new drug and the active control, making the treatments appear more similar than they might otherwise be. As such, poor compliance in a non-inferiority trial can increase the likelihood that an ineffective drug is concluded to be effective. The use of non-inferiority trials over time also raises concerns about the potential for “biocreep” to occur.
This term is used to describe the concern that successive generations of drugs approved based on non-inferiority trials, with the active control changing in each new generation, could lead to the adoption of decreasingly effective drugs and ultimately to the approval of drugs that are no more effective than a placebo. Non-inferiority trials that are poorly designed are especially prone to biocreep. The selection of inappropriate active controls—that is, drugs that are not known to be consistently effective, or drugs that were themselves approved on the basis of non-inferiority trials—could lead to biocreep. Even if successive generations of non-inferiority trials are conducted and each trial is itself well-designed, biocreep may still occur because placebo controls are not included in these trials. Non-inferiority trials are only able to measure the effectiveness of the new drug relative to the active control, not a placebo. As a result, the true effectiveness of any of the new drugs, compared to a placebo, is not measured. Without this metric, it is impossible to determine the extent to which the effectiveness of the new drug is similar to that of a placebo and whether biocreep has occurred. FDA has acknowledged some concerns over the uncertainties inherent in non-inferiority trials and the potential these trials create for biocreep. For example, FDA stated in a 1992 guidance document that, in order to avoid biocreep, sponsors should consult with the agency regarding the active controls they were considering for their trials. In other guidance documents, FDA has also encouraged sponsors to consult with the agency regarding their planned non-inferiority trials. One-quarter of NDAs submitted to FDA for review from fiscal years 2002 through 2009 included evidence from non-inferiority trials, and many of these applications were for antimicrobial drugs. FDA approved a majority of the applications that included evidence from these trials. Forty-three, or one-quarter, of the 175 NDAs we reviewed that were submitted to FDA from fiscal years 2002 through 2009 included evidence from at least one non-inferiority trial. The number of NDAs with evidence from non-inferiority trials varied from year to year and generally declined from fiscal years 2002 through 2009. On average, FDA received five NDAs each year that included evidence from non-inferiority trials. (See fig. 2.) About half of the 43 NDAs submitted with evidence from at least one non-inferiority trial—or 22—were for antimicrobial drugs, such as those that treat bacterial, viral, or fungal infections. The remaining portion of NDAs submitted with evidence from these trials represented a variety of drug types. (See table 1.) FDA approved 29 of the 43 NDAs submitted for review from fiscal years 2002 through 2009 that included evidence from at least one non-inferiority trial. Most of these NDAs—18 of the 29—were approved based on evidence from pivotal non-inferiority trials. FDA approved the remaining 11 applications based on other evidence, such as the superiority of the new drug compared to a placebo or an active control. As of December 31, 2009, FDA had decided not to approve 14 applications that included evidence from non-inferiority trials. (See fig. 3.) Many NDAs including evidence from non-inferiority trials were for antimicrobial drugs, and the majority of approvals based on this evidence were also for these types of drugs.
Two-thirds (12 of the 18) of the NDAs approved on the basis of non-inferiority trials were for antimicrobial drugs. The remaining one-third of NDAs approved on the basis of non-inferiority trials were for various other types of drugs, including those treating diabetes and chemotherapy-induced nausea and vomiting. See appendix I for a list of all 18 NDAs approved based on evidence from non-inferiority trials, including fiscal year of approval, drug type, and approved indication. Characteristics varied among the non-inferiority trials providing primary evidence to support FDA’s approval of 18 NDAs. Some other applications also included non-inferiority trials that FDA identified as being poorly designed; these trials did not provide primary evidence for approval. Characteristics varied among the non-inferiority trials that provided primary evidence for the approval of the 18 NDAs. FDA relied on primary evidence from multiple pivotal non-inferiority trials to support the approval of most of these applications. The number of pivotal non-inferiority trials used as primary evidence for these 18 NDAs ranged from one to four, with an average of two pivotal non-inferiority trials supporting the approval of each application. In addition to including evidence from pivotal non-inferiority trials, five applications included evidence from other types of pivotal trials; for example, trials demonstrating superiority to a placebo or active control drug. Thirteen of the 18 applications included only pivotal non-inferiority trials. Of these applications, FDA approved four based on evidence from a single pivotal non-inferiority trial. Two-thirds (12) of the 18 NDAs included trials that measured drug effectiveness using a surrogate, rather than a clinical, primary endpoint in at least one of their pivotal trials. Although FDA generally prefers that drug sponsors demonstrate the effectiveness of a new drug by showing its impact on a clinical endpoint, in certain cases it will consider a surrogate endpoint if it determines it is a reasonable substitute. However, all experts we interviewed who commented on this topic noted that the approval of drugs on the basis of both non-inferiority trials and surrogate endpoints increases uncertainty in the drugs’ true effectiveness. Half of the 18 NDAs FDA approved on the basis of non-inferiority trials tested the effectiveness of the new drug against more than one active control. A majority of the active controls used in non-inferiority trials were FDA-approved for the indication. However, three applications included evidence from non-inferiority trials that used one active control that was not FDA-approved for the indication. For example, in fiscal year 2003, FDA approved Cubicin for the treatment of complicated skin and skin structure infections on the basis of evidence from two pivotal non-inferiority trials that used a total of five different active control drugs. While three of these active control drugs were FDA-approved to treat this indication, two were not. In addition, some of the active controls used in non-inferiority trials were themselves approved on the basis of evidence from other trials that compared the drug to another active control. However, FDA reviewed the selection of nearly all of the active controls used in the pivotal non-inferiority trials that supported the approval of the 18 NDAs, and found the active controls appropriate for use in these trials.
FDA officials also told us that if a new drug was approved on the basis of evidence from non-inferiority trials, the active control used in these trials would most likely also be used in subsequent trials, except in cases where the newer drug proved to be superior to the active control. The margins used for most of the 18 NDAs approved on the basis of evidence from non-inferiority trials ranged from 5 to 20 percent, with the most commonly used margin being 10 percent. That is, for trials using a 10 percent non-inferiority margin, the new drug could be estimated to be up to 10 percent less effective than the active control and still show evidence of an effect. However, for a trial to succeed, the observed difference in the effectiveness of the new drug and active control, as measured in the clinical trials, would have to be less than 10 percent. At the time of its review of the NDAs, FDA agreed with the non-inferiority margins set for all of the pivotal trials submitted for the majority of drugs approved on the basis of evidence from non-inferiority trials. All of the pivotal trials submitted for these drugs—that is, those where FDA agreed with the margin—demonstrated that the new drug was non-inferior to the active control drug as measured on the primary endpoint, with one exception. These trials showed that the confidence interval for the difference in the drugs’ effectiveness was within the non-inferiority margin. FDA did not agree with the non-inferiority margins set for pivotal trials submitted with three applications, though the agency approved these drugs based on evidence from these trials. For two drugs, Exjade and Reyataz, FDA stated that the proposed margins could not be used to measure the drugs’ effectiveness. FDA conducted additional analyses of data from pivotal trials submitted in these drugs’ applications, which showed that the drugs were superior to a placebo. For the third drug, Noxafil, FDA did not agree with the sponsor’s proposed justification of the margin for one trial, although this trial showed the difference in the drugs’ effectiveness to be less than the disputed margin. FDA approved Exjade in fiscal year 2006 to treat chronic iron overload in certain patients receiving blood transfusions. Exjade’s NDA included evidence from one pivotal non-inferiority trial that had an objective of showing that Exjade lowered iron levels to a similar extent as the active control. Upon reviewing the application, FDA disagreed with the non-inferiority margin proposed for this trial. FDA analyzed data from the trial, which showed that Exjade was effective in lowering patients’ iron levels despite ongoing blood transfusions (which typically result in increased iron levels), particularly among those patients who began the trial with very high iron levels. FDA approved Exjade on the basis of this evidence, which showed that the drug would have been more effective than a placebo. In addition, FDA officials noted that Exjade presented a valuable alternative in the treatment of this indication. FDA approved Reyataz in fiscal year 2003 for the treatment of human immunodeficiency virus (HIV) infection. Reyataz’s NDA included evidence from two pivotal non-inferiority trials, including one in patients who were naïve to HIV treatment and one in patients who had experience receiving HIV treatment. FDA agreed with the margin proposed for the trial conducted in the treatment-naïve population, which was successful in demonstrating that Reyataz was non-inferior to its active control.
However, FDA disagreed with the margin proposed for the trial conducted in the treatment-experienced population. Agency officials analyzed data from this trial, which showed that Reyataz was effective in treatment-experienced patients, and this effect was greater than what would have been expected with a placebo. FDA approved Reyataz to treat HIV infection on the basis of this evidence, as well as other pivotal evidence of effectiveness in the treatment-naïve population. In addition, FDA officials noted that Reyataz presented an alternative for HIV-infected patients who were not responding to available HIV treatments. FDA approved Noxafil in fiscal year 2006 for the prevention of invasive Aspergillus and Candida infections in certain patients on the basis of evidence from two pivotal trials. In its review of this NDA, FDA noted that the sponsor had not adequately explained the relevance of the proposed 15 percent non-inferiority margin. One of these trials demonstrated that Noxafil was superior to its active control, and the other trial demonstrated that the drug was at most 3 percent less effective than the active control. FDA approved this drug on the basis of this evidence of effectiveness. Table 2 provides a summary of the characteristics of non-inferiority trials for the 18 NDAs we identified as approved on the basis of evidence from non-inferiority trials. We found that FDA reviewed the characteristics of the non-inferiority trials supporting the approval of the 18 NDAs to ensure that the drugs it approved were more effective than a placebo. FDA’s review therefore minimized the potential for biocreep. Similarly, our examination of the trials’ characteristics also revealed no evidence of biocreep. While non-inferiority trials provided primary evidence of effectiveness to support the approval of 18 NDAs, other non-inferiority trials were poorly designed and did not provide such evidence. Of the other 25 NDAs that included evidence from non-inferiority trials, FDA identified 9 applications that included poorly designed non-inferiority trials. These trials were unable to accurately measure the new drugs’ effectiveness and did not provide primary evidence for the approval of these drugs. The concerns FDA identified with sponsors’ non-inferiority trials included (1) inappropriate use of non-inferiority trials for the indication being treated; (2) inappropriate selection of an active control, including cases where the drug was not FDA-approved or the sponsor did not provide an adequate justification; and (3) improper calculation or justification of the non-inferiority margin. FDA informed sponsors of its concerns with all of these applications’ non-inferiority trials prior to the sponsors’ submission of the NDAs. Specifically, FDA notified the sponsors between 1 month and 94 months before submission, with an average of about 30 months prior to submission. With the exception of one application, FDA notified all sponsors at least 6 months prior to submission. For example, FDA advised one sponsor before the sponsor began its non-inferiority trials—24 months prior to submitting its NDA—that the agency did not consider it appropriate to use non-inferiority trials to support the approval of the drug for the indication being sought—treatment of schizophrenia. FDA reiterated this position on another occasion prior to the NDA submission. FDA did not consider the results of this trial to provide primary evidence to support its approval decision.
The agency ultimately approved the drug based on evidence that the drug was superior to placebo as demonstrated in several other trials. In another case, a sponsor conducted the non-inferiority trial outside the United States and had not requested FDA’s input while planning or conducting the trial. The sponsor requested a meeting with FDA to discuss its planned NDA. During this meeting, which occurred 1 month before FDA received the NDA, the agency learned of the sponsor’s non-inferiority trial and communicated its concerns regarding the design of the trial. FDA did not consider the results of this non-inferiority trial in its approval decision, but ultimately approved the drug based on evidence of superiority to placebo as demonstrated in another trial. In March 2010, FDA issued draft guidance on non-inferiority trials that provides detailed recommendations on using these trials to provide evidence of a new drug’s effectiveness. This March 2010 draft guidance offers broader and more comprehensive information to supplement other indication-specific guidance documents the agency previously issued. In March 2010, FDA issued new draft guidance on non-inferiority trials that provides detailed recommendations on how these trials may be used to establish the effectiveness of new drugs. Although FDA had previously issued guidance documents that included information regarding the use of non-inferiority trials for certain indications, this March 2010 guidance is the first focused solely on the use of non-inferiority trials. It explains the key principles involved in using a non-inferiority trial to demonstrate the effectiveness of a drug and provides detailed recommendations for such trials, including how to select an active control and how to set the non-inferiority margin (that is, determining the maximum clinically acceptable extent to which the new drug can be less effective than the active control), among other things. The March 2010 guidance also explains why the agency considers its recommendations appropriate, offers answers to frequently asked questions, and lists detailed examples to illustrate some common challenges in designing and interpreting non-inferiority trials. FDA officials told us that they developed the March 2010 guidance on non-inferiority trials because it was clear to them that these trials were not well understood. The concepts elaborated on in the March 2010 guidance are not new, however. They have been part of FDA’s considerations since at least 1985, when the agency substantially revised NDA regulations to include a provision describing the characteristics of adequate and well-controlled trials. These concepts have also been addressed, in part, in other agency guidance documents. However, FDA officials saw the need for more detailed guidance as they noticed many errors, especially related to the selection of a non-inferiority margin, in sponsors’ execution of these trials. FDA officials also expect that the use of non-inferiority trials will rise as more drugs become available to prevent death or serious illness and the use of placebos becomes unethical. FDA’s March 2010 guidance explains when non-inferiority trials may be used to establish a drug’s effectiveness. The guidance states that these trials are generally used when an available treatment is known to provide an important benefit—for example, the prevention of death or irreversible harm. In these cases, it would be considered unethical to use a placebo in a clinical trial.
The guidance also states that non-inferiority trials may only be used when they are capable of measuring the effect of the new drug in the study—that is, when the active control is able to consistently demonstrate its expected effect in the non-inferiority trial. FDA’s March 2010 guidance explains that non-inferiority trials may not be able to demonstrate the effectiveness of drugs treating certain indications because not all drugs have a consistent effect in treating these indications. The guidance also offers suggestions for other types of trials that may be useful in demonstrating a drug’s effectiveness in cases where a non-inferiority trial is unable to provide evidence of effectiveness. The March 2010 guidance provides detailed recommendations on how to select an active control. For example, when more than one potential active control exists, the guidance recommends that the most effective drug be chosen as the active control. In addition, the frequently asked questions section also clarifies that the active control does not need to be FDA-approved for the indication. However, FDA officials we interviewed stated that active controls used in non-inferiority trials are usually FDA-approved. If the active control is not FDA-approved, FDA asks sponsors to provide evidence of the active control’s effectiveness. FDA’s March 2010 guidance also offers detailed advice on a range of other topics related to the use and interpretation of non-inferiority trials. For example, it suggests two methodologies that can be used to set the margin, offers step-by-step instructions on how to use each of these approaches, and addresses the role of clinical judgment in determining the margin. It also explains how to adjust the margin to account for some of the uncertainties related to non-inferiority trials, such as differences between the planned non-inferiority trial and prior trials that measured the effectiveness of the active control. The guidance offers advice on how to determine the proper number and type of patients to enroll in the trial, and how to select an endpoint. For example, the guidance states that the endpoint should be “one for which there is a good basis for knowing the effect of the active control.” Most of the experts we interviewed who reviewed FDA’s March 2010 guidance told us that they thought the recommendations it included were clear and detailed, and addressed the key principles involved in conducting non-inferiority trials. Some experts noted that the guidance’s frequently asked questions and examples were useful in illustrating the key principles described in the document, and said that FDA’s recommendations would help sponsors appropriately use these trials to prove a drug’s effectiveness. While experts we interviewed who reviewed FDA’s March 2010 guidance noted that it addressed key principles, most identified additional technical issues that they would have liked the guidance to address. For example, the March 2010 guidance does not address how the use of a surrogate endpoint affects the design and interpretation of a non-inferiority trial. FDA officials told us that the guidance applies to non-inferiority trials that use surrogate endpoints. However, some experts we interviewed noted that such trials are difficult to design and interpret; therefore, additional guidance on this topic may be helpful.
Since the non-inferiority margin represents the maximum clinically acceptable extent to which the new drug can be less effective than the active control, experts told us that sponsors would need to translate the drug’s effect on a surrogate endpoint into its expected effect on a clinical endpoint in order to calculate the non-inferiority margin and interpret the trials’ results. Some experts also noted that the guidance does not include enough detailed instructions on how to estimate the effect of the active control in the non-inferiority trial. Finally, some experts who reviewed FDA’s March 2010 guidance told us that they wished the guidance more emphatically stated that non-inferiority trials should only be used as a last resort when seeking drug approval. FDA’s March 2010 draft guidance provides broader and more comprehensive information about the use of non-inferiority trials, supplementing other indication-specific guidance documents the agency had already issued. The objective and content of these two types of guidance documents differ. The March 2010 guidance offers comprehensive information on one topic, non-inferiority trials, that may be generally applied for all drugs using these trials. In contrast, FDA’s indication-specific guidance documents present recommendations on many topics—including trial design—for consideration in developing drugs to treat a particular indication or set of indications. Some of these indication-specific documents provide recommendations on how to use non-inferiority trials for that particular indication; for example, by suggesting a specific margin or a specific endpoint. However, unlike FDA’s March 2010 guidance, not all indication-specific guidance documents include information on all of the key principles involved in using a non-inferiority trial to establish a drug’s effectiveness. In addition, these indication-specific guidance documents do not include the same level of detail on the key principles that is in the March 2010 guidance. For example, several of FDA’s indication-specific guidance documents state that sponsors should justify their selection of non-inferiority margins in their NDAs. However, in these documents FDA does not elaborate on the methods sponsors could use to select or justify the margins. In contrast, the March 2010 non-inferiority guidance provides detailed instructions on how to calculate the margin. FDA’s indication-specific guidance documents provide sponsors with additional clarity on when non-inferiority trials may be used to establish the effectiveness of drugs treating a particular indication. From January 2002 through June 2010, FDA issued 17 guidance documents that state the agency’s position regarding the use of non-inferiority trials in demonstrating the effectiveness of drugs treating certain indications. In these indication-specific guidance documents, FDA stated that non-inferiority trials may be able to demonstrate the effectiveness of drugs treating eight indications, including those for HIV, cancer, diabetes mellitus, and certain severe infections.
During the same period, FDA also issued nine indication-specific guidance documents which state that non-inferiority trials may not be able to demonstrate the effectiveness of drugs treating other indications—including some less severe infections, such as sinusitis and acute bacterial otitis media (ear infections)—because the agency has been unable to identify available drugs that have a consistent effect and could serve as active controls in non-inferiority trials. (See table 3.) Appendix II identifies the guidance documents FDA has issued with information on the use of non-inferiority trials from January 2002 through June 2010, including indication-specific documents as well as the March 2010 draft guidance on non-inferiority trials. Our review of FDA’s indication-specific guidance showed that the agency has become more conservative in allowing evidence from non-inferiority trials to demonstrate the effectiveness of new drugs. First, FDA has revised its view regarding when non-inferiority trials may be used. Prior to 2007, for example, FDA had approved drugs treating several less severe infections—including acute bacterial sinusitis, acute bacterial otitis media, and acute bacterial exacerbations of chronic bronchitis—on the basis of evidence from non-inferiority trials. Experts we interviewed noted that these infections can often be resolved without treatment—and thus it is difficult to estimate the effect that an active control drug would have in a non-inferiority trial. In 2007 and 2008, FDA issued several guidance documents stating that non-inferiority trials may not be able to demonstrate the effectiveness of drugs treating these indications. Second, FDA has become more rigorous in its review of evidence from non-inferiority trials. For example, prior to 2001, FDA’s guidance on the development of anti-infective drugs had not advised sponsors to scientifically calculate or justify their selected non-inferiority margins—a step that FDA’s March 2010 guidance recommends. We provided a draft of this report to HHS for review. We received technical comments from HHS, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of FDA and appropriate congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Food and Drug Administration (FDA) approved 18 new drug applications (NDA) that were submitted from fiscal year 2002 through fiscal year 2009 on the basis of evidence from non-inferiority trials. The majority of these were antimicrobial drugs, such as those that treat bacterial, viral, and fungal infections. (See table 4.) From January 2002 through June 2010, FDA issued 17 indication-specific guidance documents that included information about non-inferiority trials, and one guidance document that included broad recommendations regarding the use of non-inferiority trials.
In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Kathleen Diamond; Carolyn Garvey; Cathy Hamann; Julian Klazkin; Kaitlin McConnell; and Patricia Roy made key contributions to this report.
Before approving a new drug, the Food and Drug Administration (FDA)--an agency of the Department of Health and Human Services (HHS)--assesses a drug's effectiveness. To do so, it examines information contained in a new drug application (NDA), including data from clinical trials in humans. Several types of trials may be used to gather this evidence. For example, superiority trials may show that a new drug is more effective than an active control--a drug known to be effective. Non-inferiority trials aim to demonstrate that the difference between the effectiveness of a new drug and an active control is small--small enough to show that the new drug is also effective. Drugs approved on this basis may provide important benefits, such as improved safety. Because non-inferiority trials are difficult to design and interpret, they have received attention within the research community and FDA. FDA has issued guidance on these trials. GAO was asked to examine FDA's use of non-inferiority trial evidence. This report (1) identifies NDAs for new molecular entities--potentially innovative new drugs not FDA-approved in any form--that included evidence from non-inferiority trials, (2) examines the characteristics of these trials, and (3) describes FDA's guidance on these trials. GAO reviewed NDAs submitted to FDA between fiscal year 2002 (the first full year that FDA documentation was available electronically) and fiscal year 2009 (the last full year of submissions), examined FDA's guidance, and interviewed agency officials. Evidence from non-inferiority trials was included in about one-quarter, or 43, of the 175 NDAs for new molecular entities that were submitted to FDA for review from fiscal years 2002 through 2009. Many of these applications were for antimicrobial drugs, such as those treating bacterial, viral, and fungal infections. As of December 31, 2009, FDA approved 18 of the 43 NDAs on the basis of evidence from non-inferiority trials. Of the remaining 25 NDAs, FDA approved 11 based on other evidence, such as proof that the new drug was more effective than a placebo (no treatment), and decided not to approve 14. The non-inferiority trials included in these NDAs varied with respect to their characteristics. FDA generally requires sponsors to provide evidence of a drug's effectiveness as shown in more than one trial. For the 18 NDAs that were approved based on evidence from non-inferiority trials, the number of non-inferiority trials used to provide primary support for approval ranged from one to four, with an average of two such trials per NDA. Half of these applications included non-inferiority trials that tested the effectiveness of the new drug against more than one active control. The non-inferiority margins--the maximum clinically acceptable extent to which the new drug can be less effective than the active control and still show evidence of an effect--ranged from 5 to 20 percent among trials that supported approval. Among the other 25, FDA identified nine NDAs that included poorly designed non-inferiority trials, which did not provide primary evidence for approval. These design problems included inappropriate selection of an active control and improper calculation of a non-inferiority margin. FDA notified sponsors of its concerns with the poorly designed trials prior to the sponsors' submissions of all NDAs that included such trials. In March 2010 FDA issued draft guidance which focused solely on the use of non-inferiority trials.
This guidance presents detailed and comprehensive recommendations on how non-inferiority trials may be used to provide evidence of a drug's effectiveness. For example, it provides advice on how to select an active control and how to set the non-inferiority margin, as well as how to interpret the trials. This guidance offers broad, generally applicable recommendations to supplement indication-specific guidance documents that FDA had previously issued. These indication-specific guidance documents include FDA's advice on many issues related to the development of drugs for particular indications, some of which are related to the use of non-inferiority trials. GAO's review of FDA's guidance showed that the agency has become more conservative in allowing evidence from non-inferiority trials to demonstrate a drug's effectiveness. First, FDA has limited the indications for which these trials may be used. Second, the agency has also become more rigorous in its review of evidence from non-inferiority trials. We sent a draft of this report to HHS for review. HHS provided us with technical comments, which we incorporated as appropriate.
The various types of fees and other collections received by federal agencies are, in general, governed by two authorities—an authority to charge a fee and an authority to retain and obligate fee collections. An agency’s authority to charge fees or establish other collections is derived either from the general statutory authority to assess user charges pursuant to the Independent Offices Appropriation Act of 1952 (IOAA) or from a statutory provision authorizing or directing an agency to establish a particular fee or other collection. IOAA provides that, in general, each service or thing of value provided by an agency to a person is to be self-sustaining to the extent possible, and provides the head of each agency with authority to prescribe regulations to establish the charge for a service or thing of value provided by the agency. Fees assessed under IOAA must be (1) fair and (2) based on costs to the government, the value of the service or thing to the recipient, public policy or interest served, and other relevant facts. Without additional statutory authority to retain fee collections, however, such collections are deposited as miscellaneous receipts in the U.S. Treasury and are not available to the agency collecting the fees. OMB Circular A-25 establishes federal policy regarding fees assessed in accordance with IOAA and provides guidance for agency implementation of charges and disposition of such collections. More specifically, agencies must apply the provisions of OMB Circular A-25 to fees assessed pursuant to IOAA. For fees assessed pursuant to another statutory authority, OMB Circular A-25 provides guidance to agencies that is intended to be applied only to the extent permitted by law and to the extent it is not inconsistent with a controlling statute. In many instances, agencies receive specific authority through authorizing or appropriations legislation to collect fees and retain and obligate the collections. Such legislation may establish a specified rate or amount to be assessed, how the fee is to be calculated, the method and timing of collection, the authorized purposes for which fee collections may be used, and the degree of flexibility an agency has to set and revise fee rates through the regulatory process. Regardless of whether a fee program is established under the authority of IOAA or some other statute, we have reported on the benefits agencies could realize by applying the principles of OMB Circular A-25 to their programs receiving fees and other collections. Congress—through the authorization and appropriations processes—oversees DHS funding, typically at the account level, and may, through enacted legislation, specify the purpose for which appropriated funds may be used, restrict the amount or purpose for which the funds can be used, and require that an agency report on activities conducted at the account or program level. For each program receiving fees or other collections, DHS or its components must be provided with authority to (1) conduct the activity for which the fee or other collection is authorized, (2) collect the fee or other collection at authorized levels, and (3) obligate and expend the funds collected. Furthermore, the degree of flexibility a DHS component has with respect to managing its programs receiving fees and other collections depends upon the statutory authority upon which the fee or other collection is based.
For example, in cases where a statute prescribes a specific amount the agency is to charge, component officials may lack the flexibility to adjust the amount charged through the regulatory process and instead must submit a legislative proposal seeking statutory changes if it is determined that circumstances warrant an adjustment. DHS OCFO has responsibility to oversee the department’s budget formulation process in order to ensure, among other things, that DHS resources from fees and other collections are used and managed in accordance with applicable laws and policies. To this end, DHS OCFO is to ensure components comply with the biennial reporting requirements and other applicable provisions of the CFO Act and OMB Circular A-25. For example, the CFO Act provides that, among other things, an agency’s CFO is to review, on a biennial basis, the fees, royalties, rents, and other charges imposed by the agency for services and things of value it provides, and make recommendations on revising those charges to reflect costs incurred by the agency. In addition, DHS OCFO periodically conducts department-wide reviews of its portfolio of programs receiving fees and other collections to identify ways to minimize the effects of increasing fiscal constraints on DHS’s mission and reduce reliance on annual appropriations. DHS OCFO has completed two such reviews. In fiscal year 2012, DHS OCFO conducted a Fee Structure Review that considered whether opportunities existed to increase flexibility in the discretionary budget in fiscal years 2014 through 2018 from both existing programs receiving fees and other collections and prospective new fees. The study focused on the amount of costs recovered by programs from collections for activities that are funded jointly by collections and annual appropriations for fiscal years 2014 through 2018. In March 2015, DHS completed the 2014-2015 User Fee Winter Study (Winter Study). According to DHS OCFO officials, this study was initiated as part of the DHS Secretary’s “Unity of Effort” initiative, and the results of the Winter Study were expected to inform DHS’s fiscal year 2017 budget formulation process and educate internal DHS stakeholders on DHS’s portfolio of programs receiving fees and other collections. According to the Winter Study Terms of Reference, the purpose of the Winter Study was to provide DHS with an opportunity to consider broadly the use of user fees and similar financing alternatives—such as fines and trust funds—across DHS, to examine how the use of these fees and other collections furthers key policy objectives, and to determine whether there is a need for greater cohesion in the management, budgeting, and oversight of user fees. Specifically, the goals of the study were to (1) consider how best to leverage resources from fees and other collections across DHS; (2) examine how the use of fees and other collections furthers key policy objectives; and (3) strengthen management, budgeting, and oversight of user fees and other collections. According to DHS officials, to achieve these goals, representatives from each DHS component met on an ad hoc basis as part of the Winter Study working group. More specifically, the Winter Study sought to identify the current amounts of user fees and other collections and respective legal authorities, the intended degree of cost recovery, the history of previous requests for fee adjustments, and best practices among components.
Findings and recommendations from the Fee Structure Review and Winter Study are discussed later in this report. DHS components have responsibility for the collection and obligation of fees and other collections in accordance with applicable laws and policies. CBP and the Federal Emergency Management Agency (FEMA) collected more than half of the approximately $15 billion DHS received from fees and other collections in fiscal year 2014, as shown in figure 1, with the Transportation Security Administration (TSA), U.S. Citizenship and Immigration Services (USCIS), and NPPD programs also collecting in excess of a billion dollars each. Appendix I provides details on each of the 38 programs receiving fees and other collections, including general authorities, amounts collected in fiscal year 2014, and descriptions of the programs and purposes for which collections may be used. Our analysis of DHS collections and cost data showed that 14 of the 38 programs receiving fees and other collections in fiscal year 2014 collected amounts that fully covered identified program costs. Of the remaining 24 programs, collections for 20 programs partially covered identified program costs, and DHS did not provide cost data, or we determined such data may not be reliable, for 4 programs. For the 14 programs with full cost recovery, collections exceeded identified program costs by approximately $1.4 billion, and DHS did not rely on annual appropriations to cover any program costs. The $1.4 billion in collections that exceeded program costs was handled in several ways, such as being carried over as unobligated balances, maintained in reserve funds, or deposited to the Treasury in accordance with applicable laws. For the 20 programs that had identified program costs exceeding collections in fiscal year 2014, costs exceeded collections by an estimated $6 billion. (See app. II.) A fee or collection may be assessed at a rate that either partially or fully recovers costs from the user, or it may be assessed according to some other basis, such as the market value of the benefit provided. If a fee or collection is set at a rate that does not achieve full cost recovery, the difference is generally funded through amounts received in an agency’s annual appropriations acts. According to component documentation, annual appropriations covered about 97 percent of the estimated $6 billion difference, unobligated carryover balances covered about 3 percent of the estimated difference, and transfers of collections from one fee program to another fee program covered less than 1 percent of the estimated difference in fiscal year 2014. For the 4 remaining programs, either DHS component officials cited reasons they could not provide cost data or we determined such data may not be reliable. For example, a CBP official stated that CBP’s activity-based costing model does not capture costs for the Harbor Maintenance Fee, which is administered by another agency, or for the Merchandise Processing Fee, from which CBP deposits collections in the general fund to offset CBP’s salaries and expenses appropriation. In addition, CBP does not collect cost or volume data at the level of detail needed to fully identify the costs associated with the 78 specific fees included under the Miscellaneous Fees Collections account, which collectively constitute less than 1 percent of CBP’s total collections as reported in its biennial review.
For the fourth remaining program—TSA's Aviation Security Infrastructure Fee (Air Carrier Fee)—TSA reported that the statutes governing the Air Carrier Fee and the Passenger Civil Aviation Security Service Fee specify that collections from these programs be used to offset the authorized costs of providing civil aviation security services, and hence TSA tracks the costs of these programs only in aggregate. Of the 20 programs with costs exceeding collections, DHS (or the U.S. Department of Agriculture for one program) initiated actions intended to increase cost recovery for 6 fee programs, comprising about 85 percent of the estimated $6 billion difference in fiscal year 2014. One of the programs—TSA's Passenger Civil Aviation Security Service Fee—accounted for nearly 76 percent of the difference. Most recently, DHS submitted legislative proposals with its fiscal year 2017 budget submission to increase fees for three programs that support the provision of civil aviation security services (the Passenger Civil Aviation Security Service Fee), customs-related inspections (the Consolidated Omnibus Budget Reconciliation Act (COBRA) Fee), and inspection and detention services at air and sea ports of entry (the Immigration Inspection User Fee). DHS reported that the proposals to increase collections for these programs would accomplish such things as reducing reliance on annual appropriations, funding additional CBP officers, and contributing to deficit reduction, as a portion of collections would go directly to the general fund of the Treasury. Additionally, DHS pursued regulatory adjustments to one program that provides for the registration of commercial vessels (Coast Guard's Commercial Vessel Documentation Fee), and the Department of Agriculture amended the regulation for one program through which CBP conducts inspectional activities for international arrivals of passengers, conveyances, animals, plants, and agricultural goods at ports of entry (AQI Fee). In addition, at DHS's request, OMB authorized a rate adjustment for one program that provides law enforcement services on federally controlled property (FPS's Basic Security and Oversight Fees). See table 1 for the status of actions taken by DHS or the U.S. Department of Agriculture to increase cost recovery or contribute to the reduction of the federal deficit for the 6 fee programs as of April 2016. DHS components have plans to address cost recovery issues for 4 additional fee programs, according to component officials. Specifically, TSA plans to amend regulations to increase fees for two fee programs—the Security Threat Assessments for Hazmat Drivers and the Commercial Aviation and Airport Fee and Other Security Threat Assessment Fee—as it seeks to harmonize the entire set of TSA's vetting and credentialing fees, which includes seven different programs, by fiscal year 2018, according to TSA officials and documentation. In addition, ICE officials told us they plan to submit a legislative proposal to increase spending authority to cover program costs for the Student and Exchange Visitor Program (SEVP) in fiscal year 2017. According to ICE documentation, the proposed increase is necessary to fund the costs of future mission requirements and to invest in modernization initiatives for the program, among other things. Finally, USCIS issued a proposed rule to adjust most fees within the Immigration Examinations Fee Account to address the difference between costs and collections.
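To illustrate the cost-recovery comparison described above, the following is a minimal sketch in Python of how programs can be classified as fully recovering, partially recovering, or lacking reliable cost data. All program names and dollar amounts are hypothetical placeholders; this is not GAO's analysis code or actual DHS data.

```python
# Minimal sketch of the cost-recovery classification described above.
# Program names and amounts are hypothetical placeholders, not DHS data.

programs = {
    # name: (collections, identified_costs), in millions of dollars
    "Program A": (1200.0, 950.0),  # collections exceed costs
    "Program B": (400.0, 400.0),   # exactly full cost recovery
    "Program C": (250.0, 900.0),   # partial cost recovery
    "Program D": (75.0, None),     # cost data unavailable or unreliable
}

full, partial, no_data = [], [], []
for name, (collections, costs) in programs.items():
    if costs is None:
        no_data.append(name)
    elif collections >= costs:
        full.append(name)
    else:
        partial.append(name)

# Surplus for full-recovery programs and shortfall for the rest,
# analogous to the $1.4 billion excess and $6 billion difference above.
surplus = sum(programs[n][0] - programs[n][1] for n in full)
shortfall = sum(programs[n][1] - programs[n][0] for n in partial)

print(f"Full recovery: {full}, surplus ${surplus:.0f}M")
print(f"Partial recovery: {partial}, shortfall ${shortfall:.0f}M")
print(f"No reliable cost data: {no_data}")
```

Under these assumptions, the computed surplus and shortfall play the same roles as the $1.4 billion excess and $6 billion difference discussed above.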
DHS component officials did not document the analyses and processes they used to manage differences between identified program costs and collections for the remaining 10 programs receiving fees and other collections, nor did they document decisions related to cost recovery. For 6 of the 10 programs, component officials had identified deficiencies related to the difference between program costs and collections and made recommendations to address them in fee studies and biennial fee reviews, but did not document the reasons that they did not pursue the recommended actions. For the remaining 4 programs, component officials said that they did not document reasons for not addressing the differences because they did not consider it required or necessary. Reasons cited for not documenting the processes for managing and making decisions on how to address the estimated $726 million difference included that some fees and other collections are set in statute and are not intended to cover full program costs, and that some programs had other funds available to cover the difference between identified program costs and collections in fiscal year 2014 (see table 2). While we were able to determine—through a review of data, DHS fee studies, and biennial fee reviews, and through interviews—why components decided not to take actions to address differences in collections and program costs for these 10 programs, components have not documented the processes, analyses, and resulting decisions in a way that would enable systematic oversight of these decisions or inform management in their decision making. For example, CBP officials stated that the component has an informal decision-making process, with a goal to prioritize action to increase cost recovery for the three fee programs with the highest volume of fee collections across its portfolio of fees and other collections (the AQI, COBRA, and Immigration Inspection User Fees). However, CBP has not documented its decision-making process, making it difficult for stakeholders to determine why action was initiated for some fee and collection programs and not others, such as the Land Border Inspection Fee, which, while small relative to the rest of CBP's fee program portfolio, comprised nearly 10 percent—or $645 million—of the difference of identified program costs over collections for the overall DHS fee program portfolio in fiscal year 2014. DHS has emphasized the importance of documenting processes and analysis to inform decision makers and achieve agency goals in the DHS Fiscal Years 2014-2018 Strategic Plan. Specifically, the strategic plan states that DHS's Unity of Effort initiative to integrate DHS organizations can be achieved through documenting processes and analysis to provide transparency and relevant information to DHS decision makers. Standard practices for project management also support this practice to better ensure that programs are operating efficiently and effectively. In addition, Standards for Internal Control in the Federal Government calls for agencies to help ensure transparency and accountability over agency resource decisions by clearly documenting significant events—such as decisions for addressing differences in collections and program costs—in a form readily available for examination. Component officials said that DHS OCFO had not provided requirements or guidance to document the processes, analyses, and decisions regarding the management of fee and other collection programs.
DHS OCFO officials said that they do not provide such guidance because they have delegated fee management and oversight responsibilities to component officials. Without documentation, transparency is lacking regarding whether component decisions not to address differences in program collections and costs are reasonable and appropriate, particularly where DHS or its components have identified and reported deficiencies and recommended actions to address them. Further, DHS OCFO may lack complete information to determine why components initiated actions for some fee programs set in statute but not others—or to assess how decisions for managing individual component program portfolios align with effective practices for managing the overall DHS portfolio. DHS component officials said that they have established targets for a minimum level of unobligated balance to carry over from one fiscal year to the next for most programs. Specifically, these officials said that they established such targets for 21 of the 25 programs with unobligated balances carried over to fiscal year 2014, based on historical trends in collections and projected program costs. Component officials stated that for most programs, these targets are set at levels to sustain a program's operations for the first quarter of the succeeding fiscal year, with some components adjusting targets based on differences in administering fees and other collections, spending authority, and statutory limitations. DHS component officials did not identify targets for minimum unobligated carryover balances for the remaining four programs, which were funded by collections from insurance premiums or reimbursable agreements. Specifically, FEMA officials said that such targets were not necessary for the National Flood Insurance Fund because FEMA has borrowing authority to cover the difference between the costs of program operations and collections from insurance premiums. FPS officials said that they had not established such targets for FPS's three programs—Basic Security and Oversight, Building Specific, and Reimbursable Agency Specific—because officials lacked the data and cost models to do so for these program operations funded by reimbursable agreements between FPS and other federal agencies. DHS OCFO officials stated that responsibility for managing unobligated carryover balances is delegated to components because, according to these officials, component management is in the best position to determine the appropriate amount of unobligated carryover balance needed to ensure efficient program operations. Our analysis of DHS OCFO data showed that DHS components carried over unobligated balances totaling $2.6 billion from fiscal year 2014 to fiscal year 2015 across the 25 fees and other collections. (See app. III.) Our analysis comparing amounts of unobligated carryover balances to agency criteria showed that components generally met minimum targets set to sustain program operations or relied on other mitigation strategies. Specifically, our comparison of unobligated carryover balances from fiscal year 2013 and amounts obligated in the first quarter of fiscal year 2014 showed that components carried over unobligated balances sufficient to ensure continuity of operations for 19 of the 21 fee and other collections programs that had targets for minimum unobligated carryover balances in place, and CBP officials cited other mitigation strategies to sustain operations for the remaining two programs.
Unobligated carryover balances for CBP's COBRA and Land Border Inspection Fee programs did not cover about 48 percent (approximately $31 million) and about 70 percent (approximately $5 million), respectively, of first quarter fiscal year 2014 obligations. However, CBP officials said that they did not rely on unobligated carryover balances to sustain operations for these programs because the reimbursable structure of the COBRA and Land Border Inspection Fees allows CBP to address funding shortages through the use of CBP's annual appropriations, as available. DHS components have taken some steps to manage potential excess unobligated carryover balances, but have not established targets for the maximum level of unobligated carryover balance or a process that uses these targets to ensure efficient use of funds. DHS component officials had identified actions to manage excess unobligated carryover balances for seven programs whose balances have grown—or have the potential to grow—beyond levels these officials deemed necessary to ensure efficient program operations. Component officials cited actions under way to manage excess unobligated carryover balances by, among other things, redirecting fee resources, adjusting fee rates, and submitting proposals to increase spending limits. However, the processes established by DHS components to manage unobligated carryover balances do not include reasonable and appropriate targets for these excess balances. We have previously reported that it is important for agencies to assess reserves for reasonableness, set clear goals—such as maximum reserve levels—and clarify how the reserve will be implemented to help ensure agency accountability and transparency. DHS component officials stated that targets for maximum unobligated carryover balances have not been established for their respective programs because the establishment of such targets is complicated by factors component officials deemed beyond the agency's control. They cited factors such as the level of unobligated balances for some programs being the result of rates and spending limits set in statute, and annual fluctuations in program users and associated collections. However, actions have been taken in the past to address some of these factors by submitting legislative proposals to adjust rates and spending limits, and by developing models that project fluctuations in program use and collections. DHS OCFO said that it delegates the responsibility for managing fee and collection programs, including establishing an appropriate range of unobligated carryover balance, to the components as they are best positioned to understand the factors affecting the management of the programs. We have previously reported that agencies managing fee accounts should have a robust strategy to estimate and manage a carryover balance that assesses how effectively agencies anticipate program needs and ensure the most efficient use of resources. If an agency does not have a robust strategy in place to manage carryover balances, or is unable to adequately explain or support the reported carryover balance, then a more in-depth review is warranted, as balances may rise to unnecessarily high levels, producing potential opportunities for those funds to be used more efficiently elsewhere.
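As a rough illustration of the two checks discussed above, the sketch below compares a program's unobligated carryover balance against a minimum target (first-quarter obligations, the continuity-of-operations criterion components described) and a hypothetical maximum target expressed as a share of annual program costs. The 40 percent ceiling and all dollar figures are assumptions for illustration only; as noted above, DHS components have not established maximum targets.

```python
# Sketch of carryover-balance checks for one fee program. The minimum
# target mirrors the first-quarter-obligations criterion components
# described; the maximum target (a share of annual program costs) is a
# hypothetical construct, since DHS has not set maximum targets.

def check_carryover(balance, q1_obligations, annual_costs,
                    max_share_of_costs=0.40):
    """Return findings comparing a balance against min and max targets."""
    findings = []
    if balance < q1_obligations:
        findings.append(
            f"below minimum: covers only "
            f"{balance / q1_obligations:.0%} of Q1 obligations")
    if balance > max_share_of_costs * annual_costs:
        findings.append(
            f"above maximum: balance is {balance / annual_costs:.0%} "
            f"of annual program costs")
    return findings or ["within targets"]

# Hypothetical program (in $ millions) whose balance has grown to about
# 45 percent of annual costs, echoing the pattern described for FPS below.
print(check_carryover(balance=193.0, q1_obligations=110.0,
                      annual_costs=430.0))
```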
Without criteria for maximum levels of unobligated carryover and documented processes for managing such balances, it is unclear whether steps taken by components to manage excess balances will be sufficient to ensure efficient program operations, as highlighted in the following examples. USCIS has not established targets for a maximum unobligated carryover balance to determine the extent to which additional actions may be needed to reduce or redirect excess amounts included in the approximately $983 million in unobligated carryover balance in its IEFA as of the end of fiscal year 2014. As shown in figure 2, the $983 million balance was comprised of approximately $516 million derived from nonpremium processing collections used to fund program operations related to the processing of immigration benefit applications, while the remaining $467 million was derived from premium processing collections used primarily to support USCIS's Transformation initiative to move from manual to electronic processing systems. USCIS has taken actions to manage the growth in the unobligated carryover balance for the nonpremium processing fee by using it to fund the increasing difference between identified program costs and fee collections. These actions have resulted in nonpremium balances declining to levels below the minimum target level of $750 million identified by USCIS to mitigate potential shortfalls in fee collections to cover program costs, as shown in figure 2. USCIS officials reported that they are in the process of developing an analytical methodology for determining an appropriate maximum level of carryover for any year given cash flow, deferred revenue, and reserve fund considerations, but stated that they have been challenged to identify a maximum level because program funding requirements fluctuate with levels of pending caseload. In addition, USCIS issued a proposed rule in May 2016 to address the difference between costs and collections within the IEFA, including most IEFA nonpremium fees. It is unclear, however, to what extent USCIS's actions will address the continuing growth of the premium processing side of the IEFA. USCIS estimated that the unobligated carryover balance for the premium processing fee could continue to grow to $1.1 billion by fiscal year 2020, as fee collections are expected to exceed Transformation initiative funding requirements. Therefore, USCIS reported that it has begun to reduce the growing balance by expanding the use of these premium fee collections to fund one-time infrastructure improvements that support adjudication services other than Transformation, such as its Financial Systems Modernization project. USCIS estimated in its spending plan that expanding the use of premium processing fee collections will result in an unobligated carryover balance for premium processing of about $341 million by the end of fiscal year 2020. However, USCIS has not established a maximum target for the appropriate amount of unobligated carryover balance that should be maintained, consistent with actions that could be taken under the spending plan, to ensure efficient use of funds. According to USCIS officials, the agency is currently implementing its fiscal year 2016 operating plan, which discusses planned uses of premium processing collections and is based on its assessment of projected collections, planned Transformation program requirements, and other appropriate infrastructure requirements.
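The growth path USCIS projects for the premium processing balance can be illustrated with a simple year-over-year cash-flow projection, sketched below. The starting balance matches the reported fiscal year 2014 figure, but the annual collections and spending amounts are hypothetical values chosen only to reproduce the general pattern of a balance approaching $1.1 billion by fiscal year 2020; they are not USCIS estimates.

```python
# Sketch of a year-over-year projection of an unobligated carryover
# balance. The starting balance matches the reported fiscal year 2014
# premium processing figure; annual collections and spending are
# hypothetical values, not USCIS estimates.

def project_balance(start, collections_by_year, spending_by_year):
    """Yield (fiscal year, end-of-year balance) given annual cash flows."""
    balance = start
    for year, (cash_in, cash_out) in enumerate(
            zip(collections_by_year, spending_by_year), start=2015):
        balance += cash_in - cash_out
        yield year, balance

# The balance grows whenever collections outpace spending.
for year, bal in project_balance(
        start=467.0,  # $ millions, end of fiscal year 2014
        collections_by_year=[450, 470, 490, 510, 530, 550],
        spending_by_year=[350, 360, 380, 400, 420, 440]):
    print(f"FY{year}: ${bal:,.0f}M unobligated")
```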
However, USCIS has not identified maximum targets for the unobligated carryover balances needed for both nonpremium processing and premium fees within the IEFA. Without such targets, USCIS may not be able to determine whether expanding uses for premium processing fee collections is sufficient to achieve an appropriate balance within the premium processing program. NPPD's FPS has not established targets to determine the extent to which the approximately $193 million of unobligated carryover balance as of fiscal year end 2014 was appropriate to fund operations across its three collection programs. Our analysis of FPS data showed that the unobligated carryover balance for the Basic Security and Oversight Fees increased at a greater rate than identified program costs for each of the three collection programs from fiscal year 2011 through 2014, rising from a low of about 17 percent of identified program costs in fiscal year 2011 to a high of about 45 percent in fiscal year 2014. (See fig. 3.) According to FPS officials, the increase in unobligated carryover balances across its collection programs can be attributed to hiring delays and FPS's decision to maintain spending at fiscal year 2011 levels to help fund enhanced security operations and reserve requirements, such as surge-related activities and information technology investments that are going through the acquisition process. FPS reported in its Congressional Budget Justification for Fiscal Year 2017 that it is working on a sustainable revenue model whereby collections from other agency customers sustain the cost requirements of the same year, as reliance on unobligated carryover balances and recoveries is a short-term fix and not a sustainable long-term solution. In July 2015, FPS informed its customer agencies that it will increase the rates for its collection programs in fiscal year 2017 to, among other things, maintain its capacity to rapidly surge personnel to protect federal facilities during periods of heightened vulnerability. FPS has not determined at what point its unobligated carryover balance would be insufficient, or would continue to be in excess of need, to address the projected growth in cost for program operations, surge activities, and long-term capital investment decisions. FPS officials said they are evaluating a recommendation made by an independent audit firm to maintain a minimum operating reserve of 1 to 2 months, based on the firm's analysis of FPS's average cash flow; however, FPS has yet to determine whether this recommendation for the size of the program's operating reserve is appropriate to meet its future operating needs given recent increases in the agency's surge operations. Without evaluating the impact of the fee increase on its fee balances and establishing targets for both minimum and maximum unobligated carryover balances, stakeholders lack reasonable assurance that FPS is managing its resources so that its carryover balances do not grow beyond levels necessary to ensure efficient program operations, or fall below levels necessary to ensure continuity of program operations, meet reserve requirements for potential surge operations, and make effective capital investment decisions.
CBP has not established a target for the maximum unobligated carryover balance necessary for its User Fee Facility program or taken action necessary to reduce the approximately $14 million balance as of fiscal year end 2014 within the collection program, a balance that, while relatively small, consistently constituted over 100 percent of the program's operational costs each year from fiscal years 2010 through 2014. Our analysis of CBP data showed that the unobligated carryover balance amounted to about 160 percent of total identified program costs in fiscal year 2014, exceeding these program costs by approximately $7 million. Our analysis further showed that unobligated carryover balances within this fee program ranged from a high of over $17 million in fiscal year 2013 to a low of about $15 million in fiscal year 2012. (See fig. 4.) CBP officials stated that until recently, CBP did not have the system capacity to bill the actual direct and indirect costs incurred at each User Fee Facility and that the rate charged to small airports may be inconsistent with the grade of the officer providing customs services and the indirect costs of those services. Beginning in fiscal year 2013, CBP began charging user fee facilities 15 percent for indirect costs, per OMB recommendation for agencies that are unable to identify indirect costs. Officials also cited statutory limitations on how CBP may use User Fee Facility collections as a factor contributing to the excess balance. CBP has taken some action to address these causes of excess unobligated carryover balances in the program. In 2012, CBP began piloting a new module within its financial system that captures actual salary, benefit, and overtime costs for each user fee airport facility and bills actual expenses for reimbursement from the program on a monthly basis. Moreover, CBP officials said that CBP is in the process of identifying how existing policy, regulations, and memorandums of agreement (MOA) with small airports need to be modified to implement the new billing module. CBP officials stated that, once fully implemented, the pilot system to bill based on actual expenses, as well as CBP efforts to revise MOAs to adjust charges for about 6 of the 50 facilities served under the User Fee Facility program, may reduce the unobligated carryover balance over time, but CBP did not document analysis showing a target balance or a time frame for completing these actions. CBP also has not taken action to assess other reasons for the excess unobligated carryover balance in the User Fee Facility program, including whether rates charged to facilities are too high and should be reduced. CBP has not commissioned a comprehensive study to analyze small airport operations, costs, and activities to determine how to better align the fee with cost, as recommended in its most recent biennial fee review. Rather, CBP officials stated that CBP will primarily rely on its new billing system to manage the excess carryover balance associated with the program.
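The billing change described above can be illustrated with a minimal sketch: under the pilot module, a facility is billed its actual monthly direct costs plus the flat 15 percent indirect-cost surcharge that CBP applies per OMB recommendation. The function name and all cost figures below are hypothetical, for illustration only.

```python
# Sketch of monthly billing under the pilot module: actual direct costs
# plus a flat 15 percent indirect-cost surcharge. The function and all
# cost figures are hypothetical, for illustration only.

INDIRECT_RATE = 0.15  # flat surcharge in lieu of measured indirect costs

def monthly_bill(salary, benefits, overtime):
    """Reimbursement owed by one facility for one month of service."""
    direct = salary + benefits + overtime
    return direct * (1 + INDIRECT_RATE)

# Example: one officer-month of customs services at a small airport.
print(f"${monthly_bill(salary=9500, benefits=3200, overtime=1100):,.2f}")
```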
Moreover, CBP has not identified an appropriate maximum level of unobligated carryover, studied the potential impacts of its efforts to address the causes of the excess unobligated carryover balance, or determined the need to take further action to ensure proper fee alignment and efficient use of funds. Without evaluating the impact of actions taken to manage excess unobligated carryover balances, or whether additional actions are needed to align the fee rate with costs charged to the program, stakeholders lack reasonable assurance that CBP is managing unobligated carryover balances so that they do not continue to rise beyond levels necessary for efficient program operations. DHS OCFO distributes instructions to components for submitting the results of biennial reviews of their fee and other collections programs to DHS, but does not provide oversight to ensure that components conduct these reviews. Pursuant to the CFO Act and consistent with implementing guidance in OMB Circular A-25, an agency's CFO is to review, on a biennial basis, the fees and other charges imposed by the agency for services and things of value it provides, and make recommendations on revising the charges to reflect costs incurred in providing such services and things of value. In addition, federal programs are subject to Standards for Internal Control in the Federal Government, which states that agencies should ensure that ongoing monitoring occurs during the course of normal operations to help evaluate program effectiveness. Our review of DHS and component records showed that while four components submitted results of reviews conducted for each of their respective fee and other collections programs, three components did not, as shown in table 3. Specifically, CBP, NPPD, TSA, and the Coast Guard reviewed and reported results for all of their collective 30 programs, but FEMA, ICE, and USCIS did not review 6 of the remaining 8 programs. FEMA, ICE, and USCIS officials cited three reasons for not conducting biennial reviews for their programs. Specifically, these officials stated that biennial review and reporting requirements in the CFO Act do not apply to programs with rates set in statute, to accounts that are too small relative to other programs to warrant resources spent on a review, or to programs that are not structured as traditional user fees. However, we have previously reported that, consistent with OMB Circular A-25 and statements of OMB staff, agencies should review and report on any government service provided for which an agency receives revenue in accordance with the CFO Act, regardless of the relative size of the fee or whether rates are set in statute or by the agency through regulation. Further, some DHS components, such as CBP, have conducted biennial reviews for programs receiving collections that are not traditionally considered fees or other charges, such as Immigration Enforcement Fines. Such actions help ensure that decision makers have complete information about program costs and activities. For example, we have reported that the CFO Act's biennial review provisions provide decision makers with comprehensive information necessary to support robust deliberations about fee financing.
For example, TSA officials stated that information from biennial fee reviews of the Passenger Civil Aviation Security Service Fee—a fee with a rate set in statute—enabled TSA to inform congressional stakeholders of an increasing gap between fee collections and aviation security program costs. Information derived from biennial fee reviews may similarly inform congressional stakeholders and provide similar benefits to fee programs, regardless of whether the rate charged is set in statute. In addition, Coast Guard officials stated that its biennial fee reviews of the Commercial Vessel Documentation Fee program—a relatively small fee program collecting fees averaging about $2 million annually from fiscal years 2010 through 2014—are important because the Coast Guard is generally authorized to maintain funds for obligation only during the fiscal year in which they become available (1-year authority) and thus cannot carry over unobligated balances into subsequent fiscal years. As such, biennial fee reviews help the Coast Guard ensure fee collections are sufficient to cover program costs, and provide information to relevant stakeholders about the need to adjust fee rates. Furthermore, we have previously concluded that a regular process of reviewing fee programs could reveal and help address challenges identified by agencies in a more timely and systematic manner. For example, FEMA reported that the Radiological Emergency Preparedness Program, a program for which FEMA does not conduct biennial fee reviews because it does not consider the program to be a traditional user fee, faced challenges accurately estimating costs, resulting in FEMA refunding nearly $14 million to Radiological Emergency Preparedness Program users from fiscal year 2013 through 2015. A regular review may have helped the agency identify the issue sooner and avoid having to issue refunds. Moreover, without regular comprehensive reviews, agencies and Congress may miss opportunities to improve fee design and management processes; deficiencies left unaddressed could contribute to inefficient use of government resources. With regard to reporting, our review of DHS's CFO Act report—The Department of Homeland Security's Agency Financial Report for Fiscal Year 2014 (Agency Financial Report)—showed that DHS OCFO did not report the extent to which all components are conducting such reviews or any proposals to address management and operational deficiencies identified by components, such as those relating to the adjustment of fee and other collection rates. OMB Circular A-25 provides that agencies are to discuss the results of biennial fee reviews and any resulting proposals, such as adjustments to fee rates, in the annual report submitted pursuant to the CFO Act. The Agency Financial Report did not include this information, and instead included a listing of DHS components and some of the programs they administer that receive fees and other collections. DHS OCFO officials stated that more detailed information on components' biennial fee reviews was included in the quarterly user fee reports DHS submitted to Congress and referenced in the Agency Financial Report. In addition, DHS OCFO officials stated that duplicating this information in the Agency Financial Report would not have provided additional useful information to decision makers.
However, the quarterly reports did not include information on any proposals to address management and operational deficiencies, and as of July 2015, DHS was no longer under direction to submit quarterly user fee reports. Additionally, our review showed that DHS did not discuss in any of these reports the six programs receiving fees and other collections for which reviews were not conducted. Our review of the Agency Financial Report for fiscal year 2015 also found that the report did not include proposals to address management and operational deficiencies or other information from reviews of these programs. DHS OCFO officials stated that the department has not determined how it will report on the results of biennial fee reviews and any resulting proposals to adjust fee and other collection rates in the future, and needs to seek guidance from OMB on how the department should report on biennial fee reviews in future agency financial reports. Without transparency of fee program operations provided in the Agency Financial Report, or by other means, Congress and other stakeholders lack reasonable assurance that DHS OCFO has complete information on management and operational deficiencies to ensure components are making informed decisions regarding the actions needed to address such deficiencies. DHS OCFO has not established a process to actively monitor the status of components' efforts to address the management and operational deficiencies that have been identified across programs, such as those deficiencies relating to cost recovery and excess unobligated carryover balances. According to DHS OCFO officials, the oversight and monitoring of actions to address identified deficiencies is delegated to components because components are responsible for administering programs and are best positioned to understand the statutes governing the programs as well as the factors—such as changing economic conditions—affecting program implementation. However, we found that while components have recommended actions to address identified management and operational deficiencies, some components have not implemented these recommended actions or otherwise addressed long-standing deficiencies. Specifically, our analysis of biennial fee reviews conducted by components since fiscal year 2014 showed that components recommended 48 actions for 20 identified deficiencies across 18 programs receiving fees and other collections. (See app. IV.) Components most often identified deficiencies related to aligning fee rates to recover a greater share of program costs. Specifically, 12 of the 20 deficiencies related to recovering greater shares of program costs through collections by adjusting rates and spending caps or establishing charges for additional services. Another 5 deficiencies involved existing rates that may not distribute costs among users in an equitable manner. Two deficiencies identified challenges related to managing unobligated carryover balances. The remaining deficiency identified a difference in program collections and costs that could be addressed by recognizing other revenue sources, such as available unobligated carryover balances. However, components have not taken action to address 9 of these 20 deficiencies. For example, components have not addressed 5 of the deficiencies related to cost recovery, resulting in a difference between identified program costs and collections of around $700 million for the related programs since these deficiencies were identified in fiscal year 2014.
If left unaddressed, these deficiencies may lead to management and operational challenges, such as the inequitable distribution and inefficient use of funds. DHS OCFO identified the need for greater oversight of the DHS fee portfolio through its Winter Study. Specifically, the Winter Study found considerable variation across components relating to the development and budgeting of user fees and other collections, and recommended, as first steps toward greater standardization and coordination, (1) the establishment of a framework for developing fee proposals, and (2) a department-wide fee governance council comprising Chief Financial Officers from components responsible for collecting fees and other collections as well as representatives from DHS OCFO and the DHS Office of General Counsel. In January 2016, DHS formally established the DHS Fee Governance Council for the purpose of advising and assisting the DHS OCFO in establishing a consistent program for the financial management functions, activities, and policies relating to fees across DHS. Moreover, in accordance with its charter, the Fee Governance Council is to establish a governance and oversight structure for fees and other collections across DHS, developing policy guidance on issues such as how fees and other collections are established, updated, or changed, and how regular fee reviews are conducted. While the establishment of a department-wide fee council is a positive step, DHS has not determined whether DHS OCFO will use this venue to issue guidance to components or provide oversight to ensure appropriate actions are taken to address management and operational deficiencies, such as those relating to cost recovery and excess carryover. Standards for Internal Control in the Federal Government states that policies and procedures should provide reasonable assurance that ongoing monitoring and evaluation are institutionalized in an agency's operations, and requires that findings from audits and other reviews are promptly resolved. Further, we have previously reported that evaluating and reporting on results is a key practice that can assist interagency efforts in identifying areas for policy and operational improvement. Without oversight of components' decision-making processes—including tracking and reporting on the status of recommendations to address deficiencies—DHS cannot provide stakeholders with reasonable assurance that the agency is actively managing its portfolio of fees and other collections to mitigate the impact of management and operational deficiencies. Enhancing DHS's oversight of its component agencies' actions to address identified deficiencies could ensure that deficiencies are addressed in a timely manner and help DHS determine whether widespread management challenges are causing deficiencies to go unaddressed and whether additional guidance should be provided to address these challenges. The uncertain budgetary environment highlights the need for DHS and its components to effectively manage, use, and oversee the approximately $15 billion in collections by DHS components across 38 homeland security-related programs. DHS and components have taken steps for some programs to strengthen management and oversight, such as by adjusting fees and other collections to cover a higher proportion of identified program costs, establishing some targets for minimum levels of unobligated carryover balances, conducting fee reviews, and establishing a DHS Fee Governance Council.
However, opportunities remain for DHS OCFO and components to improve the transparency and accountability of management decisions and processes across all programs in the DHS portfolio. For example, ensuring that, for all fee and other collections programs, DHS OCFO and components are (1) documenting decisions about whether or not to take action, as appropriate, to address differences between program costs and collections; (2) establishing targets for appropriate minimum and maximum unobligated carryover balances; (3) conducting fee reviews; and (4) tracking and reporting the status of recommended actions would allow the DHS Fee Governance Council, the DHS OCFO, or others to provide oversight and ensure that management practices and decisions are appropriate and effective for ensuring continuity of operations and equity in amounts charged to users of program services. Further, regularly reviewing and reporting the results of fee and other collections programs to agency heads, OMB, and Congress could also enhance information available to the annual budget process and better inform decisions to adjust fees or aspects of program operations through changes in legislation or regulation. To ensure effective management and oversight of DHS programs receiving fees and other collections, we recommend that the Secretary of Homeland Security direct the DHS Chief Financial Officer to use some means, such as the DHS Fee Governance Council, to ensure that component management take the following actions for each fee and other collections program that they administer: document the processes and analyses for assessing and, as appropriate, for managing the difference between program costs and collections, and document resulting decisions; establish processes for managing unobligated carryover balances, to include targets for minimum and maximum balances for programs that lack such processes and targets; conduct reviews to identify any management and operational deficiencies; and take action to track and report on management and operational deficiencies—including reasons supporting any decisions not to pursue recommended actions—identified in fee reviews or through other means. Further, we recommend that the Secretary of Homeland Security direct the DHS Chief Financial Officer to discuss the results of biennial fee reviews and any resultant proposals in the annual Agency Financial Report, annual performance report, or other reporting mechanism, consistent with the CFO Act and OMB Circular A-25. We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reproduced in appendix V. In its comments, DHS concurred with the five recommendations and described actions under way or planned to address them. DHS also provided technical comments, which we incorporated as appropriate.
DHS stated that through the DHS Fee Governance Council, chaired by the Deputy DHS Chief Financial Officer, guidance will be developed and disseminated to (1) document the processes and analyses for assessing and, as appropriate, for managing the difference between program costs and collections and document resulting decisions; (2) establish processes for managing unobligated carryover balances, to include targets for minimum and maximum balances for programs that lack such processes and targets; (3) conduct reviews to identify any management and operational deficiencies; and (4) take action to track and report on management and operational deficiencies—including reasons supporting any decisions not to pursue recommended actions—identified in fee reviews or through other means. DHS estimated that these actions would be completed by July 31, 2017. Once guidance is developed and disseminated to components, components take appropriate actions to implement this guidance, and tools are developed to measure and assess changes in fee balances, these actions should address the intent of our recommendations to ensure effective management of DHS fee programs. In regard to our second recommendation, however, DHS stated that GAO's characterization of DHS components' planning processes for managing carryover balances is not entirely accurate. Specifically, DHS stated that GAO characterizes these processes as being too heavily focused on ensuring continuity of program operations rather than efficient use of funds, when in fact components actively manage carryover balances to ensure effective use of program funds. DHS cited as an example USCIS's fee account annual operating plan development process, which is used to guide resource deployment to best achieve mission-critical goals. Our report does not state that DHS components were too heavily focused on ensuring continuity of operations, only that components placed more focus on this area than on managing the efficient use of funds. In general, we found that while components had identified minimum balances for most programs and mitigation strategies for when balances may fall below these minimums, components had not identified maximum balances and mitigation strategies for when balances grow above these maximums. DHS stated that USCIS is taking additional actions to address our second recommendation by prototyping new tools to measure and assess fee account carryover balances, cash flow, and changes to fee balances. DHS estimated that this action would be completed by September 30, 2016. Regarding our fifth recommendation, that the DHS Chief Financial Officer discuss the results of biennial fee reviews and any resultant proposals in the annual Agency Financial Report, annual performance report, or other reporting mechanism, consistent with the CFO Act and OMB Circular A-25, DHS concurred, stating that DHS is in the process of developing a consolidated tracking system for the results of biennial fee reviews and any resultant proposals. In addition, DHS stated that DHS OCFO's Financial Management Division will ensure that the results of biennial fee reviews and any resultant proposals are discussed in the annual DHS Agency Financial Report. DHS estimated that these actions would be completed by July 31, 2017. These actions should address the intent of the recommendation and better position DHS to ensure the effective oversight of programs receiving fees and other collections.
We are sending copies of this report to the Secretary of Homeland Security, the Office of Management and Budget, and appropriate congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (213) 830-1011 or vonaha@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VI. In fiscal year 2014, the Department of Homeland Security (DHS) received fees and other collections totaling approximately $15 billion from 38 programs with an estimated $17 billion in identified program costs. For these 38 programs, table 4 describes the legal authorities, program descriptions, and financial information in terms of total collections and identified program costs for fiscal year 2014. [Table 5 lists individual programs receiving fees and other collections, including the Immigration Inspection User Fees (IUF), Consolidated Omnibus Budget Reconciliation Act (COBRA) Fee, Agricultural Quarantine Inspection User Fees, Merchant Mariner Licensing and Documentation Fee, Basic Security and Oversight Fees, Student and Exchange Visitor Program Fee, Overseas Inspection and Examination Fee, Commercial Aviation and Airport Fee and Other Security Threat Assessment Fees, and Enforcement and Removal Operations Fee.] In biennial fee reviews, program costs identified by components may include, but are not limited to, the direct and indirect costs associated with specific activities or tasks, such as administrative costs, salaries and expenses, and inspection and screening services. We identified limitations to the cost data provided by components, such as the inability to accurately estimate and report direct and indirect costs, and note these limitations in the body of our report. Our analysis of DHS data showed that DHS components had unobligated balances carried over from fiscal year 2014 to the beginning of fiscal year 2015 totaling $2.6 billion across 25 of the 38 programs receiving fees and other collections, as shown in table 6. DHS components identified 20 deficiencies across 18 programs receiving fees and other collections and recommended 48 actions across 23 programs in biennial fee reviews conducted in fiscal year 2012 or 2014. Our analysis showed that DHS components took action to address 11 of these 20 deficiencies through changes in agency regulation, proposed changes to legislation, or other actions, as shown in table 7. In addition to the contact named above, Lacinda "Cindy" Ayers, Assistant Director, and Michelle Woods, Analyst-in-Charge, managed this review. Wendy Dye and Jesse Tow made significant contributions to the work. Dominick Dale, Roshni Dave, Lorraine Ettaro, Eric Hauswirth, James Kernen, Thomas F. Lombardi, Susan Murphy, Laurel Plume, Amanda Postiglione, and Amelia Shachoy also contributed to this report. Department of Homeland Security: Progress Made, but Work Remains in Strengthening Acquisition and Other Management Functions. GAO-16-507T. Washington, D.C.: March 16, 2016. Federal User Fees: Key Considerations for Designing and Implementing Regulatory Fees. GAO-15-718. Washington, D.C.: September 16, 2015. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Budget Issues: Key Questions to Consider When Evaluating Balances in Federal Accounts. GAO-13-798. Washington, D.C.: September 30, 2013. Federal User Fees: Fee Design Options and Implications for Managing Revenue Instability.
GAO-13-820. Washington, D.C.: September 30, 2013. Agricultural Quarantine Inspection Fees: Major Changes Needed to Align Fee Revenues with Program Costs. GAO-13-268. Washington, D.C.: March 1, 2013. Budget Issues: Better Fee Design Would Improve Federal Protective Service's and Federal Agencies' Planning and Budgeting for Security. GAO-11-492. Washington, D.C.: May 20, 2011. Federal User Fees: Additional Analyses and Timely Reviews Could Improve Immigration and Naturalization User Fee Design and USCIS Operations. GAO-09-180. Washington, D.C.: January 23, 2009. Federal User Fees: A Design Guide. GAO-08-386SP. Washington, D.C.: May 29, 2008. Federal User Fees: Substantive Reviews Needed to Align Port-Related Fees With the Programs They Support. GAO-08-321. Washington, D.C.: February 22, 2008. Federal User Fees: Key Aspects of International Air Passenger Inspection Fees Should Be Addressed Regardless of Whether Fees Are Consolidated. GAO-07-1131. Washington, D.C.: September 24, 2007. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
The uncertain budgetary environment highlights the need for DHS to effectively manage and oversee billions of dollars in fees and other collections from users of homeland security program services. Each DHS component is responsible for administering, managing, and reviewing its respective programs to ensure that, consistent with law and policy, rates charged to users of program services are set to collect amounts sufficient to recover program costs and ensure efficient operations, but not in excess of operational needs. GAO was asked to review DHS's management and oversight of these programs. This report examines the extent to which (1) DHS components receive fees and other collections to recover program costs and manage any differences, as appropriate; (2) DHS components have processes in place to manage unobligated balances; and (3) DHS ensures components review their programs and monitors component action to address any management and operational deficiencies. GAO analyzed DHS financial information for 38 programs receiving fees and other collections in fiscal year 2014, examined DHS fee reviews and study results, and interviewed agency officials. The Department of Homeland Security (DHS) received $15 billion in fees and other collections across 38 programs in fiscal year 2014 that help fund homeland security functions, such as the screening and inspection of persons and goods entering the United States. GAO's analysis of DHS collections and cost data showed that 14 of the 38 programs receiving fees and other collections in fiscal year 2014 collected amounts that fully covered identified program costs. Of the remaining 24 programs, collections for 20 programs partially covered identified program costs, and DHS did not provide cost data, or GAO determined such data may not be reliable, for 4 programs. DHS components have taken action to address the estimated $6 billion difference between collections and identified program costs, with 6 programs comprising about 85 percent of the difference. However, components did not document processes for managing differences and making decisions on how to address the estimated $726 million difference across the 10 remaining programs. Documenting such processes and decisions could help improve transparency and accountability over cost recovery efforts. DHS components have processes in place to manage unobligated balances carried over across fiscal years for 25 programs, with such balances totaling $2.6 billion at fiscal year-end 2014. These processes generally focused on ensuring continuity of program operations rather than on efficiently using funds. For example, while components established targets for minimum balances for 21 of these 25 programs, none of the components established processes and related maximum targets to manage excessive unobligated carryover balances. Establishing such management processes and targets for minimum and maximum balances would enable components to show that management actions will be sufficient and appropriate to ensure the efficient use of funds in accounts such as the Immigration Examinations Fee Account, which had an approximately $983 million unobligated balance as of fiscal year-end 2014, and the User Fee Facility program account for small airports, which had an unobligated balance of $14 million that exceeded 100 percent of total operating costs each year from fiscal year 2010 through fiscal year 2014.
DHS does not ensure that all components review their programs or monitor component actions to address management and operational deficiencies identified in those reviews. GAO found that three of the seven DHS components that have fee or other collection programs did not conduct such reviews for 6 of their programs, and that components had not taken recommended actions to address 9 of 20 deficiencies identified through program reviews as of fiscal year-end 2014. Further, DHS did not report the extent to which components are conducting such reviews or any proposals to address identified management and operational deficiencies. DHS oversight to ensure that components complete these reviews and report the results for all programs would enable Congress and others to receive information necessary to better ensure that fee and other collection programs are operating effectively and efficiently. GAO recommends that DHS ensure components document processes for managing differences in collections and costs, establish balance targets, and conduct program reviews and address identified deficiencies. DHS concurred with the recommendations.
GAO is a key source of professional and objective information and analysis and, as such, plays a crucial role in supporting congressional decision making. For example, in fiscal year 2003, as in other years, the challenges that most urgently engaged the attention of the Congress helped define our priorities. Our work on issues such as the nation's ongoing battle against terrorism, Social Security and Medicare reform, the implementation of major education legislation, human capital transformations at selected federal agencies, and the security of key government information systems all helped congressional members and their staffs to develop new federal policies and programs and oversee ongoing ones. Moreover, the Congress and the executive agencies took a wide range of actions in fiscal year 2003 to improve government operations, reduce costs, or better target budget authority based on GAO's analyses and recommendations. In fiscal year 2003, GAO served the Congress and the American people by helping to:
identify steps to reduce improper payments and credit card fraud in government programs;
restructure government and improve its processes and systems to maximize homeland security;
prepare the financial markets to continue operations in the event of a terrorist attack;
update and strengthen government auditing standards;
improve the administration of Medicare as it undergoes reform;
encourage and help guide federal agency transformations;
contribute to congressional oversight of the federal income tax system;
identify human capital reforms needed at the Department of Defense, the Department of Homeland Security, and other federal agencies;
raise the visibility of long-term financial commitments and imbalances in the federal budget;
reduce security risks to information systems supporting the nation's critical infrastructures;
oversee programs to protect the health and safety of today's workers;
ensure the accountability of federal agencies through audits and evaluations; and
serve as a model for other federal agencies by modernizing our approaches to managing and compensating our people.
To ensure that we are well positioned to meet the Congress's future needs, we update our 6-year strategic plan every 2 years, consulting extensively during the update with our clients in the Congress and with other experts (see app. I for our strategic plan framework). The following table summarizes selected performance measures and targets for fiscal years 1999 through 2005. Highlights of our fiscal year 2003 accomplishments and their impact on the American public are shown in the following sections. Many of the benefits produced by our work can be quantified as dollar savings for the federal government (financial benefits), while others cannot (other benefits). Both types of benefits resulted from our efforts to provide information to the Congress that helped (1) improve services to the public, (2) bring about statutory or regulatory changes, and (3) improve core business processes and advance governmentwide management reforms. In fiscal year 2003, our work generated $35.4 billion in financial benefits—a $78 return on every dollar appropriated to GAO. The funds made available in response to our work may be used to reduce government expenditures or reallocated by the Congress to other priority areas. Nine accomplishments accounted for nearly $27.4 billion, or 77 percent, of our total financial benefits for fiscal year 2003. Six of these accomplishments totaled $25.1 billion.
Table 2 lists selected major financial benefits in fiscal year 2003 and describes the work contributing to financial benefits over $500 million. Many of the benefits that flow to the American people from our work cannot be measured in dollar terms. During fiscal year 2003, we recorded a total of 1,043 other benefits—up from 607 in fiscal year 1999. As shown in appendix II, we documented instances where information we provided to the Congress resulted in statutory or regulatory changes, where federal agencies improved services to the public, and where agencies improved core business processes or governmentwide reforms were advanced. These actions spanned the full spectrum of national issues, from securing information technology systems to improving the performance of state child welfare agencies. We helped improve services to the public in the following ways:
Strengthening the U.S. visa process as an antiterrorism tool. Our analysis of the U.S. visa-issuing process showed that the Department of State's visa operations were more focused on preventing illegal immigrants from obtaining nonimmigrant visas than on detecting potential terrorists. We recommended that State reassess its policies, consular staffing procedures, and training program. State has taken steps to adjust its policies and regulations concerning the screening of visa applicants and its staffing and training for consular officers.
Enhancing quality of care in nursing homes. In a series of reports and testimonies since 1998, we found that, too often, residents of nursing homes were being harmed and that programs to oversee nursing home quality of care at the Centers for Medicare and Medicaid Services were not fully effective in identifying and reducing such problems. In 2003, we found a decline in the proportion of nursing homes that harmed residents but made additional recommendations to further improve care.
Making key contributions to homeland security. Drawing on an extensive body of completed and ongoing work, we identified specific vulnerabilities and areas for improvement to protect aviation and surface transportation, chemical facilities, sea and land ports, financial markets, and radioactive sealed sources. In response to our recommendations, the Congress and cognizant agencies have undertaken specific steps to improve infrastructure security and improve the assessment of vulnerabilities.
Improving compliance with seafood safety regulations. We reported that when Food and Drug Administration (FDA) inspectors identified serious violations at seafood processing firms, it took FDA 73 days on average to issue warning letters, well above its 15-day target. Based on our recommendations, FDA now issues warning letters in about 20 days.
We helped to change laws in the following ways: We highlighted the National Smallpox Vaccination program volunteers' concerns about losing income if they sustained injuries from an inoculation. As a result, the Smallpox Emergency Personnel Protection Act of 2003 (Pub. L. No. 108-20) provides benefits and other compensation to covered individuals injured in this way. We performed analyses that culminated in the enactment of the Postal Civil Service Retirement System Funding Reform Act of 2003 (Pub. L. No. 108-18), which reduced the U.S. Postal Service's (USPS) pension costs by an average of $3 billion per year over the next 5 years. The Congress directed that the first 3 years of savings be used to reduce USPS's debt and hold postage rates steady until fiscal year 2006.
We also helped to promote sound agency and governmentwide management by

Encouraging and helping guide agency transformations. We highlighted federal entities whose missions and ways of doing business require modernized approaches, including the Postal Service and the Coast Guard. Among congressional actions taken to deal with modernization issues, the House Committee on Government Reform established a special panel on postal reform and oversight to work with the President's Commission on the Postal Service on recommendations for comprehensive postal reform. Our recommendations to the Coast Guard led to better reporting by the Coast Guard and laid the foundation for key revisions the agency intended to make to its strategic plan.

Helping to advance major information technology modernizations. Our work has helped to strengthen the management of the complex multibillion-dollar information technology modernization program at the Internal Revenue Service (IRS) to improve operations, promote better service, and reduce costs. For example, IRS implemented several of our recommendations to improve software acquisition, enterprise architecture definition and implementation, and risk management and to better balance the pace and scope of the program with IRS's capacity to effectively manage it.

Supporting controls over DOD's credit cards. In a series of reports and testimonies beginning in 2001, we highlighted pervasive weaknesses in DOD's overall credit card control environment, including the proliferation of credit cards and the lack of specific controls over its multibillion-dollar purchase and travel card programs. DOD has taken many actions to reduce its vulnerabilities in this area.

While our primary focus is on improving government operations at the federal level, sometimes our work has an impact at the state and local levels. To the extent feasible, in conducting our audits and evaluations, we cooperate with state and local officials. At times, our work results have local applications, and local officials take advantage of our efforts. We are conducting a pilot to determine the feasibility of measuring the impact of our work on state and local governments. The following are examples we have collected during our pilot where our work is relevant for state and local government operations:

Identity theft. Effective October 30, 1998, the Congress enacted the "Identity Theft and Assumption Deterrence Act of 1998" prohibiting the unlawful use of personal identifying information, such as names, Social Security numbers, and credit card numbers. GAO report GGD-98-100BR is mentioned prominently in the act's legislative history. Subsequently, a majority of states have enacted identity theft laws. Sponsors of some of these state enactments—Alaska, Florida, Illinois, Michigan, Pennsylvania, and Texas—mentioned the federal law and/or our report. For example, in 1999, Texas enacted SB 46, which is modeled after the federal law. Justice officials said that enactment of state identity theft laws has multijurisdictional benefits to all levels of law enforcement—federal, state, and local.

Pipeline safety. Our report GAO/RCED-00-128, Pipeline Safety: The Office of Pipeline Safety Is Changing How It Oversees the Pipeline Industry, found that the Department of Transportation's Office of Pipeline Safety was reducing its reliance on states to help oversee the safety of interstate pipelines. The report stated that allowing states to participate in this oversight could improve pipeline safety.
As a result, the Office of Pipeline Safety modified its Interstate Pipeline Oversight Program for 2001-2002 to allow greater opportunities for state participation.

Temporary Assistance for Needy Families Grant Program. We reported on key national and state labor market statistics and changes in the levels of cash assistance and employment activities in five selected states. We also highlighted the fact that the five states had faced severe fiscal challenges and had used reserve funds to augment their spending above the amount of their annual Temporary Assistance for Needy Families block grant from the federal government.

Issued to coincide with the start of each new Congress, our high-risk update lists government programs and functions in need of special attention or transformation to ensure that the federal government functions in the most economical, efficient, and effective manner possible. This is especially important in light of the nation's large and growing long-term fiscal imbalance. Our latest report, released in January 2003, spotlights more than 20 troubled areas across government. Many of these areas involve essential government services, such as Medicare, housing programs, and postal service operations that directly affect the lives and well-being of the American people. Our high-risk program, which we began in 1990, includes five high-risk areas added in 2003, among them implementing and transforming the new Department of Homeland Security, modernizing federal disability programs, managing federal real property, and protecting the Pension Benefit Guaranty Corporation's (PBGC) single-employer pension insurance program. In fiscal year 2003, we also removed the high-risk designation from two programs: the Social Security Administration's Supplemental Security Income program and the Asset Forfeiture programs administered by the U.S. Departments of Justice and the Treasury. In fiscal year 2003, we issued 208 reports and delivered 112 testimonies related to high-risk areas, and our related work resulted in financial benefits totaling almost $21 billion. Our sustained focus on high-risk problems also has helped the Congress enact a series of governmentwide reforms to strengthen financial management, improve information technology, and create a more results-oriented and accountable federal government. The President's Management Agenda for reforming the federal government mirrors many of the management challenges and program risks that we have reported on in our performance and accountability series and high-risk updates, including a governmentwide initiative to focus on strategic management of human capital. Following GAO's designation of federal real property as a high-risk issue, the Office of Management and Budget (OMB) has indicated its plans to add federal real property as a new program initiative under the President's Management Agenda. An executive order on federal real property was also recently issued that addresses many of GAO's concerns, including the need for greater emphasis on effective management of government property. We have an ongoing dialogue with OMB regarding the high-risk areas, and OMB is working with agency officials to address many of our high-risk areas. Some of these high-risk areas may require additional authorizing legislation as one element of addressing the problems. Our fiscal year 2003 high-risk list is shown in table 3. During fiscal year 2003, GAO executives testified at 189 congressional hearings—sometimes with very short notice—covering a wide range of complex issues.
Testimony is one of our most important forms of communication with the Congress; the number of hearings at which we testify reflects, in part, the importance and value of our expertise and experience in various program areas and our assistance with congressional decision making. The following figure highlights, by GAO's three external strategic goals for serving the Congress, examples of issues on which we testified during fiscal year 2003. While the vast majority of our products—97 percent—were completed on time for our congressional clients and customers in fiscal year 2003, we slightly missed our target of providing 98 percent of them on the promised day. We track the percentage of our products that are delivered on the day we agreed to with our clients because it is critical that our work be done on time for it to be used by policymakers. Though our 97 percent timeliness rate was a percentage point improvement over our fiscal year 2002 result, it was still a percentage point below our goal. As a result, we are taking steps to improve our performance in the future by encouraging matrix management practices among the teams supporting various strategic goals and identifying early those teams that need additional resources to ensure the timely delivery of their products to our clients. The results of our work were possible, in part, because of the changes we have made to maximize the value of GAO. With the Congress's support, we have demonstrated that becoming world class does not require substantial staffing increases, but rather maximizing the efficient and effective use of the resources available to us. Since I came to GAO, we have developed a strategic plan, realigned our organizational structure and resources, and increased our outreach and service to our congressional clients. We have developed and revised a set of congressional protocols, developed agency and international protocols, and better refined our strategic and annual planning and reporting processes. We have worked with you to make changes in areas where we were facing longer-term challenges when I came to GAO, such as the critical human capital, information technology, and physical security areas. We are grateful to the Congress for supporting our efforts through pending legislation that, if passed, would give us additional human capital flexibilities allowing us, among other things, to move to an even more performance-based compensation system and helping to better position GAO for the future. As part of our ongoing effort to ensure the quality of our work, this year a team of international auditors will perform a peer review of GAO's performance audit work issued in calendar year 2004. We continued our policy of proactive outreach to our congressional clients, the press, and the public to enhance the visibility of our products. On a daily basis, we compile and publish a list of our current reports. This feature has more than 18,000 subscribers, up 3,000 from last year. We also produced an update of our video on GAO, "Impact 2003." Our external Web site continues to grow in popularity, having increased the number of hits in fiscal year 2003 to an average of 3.4 million per month, 1 million more per month than in fiscal year 2002. In addition, visitors to the site are downloading an average of 1.1 million files per month. As a result, demand for printed copies of our reports has dramatically declined, allowing us to phase out our internal printing capability.
For the 17th consecutive year, GAO's financial statements have received an unqualified opinion from our independent auditors. Our fiscal year 2003 financial statements were prepared, and the audit completed, a month earlier than last year and a year ahead of the accelerated schedule mandated by OMB. For the second year in a row, the Association of Government Accountants awarded us a certificate of excellence; this year the award was for the fiscal year 2002 annual performance and accountability report. Given our role as a key provider of information and analyses to the Congress, maintaining the right mix of technical knowledge and expertise as well as general analytical skills is vital to achieving our mission. Because we spend about 80 percent of our resources on our people, we need excellent human capital management to meet the expectations of the Congress and the nation. Accordingly, in the past few years, we have expanded our college recruiting and hiring program and focused our overall hiring efforts on selected skill needs identified during our workforce planning effort and on meeting succession planning needs. For example, we identified and reached prospective graduates with the required skill sets and focused our intern program on attracting those students with the skill sets needed for our analyst positions. Our efforts in this area were recognized by Washingtonian magazine, which listed GAO as one of the "Great Places to Work" in its November 2003 issue. Continuing our efforts to promote the retention of staff with critical skills, we offered qualifying employees in their early years at GAO student loan repayments in exchange for their signed agreements to continue working at GAO for 3 years. We also have begun to better link compensation, performance, and results. In fiscal years 2002 and 2003, we implemented a new performance appraisal system for our analyst, attorney, and specialist staff that links performance to established competencies and results. We evaluated this system in fiscal year 2003 and identified and implemented several improvements, including conducting mandatory training for staff and managers on how to better understand and apply the performance standards and on determining appropriate compensation. We will implement a new competency-based appraisal system, pay banding, and a pay-for-performance system for our administrative professional and support services staff this fiscal year. To train our staff to meet the new competencies, we developed an outline for a new competency-based and role- and task-driven learning and development curriculum that identified needed core and elective courses and other learning resources. We also completed several key steps to improve the structure of our learning organization, including hiring a Chief Learning Officer and establishing a GAO Learning Board to guide our learning policy, to set specific learning priorities, and to oversee the implementation of a new training and development curriculum. We also drafted our first formal and comprehensive strategic plan for human capital to communicate both internally and externally our strategy for enhancing our standing as a model professional services organization, including how we plan to attract, retain, motivate, and reward a high-performing and top-quality workforce. We expect to publish the final plan this fiscal year. Our Employee Advisory Council is now a fully democratically elected body that advises GAO's senior executives on matters of interest to our staff.
We also established a Human Capital Partnership Board to gather opinions of a cross section of our employees about upcoming initiatives and ongoing programs. The 15-member board will assist our Human Capital Office in hearing and understanding the perspectives of its customers—our staff. In addition, we will continue efforts to be ready to implement the new human capital authorities included in legislation currently pending before the Senate. This legislation, if passed, would give us more flexibility to deal with mandatory pay and related costs during tight budgetary times. Our resourceful management of information technology was recognized when we were named one of the "CIO (Chief Information Officer) 100" by CIO Magazine, recognizing excellence in managing our information technology (IT) resources through "creativity combined with a commitment to wring the most value from every IT dollar." We were one of three federal agencies named, selected from over 400 applicants, largely representing private sector firms. In particular, we were cited for excellence in asset management, staffing and sourcing, and building partnerships, and for implementing a "best practice"—staffing new projects through internal "help wanted" ads. We have expanded and enhanced the IT Enterprise Architecture program we began in fiscal year 2002. We formally established an Enterprise Architecture oversight group and steering committee to prioritize our IT business needs, provide strategic direction, and ensure linkage between our IT Enterprise Architecture and our capital investment process. We implemented a number of user-friendly Web-based systems to improve our ability to obtain feedback from our congressional clients, facilitate access to our information for the external customer, and enhance productivity for the internal customer. Among the new and enhanced Web-based systems were an application to track and access General Counsel work by goal and team; a Web site on emerging trends and issues to provide information for our teams and offices as they consult with the Congress; and an automated tracking application for our staff to monitor the status of products to be published. In addition, we developed and released a system to automate an existing data collection and analysis process, greatly expanding our annual capacity to review DOD weapons systems programs. As a result, we were able to increase staff productivity and efficiency and enhance the information and services provided to the Congress. In the past, we were able to complete a review annually of eight DOD weapons systems programs. In fiscal year 2003, we reviewed 30 programs and reported on 26. Within the next year, that number will grow to 80 per year. We recognize the ongoing, ever-present threat to our shared IT systems and information assets and continue to promote awareness of this threat, maintain vigilance, and develop practices that protect information assets, systems, and services. As part of our continuing emergency preparedness plan, we upgraded the level of telecommunications services between our disaster recovery site and headquarters, expanded our remote connectivity capability, and improved our response time and transmission speed.
To further protect our data and resources, we drafted an update to our information systems security policy, issued network user policy statements, hardened our internal network security, expanded our intrusion detection capability, and addressed concerns raised during the most recent network vulnerability assessment. We plan to continue initiatives to ensure a secure environment, detect intruders in our systems, and recover in the event of a disaster. We are also continuing to make the investments necessary to enhance the safety and security of our staff, facilities, and other assets for the mutual benefit of GAO and the Congress. In addition, we plan to continue initiatives designed to further increase employees' productivity, facilitate knowledge sharing, and maximize the use of technology through tools available at the desktop and by reengineering the systems that support our business processes. On the basis of recommendations resulting from our physical security evaluation and threat assessment, we continue to implement initiatives to improve the security and safety of our building and personnel. In terms of physical plant improvements, we upgraded the headquarters fire alarm system and installed a parallel emergency notification system. We completed a study of personal protective equipment and, based on the resulting decision paper, have distributed escape hoods to GAO staff. We have also made a concerted effort to secure the perimeter and access to our building. Several security enhancements will be installed in fiscal year 2004, such as vehicle restraints at the garage ramps; ballistic-rated security guard booths; vehicle surveillance equipment at the garage entrances; and state-of-the-art electronic security comprising intrusion detection, access control, and closed-circuit surveillance systems. A team of international auditors, led by the Office of the Auditor General of Canada, will conduct a peer review of our performance audit work issued in calendar year 2004. This entails reviewing our policies and internal controls to assess the compliance of GAO's work with government auditing standards. The review team will provide GAO with management suggestions to improve our quality control systems and procedures. Peer reviews will be conducted every 3 years. GAO is requesting budget authority of $486 million for fiscal year 2005. The requested funding level will allow us to maintain our base authorized level of 3,269 full-time equivalent (FTE) staff to serve the Congress, maintain operational support at fiscal year 2004 levels, and continue efforts to enhance our business processes and systems. This fiscal year 2005 budget request represents a modest increase of 4.9 percent over our fiscal year 2004 projected operating level, primarily to fund mandatory pay and related costs and estimated inflationary increases. The requested increase reflects an offset of almost $5 million from nonrecurring fiscal year 2004 initiatives, including closure of our internal print plant, and $1 million in anticipated reimbursements from a planned audit of the Securities and Exchange Commission's (SEC) financial statements. Our requested fiscal year 2005 budget authority includes about $480 million in direct appropriations and authority to use $6 million in estimated revenue from reimbursable audit work and rental income.
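The components of the request, as stated, reconcile as follows; note that the fiscal year 2004 operating level is implied by the 4.9 percent figure rather than given directly, so the second line is our approximation:

\[
\$480\ \text{million direct appropriations} + \$6\ \text{million reimbursable authority} = \$486\ \text{million};
\]
\[
\text{implied fiscal year 2004 operating level} \approx \frac{\$486\ \text{million}}{1.049} \approx \$463\ \text{million}.
\]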
To achieve our strategic goals and objectives for serving the Congress, we must ensure that we have the appropriate human capital, fiscal, and other resources to carry out our responsibilities. Our fiscal year 2005 request would enable us to sustain needed investments to maximize the productivity of our workforce and to continue addressing key management challenges: human capital, information security, and physical security. We will continue to take steps to "lead by example" within the federal government in these and other critical management areas. If the Congress wishes for GAO to conduct technology assessments, we are also requesting $545,000 to obtain four additional FTEs and contract assistance and expertise to establish a baseline technology assessment capability. This funding level would allow us to conduct one assessment annually and avoid an adverse impact on other high-priority congressional work. We are grateful to the Congress for providing support and resources that have helped us in our quest to be a world class professional services organization. The funding we received in fiscal year 2004 is allowing us to conduct work that addresses many of the difficult issues confronting the nation. By providing professional, objective, and nonpartisan information and analyses, we help inform the Congress and executive branch agencies on key issues, covering programs that involve billions of dollars and touch millions of lives. I am proud of the outstanding contributions made by GAO employees as they work to serve the Congress and the American people. In keeping with my strong belief that the federal government needs to exercise fiscal discipline, our budget request for fiscal year 2005 is modest, but it would maintain our ability to provide first class, effective, and efficient support to the Congress and the nation to meet 21st century challenges in these critical times. This concludes my statement. I would be pleased to answer any questions the Members of the Subcommittee may have.

GAO Efforts That Helped to Change Laws and/or Regulations

Consolidated Appropriations Resolution, 2003, Public Law 108-7. The law includes GAO's recommended language that the administration's competitive sourcing targets be based on considered research and sound analysis.

Smallpox Emergency Personnel Protection Act of 2003, Public Law 108-20. GAO's report on the National Smallpox Vaccination program highlighted volunteers' concerns about losing income if they sustained injuries from an inoculation. This statute provides benefits and other compensation to covered individuals injured in this way.

Postal Civil Service Retirement System Funding Reform Act of 2003, Public Law 108-18. Analyses performed by GAO and OPM culminated in the enactment of this law that reduces USPS's pension costs by an average of $3 billion per year over the next 5 years. The Congress directed that the first 3 years of savings be used to reduce USPS's debt and hold postage rates steady until fiscal year 2006.

Accountability of Tax Dollars Act of 2002, Public Law 107-289. A GAO survey of selected non-CFO Act agencies demonstrated the significance of audited financial statements in that community. GAO provided legislative language that requires 70 additional executive branch agencies to prepare and submit audited annual financial statements.

Emergency Wartime Supplemental Appropriations Act, 2003, Public Law 108-11.
GAO assisted congressional staff with drafting a provision that made available up to $64 million to the Corporation for National and Community Service to liquidate previously incurred obligations, provided that the Corporation reports overobligations in accordance with the requirements of the Antideficiency Act.

Intelligence Authorization Act for Fiscal Year 2003, Public Law 107-306. GAO recommended that the Director of Central Intelligence report annually on foreign entities that may be using U.S. capital markets to finance the proliferation of weapons, including weapons of mass destruction, and this statute instituted a requirement to produce the report.

GAO Efforts That Helped to Improve Services to the Public

Strengthening the U.S. Visa Process as an Antiterrorism Tool. Our analysis of the U.S. visa-issuing process showed that the Department of State's visa operations were more focused on preventing illegal immigrants from obtaining nonimmigrant visas than on detecting potential terrorists. We recommended that State reassess its policies, consular staffing procedures, and training program. State has taken steps to adjust its policies and regulations concerning the screening of visa applicants and its staffing and training for consular officers.

Enhancing Quality of Care in Nursing Homes. In a series of reports and testimonies since 1998, we found that, too often, residents of nursing homes were being harmed and that programs to oversee nursing home quality of care at the Centers for Medicare and Medicaid Services were not fully effective in identifying and reducing such problems. In 2003, we found a decline in the proportion of nursing homes that harmed residents but made additional recommendations to further improve care.

Making Key Contributions to Homeland Security. Drawing upon an extensive body of completed and ongoing work, we identified specific vulnerabilities and areas for improvement to protect aviation and surface transportation, chemical facilities, sea and land ports, financial markets, and radioactive sealed sources. In response to our recommendations, the Congress and cognizant agencies have undertaken specific steps to improve infrastructure security and improve the assessment of vulnerabilities.

Improving Compliance with Seafood Safety Regulations. We reported that when Food and Drug Administration (FDA) inspectors identified serious violations at seafood processing firms, it took FDA 73 days on average to issue warning letters, well above its 15-day target. Based on our recommendations, FDA now issues warning letters in about 20 days.

Strengthening Labor's Management of the Special Minimum Wage Program. Our review of this program resulted in more accurate measurement of program participation and noncompliance by employers and prevented inappropriate payment of wages below the minimum wage to workers with disabilities.

Reducing National Security Risks Related to Sales of Excess DOD Property. We reported that DOD did not have systems and procedures in place to maintain visibility and control over 1.2 million chemical and biological protective suits and certain equipment that could be used to produce crude forms of anthrax. Unused suits (some of which were defective) and equipment were declared excess and sold over the Internet. DOD has taken steps to notify state and local responders who may have purchased defective suits.
Also, DOD has taken action to restrict chemical-biological suits to DOD use only—an action that should eliminate the national security risk associated with sales of these sensitive military items. Lastly, DOD has suspended sales of the equipment in question pending the results of a risk assessment.

GAO Efforts That Helped to Change Laws and/or Regulations

Protecting the Retirement Security of Workers. We alerted the Congress to potential dangers threatening the pensions of millions of American workers and retirees. The pension insurance program's ability to protect workers' benefits is increasingly being threatened by long-term, structural weaknesses in the private defined-benefit pension system. A comprehensive approach is needed to mitigate or eliminate the risks.

Improving Mutual Fund Disclosures. To improve investor awareness of mutual fund fees and to increase price competition among funds, we identified alternatives for regulators to increase the usefulness of fee information disclosed to investors. Early in fiscal year 2003, the Securities and Exchange Commission issued proposed rules to enhance mutual fund fee disclosures using one of our recommended alternatives.

GAO Efforts That Helped to Promote Sound Agency and Governmentwide Management

Encouraging and Helping Guide Agency Transformations. We highlighted federal entities whose missions and ways of doing business require modernized approaches, including the Postal Service and the Coast Guard. Among congressional actions taken to deal with modernization issues, the House Committee on Government Reform established a special panel on postal reform and oversight to work with the President's Commission on the Postal Service on recommendations for comprehensive postal reform. We also reported this year on the Coast Guard's ability to effectively carry out critical elements of its mission, including its homeland security responsibilities. We recommended that the Coast Guard develop a blueprint for targeting its resources to its various mission responsibilities and a better reporting mechanism for informing the Congress on its effectiveness. Our recommendations led to better reporting by the Coast Guard and laid the foundation for key revisions the agency intended to make to its strategic plan.

Helping DOD Recognize and Address Business Modernization Challenges. Several times we have reported and testified on the challenges DOD faces in trying to successfully modernize about 2,300 business systems, and we made a series of recommendations aimed at establishing the modernization management capabilities needed to be successful in transforming the department. DOD has implemented some key architecture management capabilities, such as assigning a chief architect and creating a program office, as well as issuing the first version of its business enterprise architecture in May 2003. In addition, DOD has revised its system acquisition guidance. By implementing our recommendations, DOD is increasing the likelihood that its systems investments will support effective and efficient business operations and provide for timely and reliable information for decision making.

Helping to Advance Major Information Technology Modernizations. Our work has helped to strengthen the management of the complex, multibillion-dollar information technology modernization program at the Internal Revenue Service (IRS) to improve operations, promote better service, and reduce costs.
For example, IRS implemented several of our recommendations to improve software acquisition, enterprise architecture definition and implementation, and risk management and to better balance the pace and scope of the program with its capacity to effectively manage it.

Improving Internal Controls and Accountability over Agency Purchases. Our work examining purchasing and property management practices at FAA identified several weaknesses in the specific controls and overall control environment that allowed millions of dollars of improper and wasteful purchases to occur. Such weaknesses also contributed to many instances of property items not being recorded in FAA's property management system, which allowed hundreds of lost or missing property items to go undetected. Acting on our findings, FAA established key positions to improve management oversight of certain purchasing and monitoring functions, revised its guidance to strengthen areas of weakness and to limit the allowability of certain expenditures, and recorded assets into its property management system that we identified as unrecorded.

Strengthening Government Auditing Standards. Our publication of the Government Auditing Standards in June 2003 provides a framework for audits of federal programs and monies. This comes at a time of urgent need for integrity in the auditing profession and for transparency and accountability in the management of scarce resources in the government sector. The new revision of the standards strengthens audit requirements for identifying fraud, illegal acts, and noncompliance, and gives clear guidance to auditors as they contribute to a government that is efficient, effective, and accountable to the people.

Supporting Controls over DOD's Credit Cards. In a series of reports and testimonies beginning in 2001, we highlighted pervasive weaknesses in DOD's overall credit card control environment, including the proliferation of credit cards and the lack of specific controls over its multibillion-dollar purchase and travel card programs. We identified numerous cases of fraud, waste, and abuse and made 174 recommendations to improve DOD's credit card operations. DOD has taken many actions to reduce its vulnerabilities in this area.
GAO exists to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. GAO's work covers virtually every area in which the federal government is or may become involved, anywhere in the world. Perhaps just as importantly, our work sometimes leads us to sound the alarm over problems looming just beyond the horizon—such as our nation's enormous long-term fiscal challenges—and help policymakers address these challenges in a timely and informed manner. This testimony focuses on GAO's (1) fiscal year 2003 performance and results; (2) efforts to maximize our effectiveness, responsiveness, and value; and (3) budget request for fiscal year 2005 to support the Congress and serve the American people. In summary, the funding GAO received in fiscal year 2003 allowed it to conduct work that addressed many of the difficult issues confronting the nation, including diverse and diffuse security threats, selected government transformation challenges, and the nation's long-term fiscal imbalance. Perhaps the foremost challenge facing government decision makers this year was ensuring the security of the American people. By providing professional, objective, and nonpartisan information and analyses, GAO helped inform the Congress and executive branch agencies on key security issues, such as the nature and scope of threats confronting the nation's nuclear weapons facilities, its information systems, and all areas of its transportation infrastructure, as well as the challenges involved in creating the Department of Homeland Security. Its work was also driven by changing demographic trends, which led it to focus on such areas as the quality of care in the nation's nursing homes and the risks to the government's single-employer pension insurance program. Its work in these and other areas covered programs that involve billions of dollars and touch millions of lives. Importantly, in fiscal year 2003, GAO generated a $78 return for each $1 appropriated to the agency. With the Congress's support, GAO has demonstrated that becoming world class does not require a substantial increase in the number of staff authorized, but rather maximizing the efficient and effective use of the resources available to it. GAO has worked with the Congress to obtain targeted funding for areas critical to GAO such as information technology, security, and human capital management. In keeping with the Comptroller General's belief that the federal government needs to exercise a greater degree of fiscal discipline, GAO has kept its request to $486 million, an increase of only 4.9 percent over fiscal year 2004. In keeping with the Congress's intent, GAO is continuing its efforts to revamp its budget presentation to make the linkages between funding and program areas more clear.
The federal Marketplace approved coverage for 11 of our 12 fictitious applicants who initially applied online or by telephone. We later received notices in 10 of these 11 cases that failure to submit documentation needed to verify eligibility could lead to loss of coverage or subsidies we received. For 1 of the 11 approvals, we were initially denied coverage but were successful when we subsequently reattempted the application. Applicants for coverage are required to attest that they have not intentionally provided false or untrue information. Applicants who provide false information are subject to penalties under federal law, including fines and imprisonment. For each of the approved applications, we were ultimately directed to submit supporting documentation to the Marketplace, such as proof of income, identity, or citizenship. For each of our 11 approved applications, we paid the required premiums to put policies into force, and we are continuing to pay the premiums. For the 11 applications that were approved for coverage, we obtained the advance premium tax credit in all cases. The total amount of these credits for the 11 approved applications is about $2,500 monthly, or about $30,000 annually. We also obtained cost-sharing reduction subsidies, according to Marketplace representatives, in at least 9 of the 11 cases. As noted, these advance premium tax credits and cost-sharing reductions are not paid directly to enrolled consumers; instead, the federal government pays them to issuers on consumers' behalf. To receive advance payment of the premium tax credit, applicants agree that they will file a tax return for the benefit year, and applicants receiving premium tax credits during the inconsistency period must indicate their understanding that premium tax credits are subject to reconciliation on their federal tax returns. For each of our 6 online applications that were among the total group of 12, we failed to clear an identity-checking step during the front end of the online application process and thus could not complete the process online. However, we subsequently were able to obtain coverage for all 6 of these applications begun online by completing them by phone. In 5 of these 6 cases, the online system directed us to contact a Marketplace contractor that handles identity checking. The contractor was unable to resolve the identity issues. According to a CMS public information website, if the contractor cannot resolve the issue, applicants may be asked to provide identity documents, by online upload or by mail. In such cases, according to CMS officials, applications are to be put on hold until identity proofing is completed. For this group of 5 applications, however, contractor representatives did not ask us to submit identity documents but instead directed us to call the Marketplace. We did, and after speaking with Marketplace representatives as instructed, we were able to successfully proceed with our applications by phone and obtain coverage for the 5 applications. In the sixth case, the online system directed us to call the Marketplace directly, without contacting the contractor. In that case, too, we proceeded to successfully complete the application by phone and obtained coverage. According to CMS officials and executives of the Marketplace's call center contractor, an identity discrepancy must be cleared and identity verified before an application can proceed to completion.
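As a rough check of the subsidy figures reported above for the 11 approved applications (our arithmetic, with the per-application average rounded):

\[
\$2{,}500\ \text{per month} \times 12 \approx \$30{,}000\ \text{per year}; \qquad \frac{\$2{,}500}{11\ \text{applications}} \approx \$227\ \text{per application per month}.
\]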
For our 6 phone applications, we successfully completed the application process, with the exception of one applicant who declined to provide a Social Security number and was not allowed to proceed. In the course of follow-up dealings with the Marketplace, call-center representatives in at least four cases could not locate our existing applications and, as a result, began new applications, according to our conversations with the representatives. According to CMS call-center and document-processing contractors, multiple electronic applications have been common. The Marketplace is required to seek postapproval documentation in the case of certain application "inconsistencies"—instances in which information an applicant has provided does not match information contained in data sources that the Marketplace uses for eligibility verification at time of application, or such information is not available. If there is an application inconsistency, the Marketplace is to provide eligibility while the inconsistency is being resolved using "back-end" controls. Under these controls, applicants will be asked to provide additional information or documentation for a Marketplace contractor to review in order to resolve the inconsistency. Among the 11 of our 12 undercover applications that successfully obtained coverage, the Marketplace initially directed that we submit supplementary documentation in 10 cases, with a request for supplementary documentation in the 11th case coming a few months after approval of coverage. Among the Marketplace communications were the following:

The Marketplace asked two of three applicants with inactive Social Security numbers to submit proof of citizenship, identity, and income, but it asked the third only for income information.

In four cases, the Marketplace asked for additional documentation a few months after initial document requests were made.

The Marketplace directed two applicants to log into online accounts for messages—but these applicants had no such online accounts.

The Marketplace sent unclear reminders to three applicants to file supplementary documentation, with a cover letter directing applicants to submit one type of document to resolve a particular inconsistency (for example, income), but then in an enclosure to be returned to the Marketplace requesting that another type of document be sent (for example, citizenship).

As part of our testing and in response to Marketplace requests, we provided counterfeit follow-up documentation, but varied what we submitted by application—providing all, none, or only some of the material requested—in order to note any differences in outcomes. Specifically, among the 10 applications for which we were directed to send documentation at the time of approval, we submitted all requested documentation for 3 of the 10 applications, partial documentation for 4 applications, and no documentation for the remaining 3 applications. In addition, in 2 cases in which we were directed to submit income information, we reported income substantially higher than the amount we initially stated on our applications, and at levels that should disqualify our applications from obtaining subsidies. CMS officials told us that a CMS contractor evaluates follow-up documentation on a rolling basis as it receives submissions. If the contractor deems the information submitted to be complete, a decision on eligibility is typically made within 1 to 2 days, according to the officials.
Otherwise, applicants may be directed to submit additional information as deemed necessary. In all cases, CMS officials told us, applicants are to be notified of the outcome of the review of their submitted documentation. For the seven applications for which we elected to submit full or partial follow-up documentation, approximately 3 months have elapsed since we submitted the requested information. As of July 17, 2014, we had received notifications indicating the Marketplace had reviewed portions of the counterfeit documentation sent for two applications. Specifically, the Marketplace notified both these applicants that their proof of citizenship/immigration status had been verified and no further action was necessary. One of them also had identity verified. We are awaiting notice on other documents filed for these two applicants. In the time since we filed documents requested at time of approval, we have received a number of follow-up communications from the Marketplace, which, as noted earlier, include requests for documentation not originally requested. In response, we have submitted a second round of documents, which responds to the requests but also maintains our testing methodology of submitting all, none, or some of the items requested. As of July 17, 2014, outcomes were still pending for these applications. Regardless of the status of any postapproval communications, our coverage remains in effect for all 11 approved applications. Overall, among all applications for the federal Marketplace, about 4.3 million application inconsistencies have been identified, representing about 3.5 million people, according to the CMS contractor handling receipt and evaluation of submitted materials. Of the total inconsistencies, about 2.6 million are for applicants who took the step of selecting health care plans after completing their applications. As of mid-July 2014, about 650,000 inconsistencies had been cleared. However, according to contractor executives, due to system limitations, processing of income and citizenship/immigration status inconsistencies—which together account for 75 percent of inconsistency volume—began only in May and June 2014. In some cases, according to the CMS contractor, documents cannot be matched to their respective applications and become "orphans." As of mid-July 2014, the contractor said, there had been about 227,000 such documents. According to the contractor executives, unmatched documents are retained and reconsidered every 21 days to see if new information is available that can enable a match to be made. As noted, applicants attest at the time of application that information they provide is not false or untrue. According to CMS officials, its document-processing contractor is not required under its contract to authenticate documentation or to conduct forensic analysis. Executives of the contractor concurred and told us the review standard the contractor uses is that it accepts documents as authentic unless there are obvious alterations. According to the executives, the contractor does not certify authenticity, does not engage in fraud detection, and does not undertake investigative activities.
Specifically, in the contractor's standard operating procedures for its work for CMS, document review workers are directed under "general verification guidance" to "determine if the document image is legible and appears unaltered by visually inspecting it." Further, the contractor is not equipped to attempt to identify fraud, the contractor executives told us, and the contractor does not have the means to judge whether documents submitted might be fraudulent. The standard of accepting authenticity unless there is obvious alteration originated from CMS, the executives said. According to the contractor executives, when consumers send copies of documents, as directed, rather than originals, there inevitably is a loss of image quality such that the contractor could not closely examine whether a document is authentic. Costs would increase by several times to thoroughly analyze document authenticity, the CMS contractor executives told us. Even if such an effort were attempted, they said, it would be difficult to say if anti-fraud measures would be effective, because that is not the company's business. The contractor also does not currently make use of outside data sources in its document review; instead, it inspects what documents are received. Overall, the contractor executives told us, the contractor is not aware of any fraudulent applications and, based on its practices, is also not in a position to know whether fraud is being attempted. CMS officials similarly told us they did not know the extent of any attempts at application or enrollment fraud, but said that to date, there is no evidence of applicants defrauding the federal Marketplace. In following through on our applications, we also identified a potential challenge for consumers seeking information about the review of documentation they submitted. In communications we received from the Marketplace about our document submissions, we were directed to call the Marketplace with questions. When we called to inquire about the status of our document filings, representatives could not answer our questions. They told us they were not able to confirm receipt of requested documentation and were not able to provide information on whether requested documentation had been reviewed. The CMS contractors handling consumer calls and document verification each confirmed to us that the call centers cannot access document-submission information. Hence, it is currently not possible for a call-center representative, fielding an inquiry such as ours, to obtain document status information in order to provide that information to the consumer. Overall, CMS officials told us that they have internal controls for the eligibility-determination process and that experience has not shown the need for any changes in that process. They said that thus far, the focus has been on stabilizing processes being implemented for the first time. Our work continues on the postapproval verification process. In particular, we are tracking whether we receive any additional adjudication notices from the CMS verification contractor, or whether the contractor identifies supporting documentation we submitted as fictitious or inconsistent with information submitted at time of application. We will continue to assess CMS's management of the application and approval process through our ongoing work and consider any recommendations needed to address these issues. We attempted six in-person applications, in order to test income-verification controls only.
Specifically, we sought to determine the extent to which, if any, in-person assisters would encourage our applicants to misstate income in order to qualify for either of the income-based PPACA subsidies. According to CMS, in-person assistance is to be available for those seeking help in filing applications. For the in-person applications, we randomly chose three Navigators and three non-Navigators in the target areas of our selected states. Because our sole interest was any potential advice on reporting income, we did not seek or obtain policies, as we did with our phone and online applications. During our testing, we visited one in-person assister and obtained information on whether our stated income would qualify for subsidy. In that case, a Navigator correctly told us that our income would not qualify for subsidy. However, for the remaining five in-person applications, we were unable to obtain such assistance. We encountered a variety of situations that prevented us from testing our planned scenarios, including the following:

One of the three Navigators required that we make an appointment in advance by phone. When we were unable to reach the Navigator by phone, we made an in-person visit. The Navigator declined to provide assistance or to schedule an appointment, saying instead that we would need to phone to schedule an appointment and return.

One of the three non-Navigators initially said it provides assistance only after people already have an application in progress. The non-Navigator did offer to assist us with an application, but the HealthCare.gov website was down. He directed us to call later for assistance. After we did so, this non-Navigator did not respond to three follow-up phone calls.

Another of the three non-Navigators, a health care services company, told us it only handles applications from those having a medical bill at its medical facility.

The third non-Navigator did not provide assistance, telling us it handles only applications for Medicaid.

In two of the five instances in which we were unable to obtain assistance at our originally selected locations, we proceeded to seek assistance at other randomly selected locations in our target areas. In these follow-up attempts, we again encountered difficulty in obtaining assistance for our applicants, including the following:

For one test, we visited two additional locations beyond the initial location before finding an in-person assister at a third who correctly told us our income was insufficient to qualify for subsidy. At the first two locations, we were told, among other things, that appointments were necessary.

For another test, which occurred late in the open-enrollment period, non-Navigator representatives declined to provide help, telling us they were uncomfortable doing so and planned to take a seminar on enrollment.

We further pursued, by phone calls to the Marketplace, the applications for which we could not get explicit in-person guidance on income and qualification for subsidy. In these calls, we were correctly advised that our income was outside the range eligible for income-based subsidy. Figure 1 summarizes our process and results for each of the groups of applicants—the 12 phone and online applications and the six in-person attempts. The federal government, in administering the two income-based subsidies, makes payments to issuers of health insurance on behalf of eligible consumers who have enrolled in a qualified health plan.
According to CMS officials, individuals are considered to be enrolled in a plan after they pay the initial premium. Thus, a key factor in analyzing enrollment in Marketplace coverage—and the federal expenditures and subsidies that follow—is the ability to identify which applicants approved for coverage have subsequently paid premiums and put policies in force. According to HHS, more than 8 million people selected a plan for coverage during the initial open-enrollment period that ended in April. CMS officials, however, told us they are thus far unable to identify individuals who have made premium payments. Issuers have reported this information to CMS, but the agency has not yet created a system to process the information, according to CMS officials. In May 2014, CMS officials told us that work is underway to implement such a system. However, CMS does not have a timeline for completing and deploying this work. As a result, under current operations, CMS must rely on health insurance issuers to self-report enrollment data reflecting individuals for whom CMS owes the issuers the income-based subsidies arising from coverage obtained through the Marketplace. We plan to continue examining this issue, among others, as part of our ongoing work, and to consider any recommendations needed to address it. Chairman Boustany, Ranking Member Lewis, and Members of the subcommittee, this concludes my statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact Seto Bagdoyan at (202) 512-6722 or BagdoyanS@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include: Wayne A. McElrath, Director; Matthew Valenta, Gary Bianchi, and Kristi Peterson, Assistant Directors; Carrie Davidson; Paul Desaulniers; Sandra George; Robert Graves; Barbara Lewis; Maria McMullen; George Ogilvie; Shelley Rao; Ramon Rodriguez; Christopher H. Schmitt; Julie Spetz; Cherié Starck; Helina Wong; Elizabeth Wood; and Michael Zose.

This appendix provides background on certain requirements related to the submission of applications and eligibility-verification procedures to enroll in qualified health plans and qualify for income-based subsidies under the Patient Protection and Affordable Care Act (hereafter PPACA). To be eligible to enroll in a qualified health plan offered through a marketplace established under PPACA, an individual must be a U.S. citizen or national, or otherwise be lawfully present in the United States; reside in the marketplace service area; and not be incarcerated (unless pending disposition of charges). In addition, low- and moderate-income individuals and families may be eligible for income-based subsidies authorized by PPACA to make coverage more affordable: (1) a refundable tax credit, generally paid on an advance basis, to reduce premium costs for marketplace coverage (referred to as premium tax credits) and (2) reductions in cost-sharing associated with such coverage (known as cost-sharing reductions) for items such as copayments for physician visits or prescription drugs. To qualify for either subsidy, an individual must meet applicable income requirements and must not be eligible for coverage under another qualifying plan or program, such as affordable employer-sponsored coverage, Medicaid, or the State Children's Health Insurance Program.
Subsidy payments are made to the issuer of the qualified health plan to offset the cost of the plan to the individual. PPACA, § 1312(f)(1), (3), 124 Stat. at 183-184; 45 C.F.R. § 155.305(a). Applicants for coverage are to attest that they have not intentionally provided false or untrue information. Applicants who provide false information are subject to penalties under federal law, including fines and imprisonment. Marketplaces are required by law to take several steps to verify application information to assess eligibility for enrollment in a qualified health plan and, if applicable, to qualify for an income-based subsidy. These verification steps include

- validating an applicant's Social Security number, if one is provided;

- verifying an applicant's citizenship, status as a national, or lawful presence with the Social Security Administration (SSA) and/or the Department of Homeland Security;

- verifying household income and family size against the most recent tax-return data from the Internal Revenue Service (IRS), as well as data on Social Security benefits from the SSA; and

- verifying whether the applicant is eligible for health coverage under another qualifying plan or program that would preclude eligibility for subsidy purposes.

Where the marketplace identifies certain inconsistencies in an application that it cannot resolve through reasonable effort, the marketplace must undertake an "inconsistency process," under which the applicant is given 90 days to present satisfactory evidence to resolve the identified inconsistencies. For example, the inconsistency process applies when the marketplace is unable to validate an individual's Social Security number or attestation regarding citizenship or immigration status. It also applies when the marketplace is unable to verify eligibility for income-based subsidies, including, for example, if an applicant indicates a change in circumstances, such as substantial changes in income compared with the most recent tax return available, or if IRS does not have recent tax-return data. During the inconsistency period, the marketplace must allow the applicant to enroll in a qualified health plan and, if applicable, authorize the advance payment of any premium tax credit or cost-sharing reduction to the applicant's issuer on the basis of the applicant's attestations. PPACA authorizes the Department of Health and Human Services to extend the 90-day period for enrollments occurring during 2014. PPACA, § 1411(e)(4)(A)(ii), 124 Stat. at 228. CMS regulations also generally permit the marketplaces to extend the 90-day period if the applicant has made a good faith effort to obtain documentation required to resolve the inconsistency. 45 C.F.R. § 155.315(f)(3). During the inconsistency period, an applicant also must attest to understanding that any advance payments of premium tax credits received during this period are subject to reconciliation. Marketplaces are required to permit applicants to receive less than the full amount of advance payments of the premium tax credits in order to minimize the possibility of having to repay such credits if their actual income for the benefit year is higher.
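To make the advance-payment and reconciliation mechanics concrete, consider a simplified hypothetical; the dollar amounts are invented for illustration, and statutory repayment caps are ignored:

```latex
\begin{aligned}
&\text{Estimated annual premium tax credit: } C = \$3{,}000.\\
&\text{Advance election of } 80\%: \text{ advance payments} = 0.8C = \$2{,}400.\\
&\text{If actual income supports only } C' = \$2{,}000:\\
&\quad \text{repayment at reconciliation} = \$2{,}400 - \$2{,}000 = \$400,\\
&\quad \text{versus } \$3{,}000 - \$2{,}000 = \$1{,}000 \text{ with a full advance.}
\end{aligned}
```

This is why marketplaces must permit applicants to take less than the full advance amount: the smaller the advance, the smaller any repayment if actual income for the benefit year comes in higher than estimated.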
PPACA provides for the establishment of health insurance exchanges, or marketplaces, where consumers can compare and select private health insurance plans. The act also expands the availability of subsidized health care coverage. The Congressional Budget Office estimates the net federal cost of coverage provisions at $36 billion for fiscal year 2014, with subsidies and related spending accounting for a large portion. PPACA requires marketplaces to verify application information to determine enrollment eligibility and, if applicable, eligibility for subsidies.

GAO was asked to examine issues related to controls for application and enrollment for coverage through the federal marketplace. This testimony discusses preliminary observations on (1) the results of undercover testing in which GAO obtained health care coverage; (2) additional undercover testing, in which GAO sought to obtain consumer assistance with its applications; and (3) delays in the development of a system needed to analyze enrollment. This statement is based on preliminary analysis from GAO's ongoing review for this subcommittee and other congressional requesters. GAO created fictitious identities to make applications through the federally facilitated exchange in several states by telephone, online, and in person. The number and locations of the target areas are not disclosed because of ongoing testing. The results, while illustrative, cannot be generalized to the overall applicant or enrollment populations. GAO expects to issue a final report next year.

Centers for Medicare & Medicaid Services (CMS) officials told GAO they have internal controls for health care coverage eligibility determinations. GAO's undercover testing addressed processes for identity and income verification, with preliminary results raising questions, as follows: For 12 applicant scenarios, GAO tested "front-end" controls for verifying an applicant's identity or citizenship/immigration status. Marketplace applications require attestations that information provided is neither false nor untrue. In its applications, GAO also stated income at a level that would qualify for income-based subsidies to offset premium costs and reduce cost sharing. For 11 of these 12 applications, which were made by phone and online using fictitious identities, GAO obtained subsidized coverage. For one application, the marketplace denied coverage because GAO's fictitious applicant did not provide a Social Security number as part of the test. The Patient Protection and Affordable Care Act (PPACA) requires the marketplace to allow applicants to enroll while identified inconsistencies between information provided by the applicant and by government sources are being resolved through submission of supplementary documentation from the applicant. For its 11 approved applications, GAO was directed to submit supporting documents, such as proof of income or citizenship, but GAO found the document submission and review process to be inconsistent among these applications. As of July 2014, GAO had received notification that portions of the fake documentation sent for two enrollees had been verified. According to CMS, its document-processing contractor is not required to authenticate documentation; the contractor told GAO it does not seek to detect fraud and accepts documents as authentic unless there are obvious alterations. As of July 2014, GAO continued to receive subsidized coverage for the 11 applications, including 3 applications for which GAO did not provide any requested supporting documents.
For 6 applicant scenarios, GAO sought to test the extent, if any, to which in-person assisters would encourage applicants to misstate income in order to qualify for income-based subsidies. However, GAO was unable to obtain in-person assistance in 5 of the 6 initial undercover attempts. For example, one in-person assister initially said that he provides assistance only after people already have an application in progress. He was unable to assist GAO because the HealthCare.gov website was down, and he subsequently did not respond to follow-up phone calls. One in-person assister correctly advised the GAO undercover investigator that the stated income would not qualify for subsidy.

A key factor in analyzing enrollment is the ability to identify approved applicants who put their policies in force by paying premiums. However, CMS officials stated that they do not yet have the electronic capability to identify such enrollees. As a result, CMS must rely on health insurance issuers to self-report the enrollment data used to determine how much CMS owes the issuers for the income-based subsidies. Work is underway to implement such a system, according to CMS, but the agency does not have a timeline for completing and deploying it. GAO is continuing to look at these issues and will consider recommendations to address them.
The JSF program goals are to develop and field an affordable, highly common family of stealthy, next-generation strike fighter aircraft for the Navy, Air Force, Marine Corps, and U.S. allies. The JSF family consists of three variants. The conventional takeoff and landing (CTOL) variant will primarily be an air-to-ground replacement for the Air Force's F-16 Falcon and the A-10 Warthog aircraft, and will complement the F-22A Raptor. The short takeoff and vertical landing (STOVL) variant will be a multirole strike fighter to replace the Marine Corps' F/A-18C/D and AV-8B Harrier aircraft. The carrier-suitable (CV) variant will provide the Navy a multirole, stealthy strike aircraft to complement the F/A-18E/F Super Hornet. DOD is planning to buy a total of 2,458 JSFs. The F-35 JSF was christened Lightning II in July 2006.

Because of the program's sheer size and the numbers of aircraft it will replace, the JSF is the linchpin of DOD's long-term plan to modernize tactical air forces. It is DOD's largest acquisition program, with total cost currently estimated at $300 billion; the longest in planned duration, with procurement projected through 2034; and the largest cooperative international development program. Our international partners are providing about $4.8 billion toward development, and foreign firms are part of the industrial base producing aircraft. The partners expect to procure a minimum of 646 CTOL and STOVL JSFs. DOD's funding requirements for the JSF assume the reduced unit costs that these purchases would bring. Figure 1 shows the JSF's current procurement profile for U.S. and international partners. Partner purchases begin in 2009 and reach a maximum of 95 per year in fiscal year 2016. Total expected procurement in that peak year, including U.S. quantities, is 225 aircraft.

The JSF is a single-seat, single-engine aircraft, designed to rapidly transition between air-to-ground and air-to-air missions while still airborne. To achieve its mission, the JSF will incorporate low-observable technologies, defensive avionics, advanced onboard and offboard sensor fusion, internal and external weapons, and advanced prognostic maintenance capability. According to DOD, these technologies represent a quantum leap over legacy tactical aircraft capabilities. In several ways, JSF development is also more complex and challenging than the F-22A Raptor and F/A-18E/F Super Hornet programs, the other two contemporary aircraft that DOD is acquiring, along with the JSF, to recapitalize its tactical air forces. The JSF program is simultaneously developing several airframes and engines for multiple customers and is projected to have significantly more lines of operational flight plan software code than the other aircraft.

The JSF program began in November 1996 with a 5-year competition between Lockheed Martin and Boeing to determine the most capable and affordable preliminary aircraft design. Lockheed Martin won the competition. The program entered system development and demonstration in October 2001. At that time, officials planned on a 10½-year development period costing about $34 billion (an amount that includes costs of about $4 billion incurred before the start of system development). By 2003, system integration efforts and a preliminary design review revealed significant airframe weight problems that affected the aircraft's ability to meet key performance requirements. Weight reduction efforts were ultimately successful but added substantially to program cost and schedule estimates.
In March 2004, DOD rebaselined the program (the 2004 Replan), extending development by 18 months and adding $7.5 billion to development costs. Program officials also delayed the critical design reviews, first flights of development aircraft, and the low-rate initial production decision to allow more time to mitigate risks and mature designs.

The total program acquisition cost estimate by the JSF program office has increased since our report last year, primarily due to higher projected procurement unit prices. The reported schedule for major events showed mostly minor slips. Engineering analyses continue to show that performance requirements are being met, but flight and ground tests planned through 2013 will be necessary to confirm these assessments. DOD and the contractor reported progress in several areas, including international partner agreements, first flights of a JSF prototype and test bed, and a more realistic procurement schedule.

JSF costs increased since last year. Table 1 shows the evolution of cost, quantity, and delivery estimates from the initiation of system development, through the 2004 Replan, to the latest data available. It demonstrates the impacts of higher procurement costs on unit costs, and of schedule delays on the delivery of promised capabilities to the warfighters. The current estimate for procurement costs, dated December 2006, shows an increase of $23.4 billion (plus 10 percent) from the estimate of a year earlier and a total of $55.3 billion more (plus 28 percent) since 2004. Procurement cost increases were primarily due to (1) extending the procurement period seven years at lower annual rates, (2) increased future price estimates based on contractor proposals for the first production lot, (3) airframe material cost increases, and (4) increases resulting from design maturation. Offsetting a portion of the procurement cost increases were lower estimates for labor rates and subcontractor costs.

The official development cost estimate has remained relatively constant since the 2004 Replan. However, there were significant changes in scope and planned use of funds in order to maintain that estimate: officials reduced requirements, did not include full funding for the alternate engine program despite congressional interest in it, and spent management reserves much faster than budgeted. Management reserves are a pool of money set aside—in this case, about 10 percent of the remaining development contract value—to handle unanticipated changes and other risks encountered as a development program proceeds. Weight growth early in development, followed by late aircraft design changes and the manufacturing inefficiencies that resulted, depleted reserve funds to an untenable level by 2007. The program faced a probable contract overrun. DOD officials opted not to request additional funding and time to complete development and instead adopted a controversial plan that reduced budgeted funds for development test aircraft and test flights in order to replenish management reserves from $400 million to about $1 billion, an amount deemed prudent to complete the development phase on time. This plan, known as the Mid-Course Risk Reduction plan, is discussed in more detail later in this report.

Reported schedule slips for key events since last year's report were minor for the most part, but schedules could worsen considerably if the delays in maturing the aircraft and engine designs and in manufacturing test aircraft continue to push work effort into later years.
This would further compress the time available to complete development and test efforts, affecting the scheduled start of initial operational test and evaluation and the full-rate production decision, and increasing the risk of further delivery delays. The CV's critical design review, the last of three design reviews for the program, occurred in June 2007, seven months later than had been expected. The initial operational capability date for this variant was pushed out two years to March 2015, to provide more time to mature the design and test this variant in the demanding carrier environment. The carrier variant is the least developed of the three, incorporates larger wings, is heavier, and has different speed and range performance requirements than the other two variants.

On the basis of engineering analyses and computer modeling, the JSF program projects that the aircraft design will meet seven of the eight key performance parameters by the end of development. The aircraft is currently not meeting the interoperability parameter, but this depends on capabilities being developed outside the JSF program. Key performance parameters will be verified during ground and flight testing from 2010 to 2013.

DOD and the contractor made solid progress this year in several areas that could establish a foundation for future successes. With almost 90 percent (in terms of dollars) of the acquisition program still ahead, these and other improvements could be leveraged to help better meet cost, schedule, and performance goals:

- In February 2007, the United States and eight international partners signed the Production, Sustainment, and Follow-on Development Memorandum of Understanding, committing to purchase aircraft and continuing joint development activities.

- DOD reduced near-term procurement quantities and the rate of ramp-up to full-rate production. These actions somewhat lessened the concurrency of development and production we have previously cited and make for a more achievable schedule.

- The prime contractor and major subcontractors continued to implement advanced design and development techniques and to use extensive computer modeling and simulation in innovative ways for design, test, and integration activities.

- DOD and contractor officials also made good progress toward refining system capabilities, including establishing mission software requirements, with the goal of improving future program executability while still meeting warfighter requirements.

- First flights of the prototype test aircraft and a flying test bed occurred in fiscal year 2007. Both are viewed as important risk reducers in the test program, and initial flights provided valuable and useful information, according to program and contractor officials.

- All test aircraft were in manufacturing during 2007. Low-rate initial production of the first two production aircraft and advance buys for the second production lot also got under way.

Late in 2007, DOD officials approved a risky and controversial plan that replenishes management reserves by reducing development test aircraft and test flights in order to stay within current cost and schedule estimates. Difficulties in stabilizing aircraft designs and inefficient production of test aircraft resulted in spending management reserves faster than anticipated.
The flight test program has barely begun, but it faces substantial risks with reduced assets as delays in design and manufacturing continue to further compress the time available to complete development work prior to operational testing and to support the full-rate production decision. The JSF program is halfway to its planned completion, but is behind schedule and over cost. On the basis of evidence we have gathered, development costs can be expected to increase substantially from the current reported program estimate, and the time needed to complete development testing and subsequent initial operational testing will likely need to be extended, delaying the full-rate production decision now planned for October 2013.

The Office of the Secretary of Defense (OSD) approved the Mid-Course Risk Reduction plan in September 2007. The plan reduces development test aircraft and test flights, and accelerates the reduction of the contractor's development workforce, in order to restore management reserves to the level considered prudent to complete the development contract as planned and within the current cost estimate. The test community and others within DOD believe the plan puts the development flight program at considerable risk and trades known cost risk today for unknown cost and schedule risk in the future.

Management reserves are budgeted funds set aside for unanticipated development challenges; they increase a program's capacity to deal with unknowns. At development start, JSF budgeted reserves at 10 percent of contract value and expected to draw on them at about the same rate as contract execution. However, the program has had to use these funds much faster than expected to pay for persistent development cost increases and schedule delays. A combination of factors contributed to this problem, such as late release of engineering drawings, production taking longer than planned, and late delivery of parts from suppliers. These factors, in turn, contributed to continuing cost and schedule impacts in the manufacture of development test aircraft, including extensive and inefficient out-of-station work and delays in proving out the production schedule. Figure 2 shows how management reserves totaling almost $1.4 billion have been depleted since the 2004 Replan.

By mid-2007, the development program had completed one-half of the work scheduled but had expended two-thirds of the budget. Management reserves had shrunk to about $400 million, less than one-half the amount officials believed necessary to complete the final 6 years of development. At the same time, the program faced significant manufacturing and software integration challenges, costly flight testing, and $950 million in other known cost risks. This presented the program with a likely untenable contract overrun sometime in 2008 if no action was taken. JSF program management identified a persistent cost variance of $250 million to $300 million in the aircraft development contract and an associated shortfall in reserves that required near-term action beyond "belt tightening." An overarching integrated product team considered several alternative actions, including doing nothing and adding funds from procurement, but the team chair concluded that replenishment of the management reserve was essential to position the JSF program to successfully address its anticipated future development challenges.
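A rough earned-value-style calculation shows why the mid-2007 figures alarmed officials. Treating the reported proportions (half the scheduled work done, two-thirds of the budget spent) as exact, a simplified sketch:

```latex
\text{implied cost efficiency} \approx
\frac{\text{share of work completed}}{\text{share of budget expended}}
= \frac{1/2}{2/3} = 0.75
```

That is, each dollar spent was buying roughly 75 cents of planned work; performing the remaining half of the work at the same efficiency would bring the total to about four-thirds of the development budget, a 33 percent overrun. This is a simplification of formal earned value analysis, which uses budgeted and actual cost of work performed rather than gross proportions.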
The option ultimately chosen, dubbed the Mid-Course Risk Reduction plan, removed two development aircraft (one CTOL and one CV), eliminated approximately 850 test flights from the current test plan, revised the verification strategy, increased the use of ground test labs and the flying test bed, and maximized the number of test points to be accomplished during test flights. The plan also accelerated reductions in contractor staff and took other actions. In total, these planned actions are expected to add between $470 million and $650 million to the reserve, recapitalizing it to about $1 billion, an amount officials believe will be needed to complete development. Officials intend to use the reserves to recover cost and schedule losses in manufacturing and to cover additional future needs.

This plan was subsequently approved by OSD, although serious risks were acknowledged and the team was divided on whether the added risks outweighed the intended benefit. Those in favor of the plan believed that actions were urgently needed to fix the funding imbalance and avoid a contract overrun. In this view, the plan would serve as a stopgap measure to delay another program restructure until more program knowledge and a clearer understanding of future cost requirements were gained. Officials from several defense offices thought the risks to testing were too great and that the plan did not address the underlying design and manufacturing problems. The Director, Operational Test and Evaluation, identified specific risks associated with the revised test verification strategy and recommended against deleting the aircraft, citing inadequate capacity to handle the pace of mission testing and to support ship suitability testing, signature testing, and suitability evaluations. This increased the likelihood of not finding and resolving critical design deficiencies until operational testing, when it is more costly and disruptive to do so. OSD's Systems and Software Engineering office concurred, expressed concerns that the plan did not treat the root causes of ongoing production problems, and doubted that the contractor schedule was achievable. The Cost Analysis Improvement Group and others agreed that there was too much risk in reducing test assets at this time, since no production-representative variant had started flight tests and no analysis of the management reserve depletion had been completed. In summary, the plan trades known cost risk today for unknown cost and schedule risk in the future.

According to our analysis of available evidence, manufacturing of test aircraft continues to run behind schedule. The prime contractor has revised the test aircraft manufacturing schedule three times, resulting in slips of up to 16 months in first flight dates of test aircraft. To date, about 3 months of progress has been made for every 4 months of effort. As officials for now have decided not to extend the development period and delay operational tests and full-rate production, this inefficiency increases risk and further compresses the time and assets available to complete test activities. Repercussions from the late release of engineering drawings, design changes, and parts shortages continue to cause delays and force inefficient production line workarounds in which unfinished work is completed out of station. Production data provided by the Defense Contract Management Agency (DCMA) show continuing critical part shortages, high change traffic, out-of-station work, quality issues, and planning rework.
These conditions have also delayed efforts to mature and demonstrate the production process even as work begins on the first production lot. The contractor has not yet proven it can efficiently build the JSF, and test aircraft are being built differently from the process expected for the production aircraft. The first test aircraft, a non-production-representative conventional takeoff and landing prototype completed in 2006, required 65,000 more labor hours (about 35 percent more) to build than planned. It encountered most of its inefficiencies in the wing and final assembly phases. The second test aircraft, a STOVL model, left the production line in December 2007, and its first flight is expected in May 2008, 8 months later than originally scheduled. It cost about 25 to 30 percent more to build than planned. Contractor data show that the wings were only three-fifths complete when moved to final assembly. As a result, over 25,000 additional labor hours had to be performed out of station to complete the wing assembly for this aircraft.

Table 2 shows work performance on the first seven test aircraft to enter manufacturing. (This does not include the original prototype completed in December 2006.) These data show that nearly all aircraft are persistently behind schedule in completing work on three critical components (the forward fuselage, the wing, and final assembly) at the Fort Worth, Texas, facility. In terms of cost, the data show overall good performance in constructing the forward fuselage, but poor results for the wing and final assembly. Because of production inefficiencies and delays, the contractor has had to lengthen the manufacturing schedule three times to provide more time to complete work. Production line problems have resulted in slips of between 11 and 16 months in first flight dates for each variant. At the time of our review, a fourth schedule was being prepared that would add another 1 to 4 months. Officials are reporting some improvements in parts shortages, assembly, and product quality, but expect the cascading effects of the design delays and manufacturing inefficiencies to linger for another couple of years.

The flight test program has just begun, with only about 25 flights completed as of January 2008. The program had originally planned to conduct development flight tests using 15 aircraft. The recent decision to reduce test aircraft to 13 (including the prototype), cut back the number of flights, and change how some capabilities are tested will stress resources, compress the time available to complete testing, and increase the number of development test efforts that will overlap the planned start of operational testing in October 2012. Test officials are concerned that capacity will be too constrained to meet schedules and to adequately test and demonstrate the aircraft in time to support operational testing and the full-rate production decision in October 2013. The full extent of the changes and impacts from the revised test verification strategy is still evolving. Program officials reported that if test assets become too constrained, production aircraft may eventually be used to complete development testing. The number of development flight tests had already been reduced twice before the Mid-Course Risk Reduction plan, as shown in figure 3. Test flights have now been reduced by more than 1,800 (26 percent) over the last 2 years. Other test issues and events included the following:
- Flight tests started with the initial development test aircraft, which is not considered to be a production-representative aircraft. According to program officials, initial flights of this aircraft yielded very useful information on flight characteristics. However, two incidents—an electrical flight control actuator malfunction in flight and an engine blade failure during a ground test—delayed further testing from May to December 2007. Another blade failure occurred in February 2008.

- Initial flights of the Cooperative Airborne Test Bed aircraft in 2007 verified its airworthiness, and it was then modified to integrate some JSF systems hardware and software. In December 2007, it began some limited mission flight tests, but it is not yet fully configured. The Mid-Course Risk Reduction plan revised the development test verification strategy to increase reliance on this specially configured aircraft to test capabilities that were to have been demonstrated on JSF aircraft.

- An operational assessment by testers from the Navy, Air Force, and the United Kingdom's Royal Air Force was conducted from March 2004 to December 2005 to assess development progress and current JSF mission capability. The February 2006 report concluded that the baseline flight test schedule provided little capability to deal with unforeseen problems and still meet the scheduled start of operational test and evaluation in October 2012. Testing officials said the JSF flight test program was following the historical pattern of legacy programs in making overoptimistic plans and using assumptions not supported by historical data. In legacy aircraft programs, these practices resulted in capacity constraints, program slips, and reduced testing tasks. We note that these concerns about the JSF were expressed at a time when the test program was expected to have the full complement of 15 test aircraft, not the 13 now planned.

A program as complex and technically challenging as the JSF would be expected to have some setbacks, but we believe that the cause of many cost and schedule problems can be traced to an acquisition strategy, and to decisions at key junctures, that did not adequately follow the best practices we have documented in successful commercial and government programs. The JSF started system development before requisite technologies were ready, started manufacturing test aircraft before designs were stable, and moved to production before flight tests adequately demonstrated that the aircraft design meets performance and operational suitability requirements. We previously reported that the JSF acquisition strategy incorporated excessive overlap in development and production, posing substantial risks of cost overruns, schedule slips, and late delivery of promised capabilities to the warfighter. Six years after the start of system development, only two of the JSF's eight critical technologies are mature by best practice standards; three are approaching maturity, and three are immature. Maturing critical technologies during system development led to cost growth. For example, development costs for the electric-hydraulic actuation and power thermal management systems have increased by 195 and 93 percent, respectively, since 2003. All three variants fell significantly short of meeting the best practices standard of 90 percent of drawings released at the times of their respective critical design reviews: 46 percent for the STOVL, 43 percent for the CV, and 3 percent for the CTOL. Design delays and changes were cited by the Mid-Course Risk Reduction team as the precipitating cause of the depletion of management reserves.
The late release of drawings resulted in a cascade of problems in establishing suppliers and manufacturing processes, which led to late parts deliveries, delayed the program schedule, and forced inefficient manufacturing workarounds to compensate. Also, the program began initial low-rate production in 2007 before delivering an aircraft that fully represents the expected design. Efforts to mature production are constrained because the designs are not fully proven and tested, and manufacturing processes are not demonstrated. A fully integrated, capable production aircraft is not expected to enter flight testing until fiscal year 2012, increasing the risk that problems found then may require redesign, production line changes, and retrofit expenses for aircraft already built.

On the basis of the evidence, we expect JSF program costs to increase and the schedule to worsen to the point where the development period will likely need to be extended and Initial Operational Test and Evaluation (IOT&E) and full-rate production delayed. A major program restructure seems inevitable, unless significant elements of the program can be safely eliminated or deferred. The Mid-Course Risk Reduction plan does not directly address the design and manufacturing inefficiencies that created the problem in the first place. If the root causes are not identified and fixed, the rapid depletion of management reserves can be expected to continue, and more funding will be needed to complete development. There is no reason to believe that these problems can be easily and quickly fixed. While there have been some assembly line improvements, program officials expect the manufacturing problems to persist for about 2 more years. Officials hope this plan will give them a period of time to more fully assess all the issues and reevaluate development costs and schedule requirements. They are depending on the revised test verification plans to maintain the pace and efficacy of development testing, but the test community is dubious. What seems more likely is that additional cost and time will be needed to overcome inadequate capacity, along with the elimination or deferral of more test activities. Eliminating development test activities and deferring additional tasks to operational testing increase the likelihood that design and performance problems will not be identified and resolved until late in the program, when doing so is more costly and disruptive and could delay the delivery of capabilities to the warfighter.

There are also numerous other indicators that acquisition costs will substantially increase from what is now being reported to Congress. Specifically:

- DOD has identified billions of dollars in unfunded requirements that are not in the program office estimate, including additional tooling and procurement price increases.

- A new manufacturing schedule in the works indicates continued degradation and further extends the times to first flights.

- Both the aircraft and engine development contracts have persistent, substantial cost variances that cost analysts believe are too large and occur too late in the program to resolve without adding to the budget (a sketch of the underlying earned value arithmetic follows this list).

- The prime contractor and program office are readying a new estimate at completion, which is expected to be much larger than what is now budgeted.

- Three defense organizations independent of the JSF program office have all concluded that the program office's cost estimate is significantly understated and the current schedule unlikely to be achieved.
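Several of these indicators, such as the contract cost variances and the estimate at completion, come out of standard earned value management arithmetic. A minimal sketch of those formulas follows; the input values are hypothetical and are not the JSF contracts' actual figures.

```python
# Minimal earned value management (EVM) sketch. Hypothetical figures
# in millions of dollars -- not the JSF contracts' actual values.
# BCWS: budgeted cost of work scheduled (planned value)
# BCWP: budgeted cost of work performed (earned value)
# ACWP: actual cost of work performed (actual cost)
# BAC:  budget at completion
BCWS, BCWP, ACWP, BAC = 9_000.0, 8_000.0, 10_000.0, 19_000.0

cost_variance = BCWP - ACWP        # negative means overrun
schedule_variance = BCWP - BCWS    # negative means behind schedule
cpi = BCWP / ACWP                  # cost performance index
spi = BCWP / BCWS                  # schedule performance index

# A standard estimate at completion: assume the remaining work is
# performed at the cost efficiency achieved so far.
eac = ACWP + (BAC - BCWP) / cpi

print(f"CV  = {cost_variance:+,.0f}M   SV = {schedule_variance:+,.0f}M")
print(f"CPI = {cpi:.2f}   SPI = {spi:.2f}")
print(f"EAC = {eac:,.0f}M vs BAC = {BAC:,.0f}M "
      f"(projected growth {eac - BAC:,.0f}M)")
```

In this hypothetical, a cost performance index of 0.80 turns a $19 billion budget at completion into a projected $23.75 billion at completion; the same mechanics, applied to the reported contract variances, are what lead analysts to expect a much larger official estimate.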
For these and other reasons, we believe that the current JSF cost and schedule reported to Congress are not reliable for decision making, as discussed next.

The $299.8 billion acquisition cost estimate for the JSF program is not reliable because it is not sufficiently comprehensive, accurate, documented, or credible. GAO's Cost Assessment Guide outlines best practices used throughout the federal government and industry for producing reliable and valid cost estimates. We assessed the cost-estimating methodologies used by the JSF program office against these best practices and determined that certain key costs were excluded, assumptions used were overly optimistic, documentation was inadequate, and no analysis had been done to state the confidence and certainty the program office had in its cost estimate. As a result of these weaknesses, the JSF program acquisition cost estimate is not reliable for decision making. Appendix II contains a more detailed discussion of the specific shortcomings we and other DOD organizations have found in the program office cost-estimating methodologies and their potential impacts.

Estimates are comprehensive when they contain a level of detail that ensures that all pertinent costs are included and no costs are double-counted. It is important to ensure the completeness, consistency, and realism of the information contained in the cost estimate. Our review of the JSF cost estimate showed that several cost categories totaling more than $10 billion are excluded or underreported in the program office estimate. These items are summarized in table 3:

- The current acquisition cost estimate includes only near-term development funding for the alternate engine program, excluding procurement-related and other development costs of about $6.8 billion.

- The military services have not firmly established basing needs for the entire planned JSF force, but an earlier top-line estimate for military construction was at least $2 billion. The current total cost estimate includes only near-term budgeted costs of $533 million.

- The JSF program recently increased its estimate of tooling costs by $2.1 billion due to the inclusion of additional tooling requirements and estimating methodology changes.

- Cost and performance trade-offs during development deferred some requirements from the current program that may later require additional funding. The program office has not quantified these deferrals, but Naval Air Systems Command (NAVAIR) officials told us that the amount could be in the billions of dollars.

Estimates are accurate when they are based on an assessment of the costs most likely to be incurred. Therefore, when costs change, best practices require that the estimate be updated to reflect changes in technical or program assumptions and new phases or milestones. DOD's Cost Analysis Improvement Group (CAIG) found that the assumptions the JSF program office used for weight growth, staffing head counts, commonality savings for similar parts, and outsourced labor rate savings were overly optimistic and not supported by historical data. For example, the program office used a 3 percent factor for weight growth, whereas the CAIG used a 6 percent factor more in line with historical data from other programs.
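To see how much the choice of growth factor matters, consider a hypothetical airframe; the empty weight below is invented for illustration and is not a JSF figure:

```latex
W_0 = 30{,}000~\text{lb (hypothetical empty weight)}:\qquad
0.03\,W_0 = 900~\text{lb}, \qquad 0.06\,W_0 = 1{,}800~\text{lb}
```

Because weight-based cost estimating relationships scale airframe cost with weight, and because performance margins shrink as weight grows, halving the assumed growth factor understates both the likely cost and the margin consumed.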
With three variants, a joint program with international participation, three different engines (cruise, second engine, and lift) in development, and more than double the operational flight software code of the F-22A (and more than four times that of the F/A-18E/F), the JSF program is substantially more complex than either of those programs, and therefore may not merit assumptions even as optimistic as their historical data.

The program cost estimate is also considered inaccurate because it relies on data and reports found to be deficient. The JSF program office used Lockheed Martin earned value management (EVM) data in estimating development costs. However, DCMA determined that the data were of very poor quality and issued a report in November 2007 stating that they were deficient to the point where the government was not obtaining useful program performance data to manage risks. Among other problem areas, DCMA found that the contractor was using management reserve funds to alter its own and its subcontractors' reported performance levels and cost overruns. DCMA officials who conducted the review told us that the poor quality of the data invalidated key performance metrics regarding cost and schedule, as well as the contractor's estimate of the cost to complete the contract. At the time of our review, corrective actions and plans were in process.

Cost estimates are well documented when they can be easily repeated or updated and can be traced to original sources through auditing. Rigorous documentation increases the credibility of an estimate and helps support an organization's decision-making process. The documentation should explicitly identify the primary methods, calculations, results, rationales, assumptions, and sources of the data used to generate each cost element. All the steps involved in developing the estimate should be documented, so that a cost analyst unfamiliar with the program can recreate the estimate with the same result. We found that the JSF cost model is highly complex and that the level of documentation is not sufficient for someone unfamiliar with the program to easily recreate it. Specifically, the program office does not have formal documentation for the development, production, and operation and support cost models and could not provide detailed documentation, such as quantitative analysis, to support its assumptions. For the development cost estimate, JSF program officials said they did not have a cost model that was continually updated with actual costs. Instead, the program office relies heavily on earned value management data and contractor analysis to update its development cost estimate.

Estimates are credible when they have been cross-checked with an independent cost estimate and when a level of uncertainty associated with the estimate has been identified. An independent cost estimate provides an unbiased test of the reasonableness of the estimate and reduces the cost risk associated with the project by demonstrating that alternative methods generate similar results. Several independent organizations have reviewed the JSF program and are predicting much higher costs than the program office. Table 4 provides a summary of these assessments. CAIG estimates were prepared using different and more realistic assumptions and schedule projections than the program office estimate.
NAVAIR, which provides resources to the JSF program office cost-estimating function, derived much higher cost estimates and a longer development period, based on historical cost performance and on removing what it considered artificial and unachievable schedule constraints. NAVAIR officials were also concerned about the amount and future impact of requirements potentially traded away or pushed into the procurement phase, which could be even more costly. DCMA projected higher development costs for the aircraft contract based on adjusted cost and schedule performance to date and an assumption of additional slips. DCMA officials continue to examine the contractor's deficient earned value management system and its misreporting of cost and schedule data.

The JSF program has not conducted a fully documented independent cost estimate since system development started in 2001. Despite reliability concerns and all the significant events and changes in cost, schedule, and quantity since then—those reported by the program office as well as those identified by other defense organizations and by us—DOD does not intend to conduct another one until required to support the full-rate production decision in 2013. If so, the program—DOD's largest acquisition and one vitally important to our allies—will have a 12-year gap between officially validated cost estimates. The program may complete development and be 6 years into production before an accurate, up-to-date, and reliable official cost estimate is done. Despite widely held views that costs will likely be higher and the schedule longer than reported, the JSF program continues to be funded to the level of the program office estimate. DOD acquisition policy requires fully documented total program life-cycle cost estimates, validated by the CAIG, at certain major decision points and when mandated by the milestone decision authority. DOD officials decided not to do such an estimate at the start of low-rate initial production in 2007, which typically coincides with a major milestone.

The JSF is entering its most challenging phase as it finalizes three designs, matures manufacturing processes, conducts flight tests, and ramps up production. The first and foremost challenge is maintaining affordability in three dimensions: reasonable procurement prices, stable annual funding, and economical life-cycle operating and support costs. If affordability is not maintained during the acquisition program, the quantities bought by the United States and its allies may either decrease or consume more of the available defense budgets. Over the life cycle of a system, higher costs to maintain readiness drive up annual operating expenses and may limit funds for new investments. Other program challenges could affect future quantities and the mix of aircraft procured by the United States and our allies.

From its outset, the JSF goal has been to develop and field an affordable, highly common family of strike aircraft. Rising unit procurement prices and somewhat lower commonality than expected raise concerns that the United States and its allies may not be able to buy as many aircraft as currently planned. Average unit procurement costs are up 27 percent since the 2004 Replan and 51 percent since the start of system development (see table 1). Rising prices erode buying power, likely resulting in reduced quantities and delays in delivering promised capabilities to the warfighter.
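The buying-power point can be made concrete with a small sketch: under a fixed procurement budget, the quantity affordable falls inversely with unit cost. The budget and baseline unit cost below are hypothetical; only the 27 and 51 percent growth figures come from the report.

```python
# Buying-power erosion under a fixed procurement budget.
# Budget and baseline unit cost are hypothetical; the growth
# percentages are the ones cited in the report.
budget = 100_000.0        # fixed procurement budget, $ millions
base_unit_cost = 80.0     # hypothetical baseline unit cost, $ millions

for growth in (0.00, 0.27, 0.51):
    unit_cost = base_unit_cost * (1 + growth)
    quantity = int(budget // unit_cost)   # whole aircraft only
    print(f"unit-cost growth {growth:4.0%}: {quantity} aircraft affordable")
```

In this hypothetical, the 27 percent unit-cost growth alone erodes about one-fifth of the quantity that a flat budget can buy.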
The program also places an unprecedented demand for funding on the defense budget—an annual average of about $11 billion for the next two decades—with attendant funding risk should political, economic, or military conditions change. The JSF will have to compete annually with other defense and nondefense priorities for the shrinking discretionary federal dollar. To complete the acquisition program as currently planned, the JSF will require about $269 billion from 2008 through 2034. Annual funding requirements for procurement increase rapidly as production ramps up to the full-rate production decision expected in October 2013. During the peak years of production, JSF procurement funding requirements are expected to average about $12.5 billion per year for the 12-year period spanning fiscal years 2012-2023. Figure 4 illustrates the annual funding requirements as of December 2006 and contrasts these with plans from prior years. The December 2003 line shows the funding profile resulting from the 2004 Replan, and the 2005 line shows the jump in funding needed to accommodate program cost increases in the period following the Replan. The 2006 data reflect the impact on annual funding requirements of extending procurement 7 years. The extension reduced annual budget amounts, but requires continued funding through 2034 to procure the deferred quantities. DOD calculated that the extension added $11.2 billion to total procurement cost.

A third aspect of affordability is the life-cycle cost of ownership. DOD is recapitalizing its tactical air forces by replacing aging legacy systems with new, more capable systems, like the JSF, that incorporate reliability and maintainability features designed to reduce future operating costs. Recently, DOD sharply increased its projection of JSF operating and support costs compared to previous estimates. The December 2006 SAR projected life-cycle operating and support costs for all three variants at $650.3 billion, almost double the $346.7 billion shown in the December 2005 SAR and similar earlier estimates. The operating cost per flying hour for the JSF CTOL is now estimated to be greater than the current flying hour cost for the F-16, one of the legacy aircraft to be replaced. Officials explained that the amounts reported in 2005 and before were early estimates based on very little data, whereas the new estimate is of higher fidelity, informed by more information and knowledge obtained as JSF development progresses. Factors responsible for the increased estimate included a revised fielding and basing plan, changes in repair plans, revised costs for depot maintenance, increased fuel costs, increased fuel consumption, revised estimates for manpower and mission personnel, and a new estimate of the cost of the JSF's autonomic logistics system. Overall, the cost of ownership represents a very large and continuing requirement for the life of the fielded aircraft. According to the new estimate, we calculate that DOD will incur about $24 billion per year to operate and support JSF units, assuming the quantities now planned and an 8,000-hour service life for each JSF aircraft fielded over time; in rough terms, this is the $650.3 billion life-cycle estimate spread over the approximately 27 years of fleet operations that such a service life implies.

From the inception of the program, DOD has anticipated major cost savings from developing and fielding JSF variants that share many common components and subsystems. While a degree of commonality has been achieved, expectations are now lower than they were at program start.
Substantial commonality has been maintained for the mission systems among all three variants and for the propulsion system of the conventional and carrier variants. However, commonality among airframes and vehicle systems has declined overall since the start of system development. Figure 5 shows the decline in airframe commonality, the most costly of the four major categories. For example, in October 2001 DOD anticipated that the CTOL airframe would be more than 60 percent common with the other variants. Commonality had declined to about 40 percent by December 2006. Lesser commonality will likely increase acquisition and future support costs.

The current JSF program shows a total quantity of 680 aircraft to be procured by the Department of the Navy, but the allocation between the CV and STOVL variants has not been officially established. We observe that the Navy and Marine Corps have somewhat divergent views on the quantities, intended employment, and basing of JSF aircraft. The Navy wants the Marine Corps to buy some CV variants and continue to man some of its carrier-based squadrons. The Marine Corps, however, wants a future strike force composed solely of the STOVL variant and has established a requirement for 420 aircraft. During conflicts, the Marines plan to forward deploy JSFs to accompany and support expeditionary ground forces. Navy officials told us that they have some time to make decisions because they will be buying a mix of both CVs and STOVLs in the early years of production and because funding requirements are not significantly affected, since unit prices for the two variants are about the same. However, we believe the continuing disagreements on basing, employment, and force mix will have increasingly strong impacts on JSF plans, costs, and international partner relations. Decreased quantities of STOVLs bought by the Department of the Navy would likely result in higher unit prices paid by the Marine Corps and by the two allies buying STOVLs. Fundamental decisions on the mix of naval aircraft also affect future operating and support costs, military construction, and carrier requirements. Officials also have some reservations about whether they can afford the quantities now planned at peak production rates. Navy and Marine Corps officials told us last year that buying the JSF at the currently planned rate—requiring a ramp-up to 50 CV and STOVL aircraft by fiscal year 2015—will be difficult to achieve and to afford, particularly if costs increase and schedules slip. Officials told us that a maximum of 35 per year was probably affordable, given budget plans at that time.

Weight growth was the most significant challenge faced by the JSF program early in development. The redesign effort to address weight growth was the single largest factor behind the $10 billion cost increase and 18-month extension in the development schedule since the start of system development. While the weight increase has been addressed for now, projections are that aircraft weight will continue to increase during the balance of the development period, consistent with weight increases seen on legacy aircraft programs. According to an OSD official with knowledge of legacy aircraft development efforts, half of all weight growth during development can typically be expected after first flight but prior to initial operational capability, and additional small but persistent weight increases can be expected during the aircraft's service life.
First flight of a production-representative JSF has not yet occurred, and weight is running very close to the limits, as evaluated by engineering analyses and trend extrapolation. As designs continue to mature and flight testing intensifies, maintaining weight within limits to meet warfighter capability requirements will be a continuing challenge and poses a major risk to meeting cost, schedule, and performance goals.

The clear implication of performance to date and of the Mid-Course Risk Reduction plan is that additional cost and time will be needed to complete JSF development. The plan to recapitalize management reserves at the expense of test assets is risky, with potential major impacts down the road on costs, performance requirements, and fielding schedules. The remaining development effort will be less robust than originally planned and depends on a revised test verification strategy that is still evolving. As a result, the development effort has an increased risk of not fully measuring JSF capabilities and deficiencies prior to operational testing and could result, in the words of one DOD official, in the future operational test period being one of discovery rather than validation of the aircraft's capabilities and deficiencies. Finding and fixing deficiencies during operational testing, after production has ramped up, is costly and disruptive and delays getting new capabilities to the warfighter.

Because the program cost estimate is not reliable when judged against best practices, the decision making and oversight of Congress, top military leaders, and our allies are diminished. The picture they do have is one where costs continue to rise and schedules slip. The situation will be considerably worse if the cost estimates of defense offices outside the program prove more accurate than the lower official in-house estimates. Waiting 12 years between fully documented and validated total program cost estimates is contrary to policy and good management, given all the changes in cost, quantity, schedules, and other events that have occurred since the 2001 estimate. The size of the JSF acquisition, its impact on our and allied tactical air forces, and the unreliability of the current estimate argue for an immediate new and independent cost estimate and uncertainty analysis. This is critical information needed by DOD management to make sound trade-off decisions against competing demands and by Congress to perform oversight and hold DOD accountable.

Program problems and setbacks must be put into perspective: the JSF is DOD's largest and most complex aircraft acquisition and an integral component of the future force. Problems happen in such an environment. Progress has been made and some significant challenges overcome, but more await as the program moves into flight testing and low-rate production. Maintaining affordability, so that the U.S. military and our allies can buy, field, and support the numbers needed by the warfighter, remains the overarching challenge.

Because of the elevated risks and the valid objections raised by the test community and other DOD offices, we recommend that the Secretary of Defense direct elements of the department to revisit and, if appropriate, revise the recently approved Mid-Course Risk Reduction plan.
This should be supported by an intensive analysis that includes the causes of management reserve depletion, an evaluation of progress against the baseline manufacturing schedule, and the progress made in correcting deficiencies in the contractor's earned value management system. It should also include an in-depth examination of alternatives to the current plan and address the specific concerns raised by officials regarding testing capacity, the integration of ground and flight tests, and backup plans should capacity become overloaded.

So that DOD may have an accurate picture of JSF cost and schedule requirements, and so that Congress may have an accurate understanding of future funding requirements, we recommend that the Secretary of Defense direct the following:

1. The JSF program office should update its cost estimate using best practices, so that the estimate is comprehensive, accurate, well documented, and credible. Specifically, the JSF program office should include costs that were inappropriately omitted from the estimate; identify performance requirements that have been traded off in development; fully document assumptions, data sources, and methodologies in the cost model; and perform a risk and uncertainty analysis to focus on key cost drivers and reduce the risk of cost overruns.

2. The program should conduct a full schedule risk analysis to ensure that its schedules are fully understood, manageable, and executable.

3. DOD should conduct a full, independent cost estimate according to the highest standards of any DOD cost-estimating organization, based on a comprehensive review of program data; this cost estimate should be reviewed by an independent third party such as the CAIG, and the results should be briefed to all interested parties in DOD and Congress.

DOD provided us with written comments on a draft of this report. The comments appear in appendix III. DOD also provided several technical comments, which we incorporated in this report. DOD substantially agreed with our recommendation to revisit the Mid-Course Risk Reduction plan. DOD stated that the plan is a cost-effective approach with a manageable level of risk that will be monitored and revised if necessary. We believe the plan's reduction of test resources will hamper development testing and that the department will eventually have to make programmatic adjustments, adding cost and time. DOD also substantially agreed with our three recommendations on cost estimating. DOD indicated that it will implement all elements except the risk and uncertainty analysis, which is unwarranted in its view. We believe that risk and uncertainty analysis is an important tool: it establishes a confidence interval for a range of possible costs—as opposed to a single-point estimate—and facilitates good management decisions and oversight. Such analysis is a best practice in our Cost Assessment Guide, and we note that OSD's Cost Analysis Improvement Group supports and uses this cost-estimating tool.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will also provide copies to others on request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Staff members making key contributions to this report are listed in appendix IV. To determine the Joint Strike Fighter (JSF) program's progress in meeting cost, schedule, and performance goals, we received briefings by program and contractor officials and reviewed financial management reports, budget documents, annual Selected Acquisition Reports, monthly status reports, performance indicators, and other data. We compared reported progress with prior years' data, identified changes in cost and schedule, and obtained officials' reasons for these changes. We interviewed Department of Defense (DOD), JSF program, and contractor officials to obtain their views on progress, ongoing concerns and actions taken to address them, and future plans to complete JSF development and ramp up procurement. To assess plans and risks in development, manufacturing, and test activities, we examined program documents and interviewed DOD and contractor officials about changes to the test plan and actions taken to modify these plans to address funding and schedule challenges. This included interviewing program and Office of the Secretary of Defense (OSD) officials about changes to development testing that evolved in response to a projected shortfall in management reserves and a goal to stay on schedule toward a full-rate production decision in October 2013. We reviewed information compiled by program officials to document the options they considered viable, the changes to the test plan and test resources that would occur under a proposed risk reduction option, and the challenges and risks of taking this course of action and possible fallback plans. We also reviewed stakeholder views of the options and the benefits and challenges of going forward with the changes made to the development test plan. We collected manufacturing cost and work performance data to assess progress against plans, determined reasons for manufacturing delays, discussed program and contractor plans to improve, and identified expected impacts on development and operational tests. In assessing program cost estimates, we also evaluated the JSF joint program office estimating methodologies, assumptions, and results to determine whether the official cost estimates were comprehensive, accurate, well documented, and credible. We used our draft guide on estimating program schedules and costs, which is based on extensive research of best practices. Our Cost Assessment Guide considers an estimate to be accurate if it is not overly conservative, is based on an assessment of the most likely costs, and is adjusted properly for inflation; comprehensive if its level of detail ensures that all pertinent costs are included and no costs are double-counted; well documented if the estimate can be easily repeated or updated and can be traced to original sources through auditing; and credible if the estimate has been cross-checked with an independent cost estimate and a level of uncertainty associated with the estimate has been identified. We also interviewed the JSF program office's cost estimating team to obtain a detailed understanding of the cost model and met with the Department of Defense Cost Analysis Improvement Group (CAIG) to understand its methodology, data, and approach in developing its Joint Strike Fighter independent cost estimate. 
We analyzed earned value management (EVM) reports and met with the Naval Air Systems Command and the Defense Contract Management Agency (DCMA) to discuss the EVM data and to obtain their independent cost estimates for JSF development efforts. To assess the validity and reliability of prime contractors' earned value management systems and reports, we analyzed the EVM reports and reviewed audit reports prepared by DCMA. To identify future challenges, we continued discussions with DOD and contractor officials on forward-looking plans and areas of emphasis. We analyzed budget requirements from successive plans and tracked the factors contributing to changes in the budget. We collected information on commonality assessments among the three variants and on commonality trends. With Navy and Marine Corps officials, we discussed future plans on the employment and quantity mix of aircraft and identified differences in plans and perspectives. With engineers, we discussed past and present weight growth issues and plans for controlling future growth. In performing our work, we obtained information and interviewed officials from the JSF Joint Program Office, Arlington, Virginia; Aeronautical Systems Center, Wright-Patterson Air Force Base, Ohio; Naval Air Systems Command, Patuxent River, Maryland; Defense Contract Management Agency, Fort Worth, Texas; and Lockheed Martin Aeronautics, Fort Worth, Texas. We also met and obtained data from the following OSD offices in Washington, D.C.: Director, Operational Test and Evaluation; Program Analysis and Evaluation; Cost Analysis Improvement Group; Portfolio Systems Acquisition (Air Warfare); and Systems and Software Engineering. We conducted this performance audit from June 2007 to February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The $299.8 billion acquisition cost estimate for JSF is not reliable because it is not sufficiently comprehensive, accurate, documented, or credible. Cost-estimating organizations throughout the federal government and industry use certain key practices to produce sound cost estimates that are comprehensive and accurate and that can be easily traced, replicated, and updated. GAO's Cost Assessment Guide outlines practices that, if followed correctly, should result in high-quality, reliable, and valid cost estimates that management can use for making informed decisions. We assessed the methodology used by the JSF program office to develop its development cost estimate against four best practices characteristics: that an estimate should be comprehensive, accurate, well documented, and credible. We found that the JSF program office has not followed best practices for developing a reliable and valid life cycle cost estimate because it excluded certain key costs, relied on overly optimistic assumptions, did not document the estimate well, and performed no analysis to establish the level of confidence in the estimate. As a result of these weaknesses, the JSF program acquisition cost estimate is not reliable for decision making. 
Estimates are comprehensive when they contain a level of detail that ensures that all pertinent costs are included and no costs are double-counted. It is important to ensure the completeness, consistency, and realism of the information contained in the cost estimate. Our review of the JSF development cost estimate showed that there are several cost categories totaling more than $10 billion that are excluded or underreported in the program office estimate. These items are summarized in table 5. Alternate engine program. Congress has been interested in DOD developing a second source for the JSF engine to induce competition and to reduce operational risks in the future should the sole engine develop problems requiring the grounding of all JSFs. DOD has not wanted to pursue this second engine source and twice removed funding from the JSF program line. In 2005, DOD deleted a total of about $7.2 billion from the JSF's development and procurement accounts for the alternate engine. In 2006, it reinserted $340 million for the program, reflecting only development funding in the future years defense program. This leaves about $6.8 billion out of the JSF cost estimate for this program. Military construction. In prior years, the JSF cost estimate included $2 billion for military construction costs. Since the services had not yet fully established basing plans, this amount was a top-level parametric estimate not based on discrete estimates for specific sites. The current December 2006 cost estimate reported military construction costs of $533 million, reflecting only the amount budgeted in the fiscal year 2008 future years defense program. This means that about $1.5 billion in military construction—and possibly more—will eventually be required for specific basing needs of the JSF fleets. DOD will update military construction estimates as the services identify specific site requirements. Tooling. The JSF program recently increased its estimate of tooling costs due to the inclusion of additional tooling requirements and estimating methodology changes. This change is ongoing and has not yet been included in official program estimates. According to a recent press report, a Lockheed Martin official stated that the full requirement to support procurement by our allies was not adequately factored into prior tooling estimates. The program estimates the additional cost through 2015 at about $2.1 billion. Deferred capabilities. Cost and performance trade-offs during development have resulted in some requirements being deleted from the program cost estimate and deferred until later years. This includes a number of planned capabilities dropped from the final block of development software. The program office has not quantified the cost of these deferred capabilities, and the costs are not reflected anywhere in the program office's life-cycle cost estimate. Naval Air Systems Command (NAVAIR) officials told us that the total deferred amount could be in the billions of dollars. We note that prior acquisitions such as the Global Hawk and F-22A programs also deferred requirements that would later need additional funding. For example, we reported in 2005 that the Global Hawk program costs did not include $400.6 million in known additional procurement costs for sensors, ground station enhancements, and other items required to achieve the system's initial full-up capability. These costs had been in the program baseline but were later deferred and reclassified because of cost pressures and schedule changes. 
Similarly, the Air Force's $5.9 billion modernization and reliability improvement program includes capabilities deferred from the acquisition program and reliability enhancements needed to correct deficiencies and achieve the level of reliability that was supposed to be accomplished during acquisition. Estimates are accurate when they are based on an assessment of the costs most likely to be incurred. Therefore, when costs change, best practices require that the estimate be updated to reflect changes in technical or program assumptions and new phases or milestones. DOD's Cost Analysis Improvement Group found that the assumptions the JSF program office used for weight growth, staffing head counts, commonality savings for cousin (similar) parts, and outsourced labor rate savings could be too optimistic, given the program's complexity. With three variants and three engines (cruise, alternate, and lift) in development, multiple customers, and more than twice as much operational flight software as the F-22A and four times as much as the F/A-18E/F, the JSF acquisition program is substantially more complex than those contemporary systems, and therefore may not merit assumptions that are even as optimistic as the historical data for those programs. The following table shows some major differences in assumptions used by the program office and the CAIG in estimating JSF costs. JSF program officials told us that they use Lockheed Martin earned value management data in creating their estimate of JSF development costs. However, DCMA, which reviews contracts and industrial performance for DOD, identified these data as being of very poor quality, calling into question the accuracy of any estimate based on them. In November 2007, DCMA issued a report saying that Lockheed Martin's tracking of cost and schedule information at its aerospace unit in Fort Worth, Texas—where the JSF program is managed—is deficient to the point where the government is not obtaining useful program performance data to manage risks. DCMA said that Lockheed's earned value data at the Fort Worth facility are not sufficient to manage complex, multibillion-dollar weapon systems acquisition programs. Among other problem areas, DCMA found that Lockheed had not clearly defined roles and responsibilities, and was using management reserve funds to alter its own and subcontractor performance levels and cost overruns. These issues hurt DOD's ability to use the Lockheed data to determine product delivery dates and develop accurate estimates of program costs. DCMA officials who conducted the review at Lockheed Martin told us that the poor quality of the data invalidated key performance metrics regarding cost and schedule, as well as the contractor's estimate of the cost to complete the contract. NAVAIR had also raised concerns about Lockheed Martin's earned value system as early as June 2005, and these officials told us they were in agreement with the findings in the November 2007 DCMA report. NAVAIR officials also said that most deficiencies identified by the DCMA report have the effect of underreporting costs, and that the official program cost estimates will increase if the deficiencies are corrected. Also in 2007, the prime contractor alerted DOD to a billing error involving duplicate charges for the portion of the earned award fee paid to subcontractors. This resulted in $266 million in overcharges. 
Government officials became concerned that such a large discrepancy could occur without the government's knowledge and questioned the adequacy of the contractor's billing system and accounting procedures. DCMA and the Defense Contract Audit Agency were tasked to conduct an investigation. Their investigation found that the overbilling resulted from an accounting system error in the internal handling of award fees on the JSF contract. According to the investigation report, the error that created the overbilling has been corrected, and the government has recouped the overbilled principal and interest. Cost estimates are well documented when they can be easily repeated or updated and can be traced to original sources through auditing. Rigorous documentation increases the credibility of an estimate and helps support an organization's decision-making process. The documentation should explicitly identify the primary methods, calculations, results, rationales or assumptions, and sources of the data used to generate each cost element. All the steps involved in developing the estimate should be documented so that a cost analyst unfamiliar with the program can recreate the estimate with the same result. We found that the JSF cost model is highly complex and the level of documentation is not sufficient for someone unfamiliar with the program to easily recreate it. Specifically, we found that the program office does not have formal documentation for the development, production, and operating support cost models. Instead, it relies on briefing slides that describe the methodology and the data sources used, but these do not provide detailed documentation, such as the quantitative analysis to support the assumptions involved in producing the life-cycle cost estimate. For the development cost estimate, the JSF program office acknowledged that it does not have a cost model that is continually updated with actual costs. Instead, the program office relies heavily on earned value management data and analysis from Lockheed Martin to update its development cost estimate, but it provided us no documentation to back up this claim. Estimates are credible when they have been cross-checked with an independent cost estimate and when a level of uncertainty associated with the estimate has been identified. An independent cost estimate provides the estimator with an unbiased test of the reasonableness of the estimate and reduces the cost risk associated with the project by demonstrating that alternative methods generate similar results. Several independent organizations have reviewed the JSF program and are predicting much higher costs than the program office. Table 7 below provides a summary of these assessments. In 2005, the CAIG performed an independent estimate of JSF program development costs, which include the cost of the Lockheed Martin contract and fees as well as the government's in-house costs. The CAIG projected that the development phase would cost $5.1 billion more than the program office expected, measured against the program office's most recent data available at that time, from the 2004 Selected Acquisition Report (SAR). The CAIG official in charge of the estimate told us that while the CAIG has not formally presented an updated estimate to the program office, the order of magnitude of difference between the CAIG and program office estimates remains roughly the same as at the time of the 2005 independent estimate. 
The variance between the CAIG and program office estimates grew significantly as the program encountered problems. The CAIG explained that some of the $15.2 billion growth in its development cost estimate from Milestone B in 2001 to the December 2004 SAR was due to initial assumptions that 5,000 engineers would be available to work on the three JSF variants. This assumption turned out to be too optimistic since only about 3,000 engineers have been working on the program. Because fewer people were available to support JSF design and development, the CAIG shifted the program schedule to the right in its estimate, increasing costs. The program office, on the other hand, assumed it could get the same effort done with fewer people. In addition, the CAIG used historical data from the F-22A program, including the costs to design the aircraft, test it, and redesign any fixes, and adjusted these data to account for differences in the JSF program, including the three variants. The program office relies mostly on contractor data. When it was awarded the development contract in 2001, Lockheed Martin agreed to develop the JSF aircraft for $16.5 billion, excluding fee. In April 2005, the development program was rebaselined, adding more than $6 billion to reflect funds added to the program due to weight growth issues in 2003. This raised the JSF baseline development contract cost estimate to $23.2 billion, excluding fee. Despite the additional funding to cover preexisting cost and schedule overruns, Lockheed Martin's JSF development cost and schedule performance has continued to decline over time. As shown in figure 6, cost and schedule variances continued on a downward trend despite the April 2005 rebaseline. As of September 2007, Lockheed Martin was reporting cumulative cost overruns of $305.7 million and was behind schedule by an amount valued at $251.3 million. Key drivers of cost overruns to date have included unfunded requirements for design changes, loss of commonality savings, critical part shortages, high change traffic, inefficient productivity due to performing work out of sequence, constant rework, suppliers' performance, late release of engineering requirements, a greater than planned effort for designs of the short takeoff and vertical landing and the conventional takeoff and landing variants, and additional radar testing. Some of this cost variance is due to optimistic assumptions at the beginning of the program. For example, cost estimates assumed that only one design iteration would be needed, whereas in reality it takes numerous design iterations before the final designs are determined. Despite its poor performance since the rebaseline, Lockheed Martin was predicting only a $113 million cost overrun at contract completion. This is unrealistic given the persistence and size of the $305.7 million overrun reported in September 2007, at which point the contract was 67 percent complete. To achieve a $113 million overrun at completion, Lockheed Martin could not merely avoid further cost variances for the remainder of the contract, which would still leave the overrun at $305.7 million; it would have to significantly improve its performance and recover nearly $193 million of the existing overrun through underruns. This is unlikely given that studies of more than 700 defense programs have shown limited opportunity for getting a program back on track once it is more than 15 percent to 20 percent complete. 
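The arithmetic behind this judgment can be illustrated with a standard earned value cross-check. The sketch below is a minimal illustration, not the program office's or DCMA's actual model: it uses the rebaselined $23.2 billion contract value, the 67 percent completion figure, and the September 2007 variances cited above, and applies the common CPI-based estimate-at-completion formula, in which current cost efficiency is assumed to persist.

```python
# Minimal earned value management (EVM) cross-check using figures cited in
# this report. The CPI-based estimate at completion (EAC) is a standard EVM
# technique, not the program office's official method; backing earned value
# out of the 67 percent completion figure is a simplifying assumption, so
# the result is approximate.

BAC = 23_200.0          # budget at completion: rebaselined contract value, $ millions (excluding fee)
PCT_COMPLETE = 0.67     # reported contract completion as of September 2007
COST_VARIANCE = -305.7  # cumulative cost variance, $ millions (negative = overrun)
SCHED_VARIANCE = -251.3 # cumulative schedule variance, $ millions (negative = behind)

earned_value = BAC * PCT_COMPLETE              # budgeted cost of work performed (EV)
actual_cost = earned_value - COST_VARIANCE     # CV = EV - AC, so AC = EV - CV
planned_value = earned_value - SCHED_VARIANCE  # SV = EV - PV, so PV = EV - SV

cpi = earned_value / actual_cost    # cost performance index: below 1.0 means over cost
spi = earned_value / planned_value  # schedule performance index: below 1.0 means behind

eac = BAC / cpi                     # EAC if current cost efficiency persists
print(f"CPI = {cpi:.3f}, SPI = {spi:.3f}")
print(f"CPI-based EAC: ${eac:,.0f}M, an overrun of ${eac - BAC:,.0f}M "
      f"versus the contractor's predicted $113M")
```

Under these assumptions the projected overrun at completion works out to roughly $456 million, about four times the contractor's prediction, which is consistent with outside reviewers' view that the $113 million figure is optimistic.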
The true cost to complete the contract may be significantly greater, as DCMA has expressed its concern to DOD over Lockheed Martin's failure to regularly update its estimate of the costs to complete the JSF contract, stating that Lockheed's infrequent updates are insufficient to provide the government with information bearing on potential cost growth and funding needs. Like the CAIG, both DCMA and NAVAIR believe that Lockheed Martin's estimate at completion is too optimistic and that the program office will most likely require significantly more funding to complete the development program. NAVAIR provides resources to the JSF program office cost-estimating function, and it estimated in 2006 that JSF development costs could be almost $8 billion to $13 billion higher than estimated by the program office, or else cost billions more in procurement due to requirements pushed off from development. NAVAIR officials told us they believe that the 2006 estimate continues to be accurate today, but explained that since the JSF program is a joint program they do not control JSF cost-estimating procedures, although their estimates are briefed to JSF program management. The estimate removed what NAVAIR views as artificial constraints on the JSF schedule and projected the schedule forward, concluding that it would likely slip 19 to 27 months, and combined this projection with trends in cost performance. NAVAIR officials said that their confidence in the achievability of the JSF program schedule is low, as the master schedule comprises more than 600 individual schedules, making it difficult to accurately assess the achievability of the overall schedule. DCMA estimates that JSF development could cost as much as $4.9 billion more than program office estimates, accounting for poor cost and schedule performance to date and assuming further schedule slips of up to 12 months. DCMA confirmed that a schedule risk analysis, which uses statistical techniques to obtain a measure of confidence in completing a program, has never been performed on the JSF program. Since state-of-the-art development programs have historically taken longer than planned, a schedule risk analysis should be conducted to determine the level of uncertainty in the schedule. Despite these outside organizations' predictions of significantly higher costs to complete the JSF contract and the lack of realism in the contractor's own estimate, the JSF program office continues to use the contractor's estimate as its own. In addition to expected cost overruns for JSF development, the CAIG is predicting significantly higher costs for JSF for the military services to purchase the aircraft. Using different assumptions about weight growth, labor rates, avionics and propulsion costs, and contractor fees, the CAIG calculated significantly higher unit costs for the aircraft variants (see table 6 earlier in this report for a comparison of CAIG and program office assumptions affecting both development and procurement cost estimates). Multiplying these higher unit costs by the expected procurement quantities leads to a more than $33 billion (in constant year 2002 dollars) difference from official program office estimates for procurement costs. The CAIG estimates were briefed in 2006 to the DOD working group that oversees the JSF program, and top OSD officials were aware of the discrepancy between the CAIG and JSF program office estimates. 
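Both the schedule risk analysis that DCMA says has never been performed and the cost uncertainty analysis discussed below rest on the same statistical idea: sample uncertain inputs many times and report a confidence level rather than a single point. The sketch below is a toy Monte Carlo illustration; the three tasks, their duration ranges, and the serial dependency are hypothetical stand-ins for the more than 600 individual schedules in the real JSF master schedule.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical three-task serial schedule with (optimistic, most likely,
# pessimistic) durations in months; a real analysis would cover the full
# network of program schedules.
tasks = [
    ("design close-out",       (10, 14, 24)),
    ("ground and flight test", (18, 24, 40)),
    ("test reporting",         (4,  6,  10)),
]

TRIALS = 100_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for _, (lo, mode, hi) in tasks)
    for _ in range(TRIALS)
)

# A deterministic plan simply adds the "most likely" durations, ignoring risk.
point_estimate = sum(mode for _, (_, mode, _) in tasks)

p50 = totals[TRIALS // 2]        # median completion time
p80 = totals[int(TRIALS * 0.8)]  # 80th-percentile completion time
confidence = sum(t <= point_estimate for t in totals) / TRIALS

print(f"Deterministic plan: {point_estimate} months")
print(f"P50: {p50:.1f} months, P80: {p80:.1f} months")
print(f"Probability of meeting the deterministic plan: {confidence:.0%}")
```

Because the pessimistic tails in such schedules are typically longer than the optimistic ones, the sum of most-likely durations is met in well under half the trials, which is the general pattern behind the observation, discussed next, that an unqualified point estimate is virtually certain to be wrong.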
The program office has not conducted an uncertainty analysis on its cost estimates despite the complexity of the program and associated risk and uncertainty. As shown in table 10, the JSF program is significantly more complicated than comparable aircraft development programs. This complexity makes it all the more necessary to fully account for the effect various risks can have on the overall cost estimate. An uncertainty analysis assesses the extent to which the variability of an outcome variable is caused by uncertainty in the input parameters. It should be performed for every cost estimate in order to inform decision makers about the likelihood of success. In performing uncertainty analysis, an organization varies the effects of multiple elements on costs and, as a result, can express a level of confidence in the point estimate. Such an analysis would provide a range of possible values to program management and an estimate of the likelihood of the various possibilities. Instead, the program office only offers a single-point estimate—one dollar figure, with no associated range—and no technical analysis of the likelihood that this estimate is credible. The lead cost estimator for the program office acknowledged that such a single-point estimate is virtually certain to be wrong, but also stated that the analysis used to develop a range of values is easily manipulated and therefore not valuable. It is GAO's view that a point estimate should be accompanied by an estimated confidence level to quantify the uncertainty surrounding the estimate in order for management to make good decisions. Because the JSF program office has not conducted an uncertainty analysis, it is unable to provide Congress with any confidence level for its point estimate of approximately $300 billion for JSF acquisition.

Joint Strike Fighter: Impact of Recent Decisions on Program Risks. GAO-08-569T. Washington, D.C.: March 11, 2008.
Tactical Aircraft: DOD Needs a Joint and Integrated Strategy. GAO-07-415. Washington, D.C.: April 2, 2007.
Defense Acquisitions: Assessments of Selected Major Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007.
Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007.
Defense Acquisitions: Analysis of Costs for the Joint Strike Fighter Engine Program. GAO-07-656T. Washington, D.C.: March 22, 2007.
Joint Strike Fighter: Progress Made and Challenges Remain. GAO-07-360. Washington, D.C.: March 15, 2007.
Systems Acquisition: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006.
Defense Acquisitions: Actions Needed to Get Better Results on Weapon Systems Investments. GAO-06-585T. Washington, D.C.: April 5, 2006.
Tactical Aircraft: Recapitalization Goals Are Not Supported by Knowledge-Based F-22A and JSF Business Cases. GAO-06-487T. Washington, D.C.: March 16, 2006.
Joint Strike Fighter: DOD Plans to Enter Production before Testing Demonstrates Acceptable Performance. GAO-06-356. Washington, D.C.: March 15, 2006.
Tactical Aircraft: F/A-22 and JSF Acquisition Plans and Implications for Tactical Aircraft Modernization. GAO-05-519T. Washington, D.C.: April 6, 2005.
Tactical Aircraft: Opportunity to Reduce Risks in the Joint Strike Fighter Program with Different Acquisition Strategy. GAO-05-271. 
Washington, D.C.: March 15, 2005.
The Joint Strike Fighter (JSF) program seeks to produce and field three aircraft variants for the Air Force, Navy, Marine Corps, and eight international partners. The estimated total investment for JSF now approaches $1 trillion to acquire and maintain 2,458 aircraft. Under congressional mandate, GAO has annually reviewed the JSF program since 2005. GAO's prior reviews have identified a number of issues and recommended actions for reducing risks and improving the program's outcomes. This report, the fourth under the mandate, focuses on the program's progress in meeting cost, schedule, and performance goals; plans and risks in development and test activities; the program's cost-estimating methods; and future challenges facing the program. To conduct its work, GAO identified changes in cost and schedule from prior years and their causes, evaluated development progress and plans, assessed cost-estimating methodologies against best practices, and analyzed future budget requirements. Since last year's report, the JSF program office estimates that total acquisition costs increased by more than $23 billion, primarily because of higher estimated procurement costs. The JSF development cost estimate stayed about the same. Development costs were held constant by reducing requirements, eliminating the alternate engine program, and spending management reserve faster than budgeted. Facing a probable contract cost overrun, DOD implemented a Mid-Course Risk Reduction Plan to replenish management reserves from about $400 million to about $1 billion by reducing test resources. Progress has been reported in several important areas, including partner agreements, first flights of a JSF prototype and test bed, and a more realistic procurement schedule. The mid-course plan carries the risk of design and performance problems not being discovered until late in the operational testing and production phases, when it is significantly more costly to address such problems. The plan also fails to address the production and schedule concerns that depleted management reserves. Cost and schedule pressures are mounting. Two-thirds of budgeted funding for JSF development has been spent, but only about one-half of the work has been completed. The contractor is on its third, soon to be fourth, manufacturing schedule, but test aircraft in manufacturing are still behind schedule, reflecting the continuing impacts of late designs, delayed delivery of parts, and manufacturing inefficiencies. We believe that JSF costs will likely be much higher than reported. The estimates omit some costs, including about $6.8 billion for the alternate engine program. In addition, some assumptions are overly optimistic and not well documented. Three independent defense offices separately concluded that program cost estimates are understated by as much as $38 billion and that the development schedule is likely to slip by 12 to 27 months. Discrepancies in cost estimates add to program risks and hinder congressional oversight. Even so, DOD does not plan for another fully documented, independent total program life-cycle cost estimate until 2013. As JSF finalizes the three designs, matures manufacturing processes, conducts flight tests, and ramps up production, it faces significant challenges. JSF's goal--to develop and field an affordable, highly common family of strike aircraft--is threatened by rising unit procurement prices and lower commonality than expected. 
The program also makes unprecedented funding demands—an average of $11 billion annually for two decades—and must compete with other defense and nondefense priorities for the shrinking federal discretionary dollar. Further, expected cost per flight hour now exceeds that of the F-16 legacy fighter, one of the aircraft it is intended to replace. With almost 90 percent (in terms of dollars) of the acquisition program still ahead, it is important to address these challenges, effectively manage future risks, and move forward with a successful program that meets our and our allies' needs.
The MEP program traces its origins to the Manufacturing Technology Centers Program, which was established by NIST's predecessor, the National Bureau of Standards (NBS). In July 1988, NBS published the first of 12 federal funding opportunity announcements that have been issued to date for the establishment of MEP centers. This announcement led to the establishment of the first 3 centers in 1989, as part of an initial pilot program. By 1990, NBS had become NIST, and the agency published the second federal funding opportunity announcement for the establishment of two additional centers, bringing the total number of centers to 5. In 1992, NIST announced a federal funding opportunity for 2 more centers, bringing the total to 7. The number of centers grew rapidly thereafter, with a nationwide network of 44 centers in place by 1995. NIST has since added to its network and, as of 2013, has 60 centers that cover all 50 states and Puerto Rico. Appendix I provides a list of the 60 MEP centers. The original legislation authorizing the MEP program emphasized the transfer of advanced technologies developed within NIST and other federal laboratories to small and medium-sized manufacturing firms. As we reported in 1991, however, the centers soon found that firms primarily needed proven, not advanced, technologies because advanced technologies were generally expensive, untested, and too complex to be practical for most small manufacturing firms (GAO, Technology Transfer: Federal Efforts to Enhance the Competitiveness of Small Manufacturers, GAO/RCED-92-30 (Washington, D.C.: Nov. 22, 1991)). We reported, therefore, that a key mandate of the program was not realistically aligned with the basic needs of most small manufacturing firms. In recognition of this situation, NIST reoriented the program to focus on basic technologies that permitted firms to improve their competitive position. By the time we reported on the program in 1996, centers were providing a wide range of business services, including helping companies solve individual manufacturing problems, obtain training for their workers, create marketing plans, and upgrade their equipment and computers. In December 2008, NIST released its current strategic plan, referred to as the Next Generation Strategy. The plan articulates NIST's new vision for the program as a catalyst for accelerating manufacturing's transformation into a "more efficient and powerful engine of innovation driving economic growth and job creation." The plan also defines the program's mission: "to act as a strategic advisor to promote business growth and connect manufacturing firms to public and private resources essential for increased competitiveness and profitability." The plan focuses the program's activities around the following five strategic areas: Continuous Improvement. This area includes enhancing manufacturing firms' productivity and freeing up their capacity to provide them a stable foundation to pursue innovation and growth through services and programs that target manufacturing plant efficiencies. Technology Acceleration. This area includes developing tools and services to bring new product and process improvement opportunities to manufacturing firms, accelerating firms' opportunities to leverage and adopt technology, connecting firms with technology opportunities and solutions, and making available a range of product development and commercialization assistance services. Supplier Development. 
This area includes developing and delivering the national capacity, tools, and services needed to put suppliers in a position to thrive in existing and future global supply chains. Sustainability. This area includes helping companies gain a competitive edge by reducing environmental costs and impacts and by developing new environmentally focused materials, products, and processes to gain entry into new markets. Workforce. This area includes developing and delivering training and workforce assistance to manufacturing firms, as well as expanding partnerships and collaborations to develop and deliver tools and services to foster the development of progressive managers and entrepreneurial CEOs. MEP centers work with manufacturing firms to plan and implement projects in these and other areas. For example, in 2011, the Delaware Valley Industrial Resource Center (DVIRC)—a MEP center in Pennsylvania—worked with a manufacturing firm in Hatfield, Pennsylvania, on a continuous improvement project when the company faced price increases from its vendors that it did not want to pass on to its customers. DVIRC trained company staff on methods to achieve efficiencies and helped identify areas for improvement in the company's production process, resulting in increased productivity and reduced inventory levels that allowed the company to save space and lower costs, as reported by the manufacturing firm. Similarly, the Texas MEP center worked with a manufacturing firm in El Paso, Texas, on a sustainability project in 2013. The MEP center and the firm partnered with New Mexico State University's Institute for Energy and the Environment on an economy, energy, and environment (E3) project that included training and an effort to identify inefficiencies in the manufacturing process. As a result of this partnership, the firm reported saving 40,000 gallons of water and reducing solid waste by 56 tons, among other accomplishments. NIST has recently begun developing a new strategic planning process for the MEP program to update its Next Generation Strategy. According to NIST officials, the process will include extensive participation by stakeholders, including MEP centers. NIST expects to implement the planning process through the spring of 2014 and release an updated strategic plan shortly after the planning process is complete. The program has also evolved in its matching fund requirements. The program as originally implemented provided federal funding to reimburse each $1 of nonfederal contributions with no more than $1 of federal funding—referred to as a 1:1 cost share—for the first 3 years that a center operated. For the fourth year of operation, every $3 of nonfederal contributions was reimbursed with $2 of federal funding—referred to as a 3:2 cost share. For the fifth and sixth years of operation, every $2 of nonfederal contributions was reimbursed with $1 of federal funding—referred to as a 2:1 cost share. Under the original legislation, federal funding was scheduled to end once a center had operated for 6 years. The 6-year federal funding limit was temporarily suspended by the fiscal year 1997 and 1998 appropriations acts and was eliminated in 1998 when Congress passed legislation changing the program to, among other things, provide for continued federal funding and set the cost share at 2:1 for all centers that had been in operation for at least 6 years. 
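To make the ratios concrete, the short sketch below works through the cost-share schedule just described; the helper function and the dollar figures are hypothetical illustrations of the ratios in this report, not NIST's actual award methodology.

```python
# Worked example of the MEP cost-share schedule described above. The helper
# function and dollar amounts are hypothetical illustrations of the ratios
# in this report, not NIST's actual award methodology.

def max_federal_award(nonfederal: float, year_of_operation: int) -> float:
    """Maximum federal reimbursement for a given nonfederal contribution
    under the program's original cost-share schedule."""
    if year_of_operation <= 3:    # 1:1 -- each $1 nonfederal matched by up to $1 federal
        return nonfederal
    if year_of_operation == 4:    # 3:2 -- each $3 nonfederal matched by up to $2 federal
        return nonfederal * 2 / 3
    return nonfederal / 2         # 2:1 -- each $2 nonfederal matched by up to $1 federal

# Under the 2:1 share now applied to all centers operating at least 6 years,
# federal funds work out to one-third of a center's total costs:
nonfederal = 2_000_000.0
federal = max_federal_award(nonfederal, year_of_operation=7)
total = federal + nonfederal
print(f"Federal ${federal:,.0f} of ${total:,.0f} total ({federal / total:.0%})")
# -> Federal $1,000,000 of $3,000,000 total (33%)
```

The 2:1 share is equivalent to the statutory limit discussed later in this report: federal cooperative agreement funds may not exceed one-third of a center's capital and annual operating and maintenance costs, leaving centers to raise the remaining two-thirds from other sources.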
NIST spent $608.3 million in federal funding on the MEP program in fiscal years 2009 through 2013 and used most of these funds to directly support MEP centers and their work with manufacturing firms. Specifically, NIST spent $494.6 million on cooperative agreement awards and competitive grant awards to MEP centers, which NIST considers to be direct support. NIST spent $78 million on contracts and on NIST staff salaries and benefits, some of which NIST considers direct support and some of which it considers administrative spending. The remaining $35.7 million was spent for agency-wide overhead charges and travel, training, performance awards, and other items, all of which NIST considers administrative spending. NIST defines direct support as spending that directly supports the MEP center system's work with manufacturing firms, such as awards to centers or spending on training for MEP center staff. NIST considers all other spending to be administrative, including spending on performance evaluations of centers and on agency-wide overhead charges that pay for facilities operations and maintenance at the NIST campus. NIST is not required to track, and historically has not tracked, administrative spending, but NIST officials told us the agency developed its definitions of direct support and administrative spending in fiscal year 2013 in response to congressional interest. It then conducted an analysis of fiscal year 2013 federal MEP program spending using those definitions. NIST estimated that about 88.5 percent of federal MEP program spending in fiscal year 2013 was for direct support, and the remaining 11.5 percent was administrative. (See fig. 2.) It is not possible to determine whether NIST's amount of administrative spending is appropriate because there is no standard definition of administrative expenses for federal programs, and conducting the analysis using different definitions could produce different results. Executive Order 12837, issued on February 10, 1993, called on the Director of the Office of Management and Budget (OMB) to establish a definition of administrative expenses for agencies, but OMB did not develop one. Definitions and reporting of administrative expenses vary across public and private entities depending on their mission, priorities, services, clients, and on the purposes for which management needs the information. NIST spent $471 million on MEP center cooperative agreement awards in fiscal years 2009 through 2013. NIST considers all of this spending direct support. Federal funds for center cooperative agreements can be used by MEP centers for capital and operating and maintenance expenses. As stated earlier, these funds are awarded contingent on each MEP center meeting its cost-share requirement and having positive performance evaluations. NIST officials told us that spending on cooperative agreements and spending on staff salaries and benefits are NIST's top spending priorities. In fiscal years 2009 through 2013, NIST spent $23.6 million on awards to centers that were granted on a competitive basis. These awards are made in addition to cooperative agreement awards. The bulk of these awards went to two competitive grant programs: about $12.7 million was awarded to MEP centers through the Expansion of Services Cooperative Agreement Recipients (E-CAR) competition, and about $7.3 million was awarded through the Tool Development Cooperative Agreement Recipients (T-CAR) competition. 
NIST also awarded a small amount through grant competitions conducted under two other programs: the Advanced Manufacturing Jobs and Innovation Accelerator Challenge and the Energy-Efficient Buildings Hub project. The competitive grants that NIST awarded under the four programs encourage projects in the five strategic areas identified in the MEP program's strategic plan. For example, NIST awards to MEP centers through the E-CAR competition funded 14 projects designed to integrate two or more of the five strategic areas, and NIST awards through the T-CAR competition funded 8 projects aimed at addressing the new and emerging needs of manufacturing firms in any of the strategic areas. In fiscal years 2009 through 2013, NIST spent $45.9 million on contracts for goods or services, some of which directly supported MEP centers and some of which were administrative. Of the $8 million spent on such contracts in fiscal year 2013, NIST estimated that it spent $5 million on direct support contracts and $3 million on administrative contracts. The contracts that NIST considered direct support in fiscal year 2013 were for training and support for MEP centers on tools that centers could use to assist manufacturing firms, or for work with centers on implementing MEP initiatives. For example, NIST awarded a $3.7 million contract to International Management and Consulting, LLC, to provide training for MEP center staff on innovation engineering, which NIST describes as a business support service that shows companies how to quickly assess innovative ideas that can result in new business models, processes, and products. The training is intended to help MEP center staff provide guidance in these areas to clients. Contracts that NIST considered administrative in fiscal year 2013 were for services related to performance evaluations of MEP centers, telephone and mobile broadband, information technology (IT), and other products and services. According to NIST officials, some of these contracts helped the program meet legal requirements, such as for services related to performance evaluations of MEP centers, which are required by the program's enabling legislation and implementing regulations. Other contracts supported areas of the program's strategic plan, such as the contracts for MEP center staff training. Finally, some of the contracts were for operational functions, such as the contracts for telephone and mobile broadband and IT. NIST staff told us the program is currently reviewing all large direct support contracts with the intent of reducing contract spending and directing more funds to MEP centers. They expect the review to be complete in spring 2014. NIST spent $32.1 million on staff salaries and benefits in fiscal years 2009 through 2013. As of fiscal year 2013, NIST employed 55 staff under the MEP program. According to NIST's definitions, some of its staff directly supported MEP centers, and some were administrative. In fiscal year 2013, NIST estimated that it spent $2.9 million on direct support staff and $4.4 million on administrative staff. As shown in figure 3, the direct support staff worked in NIST's strategic partnerships team, as well as in its program development and system operations offices for the MEP program. 
In NIST's 2013 analysis of administrative spending, NIST considered the strategic partnerships and program development staff to be entirely dedicated to directly supporting MEP centers, and the system operations staff to be half dedicated to directly supporting MEP centers and half dedicated to program administration. NIST considered the other six units and teams in the MEP program to be dedicated to program administration, including the Director and the administration and finance team. NIST officials told us that they increased spending for staff salaries and benefits during the past 5 years, in part to return the program's staffing level to that before substantial budget cuts in fiscal year 2004. New hiring focused on staff with expertise in areas of the program's strategic plan, according to these officials. NIST spent $30.6 million in federal MEP program funds in fiscal years 2009 through 2013 on agency-wide overhead charges required by NIST. NIST considers this spending to be administrative. NIST does not receive an appropriation for the costs of agency-wide general administration; instead, it levies surcharges on programs to pay overhead costs, including the operation and maintenance of facilities, grants management, and mail distribution. NIST spent the remainder of its federal funds—$5.1 million in fiscal years 2009 through 2013—on travel, training, staff performance awards, and other items. NIST considers all of this spending to be administrative. According to NIST's travel tracking spreadsheet, fiscal year 2013 travel included participation in on-site panel reviews of MEP centers, attendance at MEP center board meetings, and attendance at meetings with state and federal partners, among other things. Officials told us training funds are used for continuing education and professional training for MEP program staff, as opposed to training for MEP center staff. For example, some program staff hold professional credentials, such as Contracting Officer Technical Representative, that require periodic training to maintain. Performance award spending is used for discretionary bonuses and cash awards paid to MEP program staff for performance and to NIST staff outside the program, such as legal counsel, for exemplary support of the program. Table 1 summarizes the spending described above. NIST's spending on cooperative agreement awards is based on the historical amount awarded to each center when it was established. This took into account each center's identification of target manufacturing firms in its service area—including characteristics such as business size, industry types, product mix, and technology requirements—and its costs of providing services to those manufacturing firms. However, because NIST made the awards on an incremental basis to individual centers serving different areas over a period of more than 15 years, NIST's awards to individual centers did not take into account variations across different service areas in the demand for program services—a function of the number and characteristics of target manufacturing firms—or variations across different service areas in costs of providing services. 
NIST's cooperative agreement award spending is, therefore, inconsistent with the beneficiary equity standard. This standard—which is commonly used in social science research to design and evaluate funding formulas—calls for funds to be distributed in a way that takes these variations into account so that centers can provide the same level of services to each target manufacturing firm, according to its needs. Because NIST did not account for these variations across service areas, NIST's cooperative agreement award spending may not allow centers to provide the same level of services to target manufacturing firms, according to their needs. NIST officials told us that an analysis they recently conducted showed a wide variation across centers in the relationship between their cooperative agreement award amounts and the number of target manufacturing firms in their service areas. NIST officials told us they are exploring ways to revise NIST's cooperative agreement award spending to take into account variations across service areas in the number of target manufacturing firms, among other factors. NIST's spending on cooperative agreement awards is based on the historical amount awarded to each center when it was established. Most of the currently operating centers were established between 1989 and 1996, according to our analysis of NIST data estimating the establishment dates of current centers. When the centers were established, their original award amounts were based on the proposals that they submitted in response to NIST's federal funding opportunity announcements. For all but the first federal funding opportunity announcement, NIST specified that it would evaluate proposals by assigning scores to the following equally weighted criteria: Identification of target firms in the proposed region. The proposals had to demonstrate an understanding of the service area's manufacturing base, including concentration of industry, business size, industry types, product mix, and technology requirements, among other things. Technology resources. The proposals had to assure strength in technical personnel and programmatic resources, full-time staff, facilities, equipment, and linkages to external sources of technology, among other things. Technology delivery mechanisms. The proposals had to define an effective methodology for delivering advanced manufacturing technology to manufacturing firms, among other things. Management and financial plan. The proposals had to define a management structure and assure management personnel to carry out development and operation of an effective center, among other things. Budget. The proposals had to contain a detailed 1-year budget and budget outline for subsequent years, among other things. For funding opportunity announcements that NIST published after it issued its 2008 strategic plan, these criteria were to be discussed in the context of the proposer's ability to align the proposal with the program's strategic objectives. The announcements stated that, after scoring the proposals, NIST would select award recipients based upon their score ranking and other factors such as availability of federal funds and the need to assure appropriate regional distribution. After centers were established, their subsequent cooperative agreement awards have remained at the historical amount when they were renewed each year. 
According to NIST officials, in some instances, centers’ cooperative agreements are not renewed and are instead opened to recompetition; during fiscal years 2009 to 2013, eight cooperative agreements were opened to recompetition. NIST officials told us that recompetitions typically occur because the existing center has voluntarily closed or the organization has decided its mission no longer supports running a MEP center. According to NIST’s funding opportunity announcements, NIST used the same evaluation criteria discussed above to select new centers and establish their awards. Unlike renewed cooperative agreement awards, which remain at the historical amount each year, recompeted awards are based on, but can be greater than, the historical amount. NIST officials told us that they use the historical amounts as a baseline in establishing the recompeted award amounts, but they may make additional funding available for the recompetitions. This was the case for all but one of the eight recompetitions that took place during fiscal years 2009 to 2013. According to NIST officials, during these years, NIST reserved additional funds for seven of the recompetitions to accommodate compelling proposals such as those that identified increased matching funds or broadened the work historically done in the service area. All but one of those seven recompetitions led to award amounts greater than the historical amount. In addition to renewing existing awards and recompeting awards when a center has closed, NIST officials told us that NIST recently added a new center to the nationwide system and based the new award on the historical amount awarded for the area. Specifically, in 2012, a new center was added in South Dakota. Previously, the MEP center located in North Dakota served both North and South Dakota and received separate cooperative agreement awards for each. Serving both states proved to be difficult for the center, however, and most of its activity was focused in North Dakota. According to NIST officials, the state of South Dakota suggested to NIST the addition of a new South Dakota center. Through a competitive process, the new South Dakota center received an award equal in amount to the award that the North Dakota center previously received to serve South Dakota. The North Dakota center received an award equal in amount to the award it previously received to serve North Dakota. NIST’s spending on cooperative agreement awards to MEP centers does not account for variations across centers’ service areas in terms of the demand for program services, which is a function of the number and characteristics of target manufacturing firms. As a result, NIST’s cooperative agreement award spending falls short of a component of beneficiary equity—a standard commonly used to design and evaluate funding formulas— that calls for funds to be distributed in a way that takes into account these variations so that each center can provide the same level of services to each target manufacturing firm, according to its needs. The original awards were made in part on the basis of each center’s identification of target manufacturing firms in its service area, including characteristics such as business size, industry types, product mix, and technology requirements, among other things. 
NIST's funding opportunity announcements published in June 1995, May 1996, July 2000, March 2001, and March 2003 specified that award amounts should be directly related to the level of activity of the center, which is a function of the number of manufacturing firms in the designated service area. Because most of the current MEP center cooperative agreements were made on an incremental basis over a period of more than 15 years, they did not take into account the distribution of demand for program services across service areas. NIST officials told us they recognize that, as a result of the incremental addition of centers, wide variations emerged across centers in the relationship between their cooperative agreement award amounts and the number of target manufacturing firms in their service areas. Specifically, NIST officials told us that an analysis they recently conducted of current cooperative agreement award amounts per target manufacturing firm across service areas showed a mean of $333 per target manufacturing firm and a range of $82 to $972, with 75 percent of centers falling between $179 and $487. As a result, centers may not be able to provide the same level of services to each target manufacturing firm, according to its needs. NIST's spending on cooperative agreement awards also does not take into account variations in MEP centers' costs of providing services to target manufacturing firms. As a result, NIST's cooperative agreement award spending falls short of another component of beneficiary equity. Under the beneficiary equity standard, funds should be distributed in a way that accounts for variations in the cost of providing services in each area, so that target manufacturing firms across MEP center service areas may receive the same level of assistance, according to their needs. The costs of operating the centers to provide assistance to manufacturing firms affect the amount of funding that centers have available for direct assistance to firms. According to NIST's funding opportunity announcements, costs—as presented by the centers' budgets—were considered in making the original awards, but these costs were presented on an incremental basis over a period of more than 15 years and, therefore, NIST's consideration of these costs did not account for variations across service areas. By not accounting for these variations, NIST's cooperative agreement award spending may further call into question centers' ability to provide the same level of services to each target manufacturing firm, according to its needs. NIST officials told us they are exploring ways to revise cooperative agreement award spending to take into account variations across service areas in the number of target manufacturing firms, among other factors. The officials discussed various options they are considering, but they did not identify an option they had agreed to implement or a timeline for decision making and implementation. They stated that one option they are considering is to provide increased awards to those centers that are currently underfunded relative to the mean relationship between centers' cooperative agreement award amounts and the number of manufacturing firms in their service areas. They told us that doing so would result in a greater benefit in terms of manufacturing firms served than providing additional funds to centers that are overfunded relative to the mean. 
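The comparison NIST officials described can be sketched in a few lines. In the illustration below, the center names, award amounts, and counts of target manufacturing firms are hypothetical; the report cites only the systemwide mean of $333 per target firm and the $82 to $972 range.

```python
# Hypothetical per-firm award comparison of the kind NIST officials described.
centers = {
    # name: (annual cooperative agreement award in $, target manufacturing firms)
    "Center A": (1_200_000, 9_000),
    "Center B": (2_500_000, 3_100),
    "Center C": (600_000, 4_800),
}

per_firm = {name: award / firms for name, (award, firms) in centers.items()}
mean = sum(per_firm.values()) / len(per_firm)

for name, value in sorted(per_firm.items(), key=lambda kv: kv[1]):
    status = "underfunded" if value < mean else "overfunded"
    print(f"{name}: ${value:,.0f} per target firm "
          f"({status} relative to the mean of ${mean:,.0f})")
```

A distribution weighted this way, with award amounts tracking the number of target firms in each service area rather than historical award levels, is what the beneficiary equity standard calls for.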
NIST estimates that if it were to increase cooperative agreement award amounts for the underfunded centers, the program would see up to a 20 percent increase in the number of manufacturing firms served in these service areas over a 3-year period. NIST officials told us that they face at least two impediments in revising cooperative agreement award spending. First, they stated that revising cooperative agreement award spending within the current level of funding would likely mean taking funds from some centers to give to others, and NIST is concerned about the effect this disruption might have on the impact of the program. The officials told us that they would like to increase NIST’s total cooperative agreement award spending and that they are exploring options to do so. They said they are examining their spending on direct support contracts to determine whether cost savings can be realized and redirected to cooperative agreement awards. They also said that they are considering making any changes over a multiyear period. Our prior work has shown that phasing in changes to funding levels gradually over a number of years minimizes disruptions to funding recipients by providing them time to adjust. The second impediment that the officials identified is the requirement in the MEP program’s authorizing legislation that federal cooperative agreement funds provided to MEP centers after their sixth year of operation not exceed one-third of their capital and annual operating and maintenance costs. This requirement leaves the centers responsible for raising the remaining two-thirds of matching funds from other sources. NIST officials told us that many centers already face difficulties raising the required two-thirds of matching funds and may not be able to raise the additional funds needed to access an increased cooperative agreement award.

Manufacturing plays a key role in the U.S. economy, and NIST has established a nationwide system of MEP centers dedicated to supporting and strengthening the U.S. manufacturing base. However, because NIST’s cooperative agreement award spending does not take into account variations across service areas in the demand for program services—a function of the number and characteristics of target manufacturing firms—or variations in MEP centers’ costs of providing services, centers may not be able to provide the same level of services to each target manufacturing firm, according to its needs. NIST officials told us they are exploring ways to revise cooperative agreement award spending to take into account variations across service areas in the number of target manufacturing firms, among other factors. Revising NIST’s cooperative agreement award spending poses challenges because it could result in award decreases for some centers, along with increases for others. However, there are ways to ease the transition, such as phasing in changes gradually to minimize disruption to centers and the manufacturing firms they serve.

To ensure that NIST’s spending on cooperative agreement awards to MEP centers is more equitable to manufacturing firms in different service areas, we recommend that the Secretary of Commerce revise the program’s cooperative agreement award spending to account for variations across service areas in (1) the demand for program services—a function of the number and characteristics of target manufacturing firms—and (2) MEP centers’ costs of providing services.

We provided a draft of this report to the Department of Commerce’s NIST for review and comment.
In its written comments, reproduced in appendix II, NIST generally agreed with our findings and recommendation. In commenting on our recommendation, NIST stated that information in our report could help NIST continue to efficiently operate the MEP program. NIST also provided technical comments, which we incorporated into the report as appropriate.

We are sending a copy of this report to the Secretary of Commerce, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Name
Alaska Manufacturing Extension Partnership
Colorado Association for Manufacturing and Technology
Connecticut State Technology Extension Program
Iowa Center for Industrial Research and Services
Illinois Manufacturing Excellence Center-Chicago Region
Manufacturing Extension Partnership of Louisiana
North Carolina Industrial Extension Service
North Dakota Manufacturing Extension Partnership
New Hampshire Manufacturing Extension Partnership
New Jersey Manufacturing Extension Partnership
New Mexico Manufacturing Extension Partnership
Empire State Development’s Division of Science, Technology and Innovation (NYSTAR)

According to NIST officials, the Alaska Manufacturing Extension Partnership closed in September 2013. According to NIST officials, the Florida Manufacturing Extension Partnership is expected to close by March 2014. According to NIST officials, the Maryland MEP was established in July 2013. According to NIST officials, Rhode Island Manufacturing Extension Services was established in February 2013. According to NIST officials, South Dakota Manufacturing and Technology Solutions was established in January 2013.

In addition to the individual named above, Susan Quinlan (Assistant Director), Greg Dybalski, Kim Frankena, Cindy Gilbert, Mark M. Glickman, Paul Kinney, Cynthia Norris, Marietta Mayfield Revesz, Emmy Rhine Paule, William B. Shear, Barbara Timmerman, and Jack Wang made key contributions to this report.
Manufacturing plays a key role in the U.S. economy. Congress established the MEP program in NIST in 1988. The program's objectives are to enhance productivity and technological performance and to strengthen the global competitiveness of target manufacturing firms, namely small and medium-sized U.S.-based firms. Under this program, NIST partners with 60 nonfederal organizations called MEP centers. The centers, located in 50 states and Puerto Rico, help target firms develop new customers and expand capacity, among other things. NIST awards federal funding to centers under annually renewed cooperative agreements, subject to the centers providing matching funds and receiving a positive performance evaluation. The Consolidated and Further Continuing Appropriations Act, 2013, mandated GAO to report on MEP program administrative efficiency, which relates to funding provided to centers. This report (1) describes, over the past 5 years, how NIST spent federal MEP program funds and (2) examines the basis for NIST's cooperative agreement award spending. To conduct this work, GAO analyzed obligations data, reviewed relevant legislation and policies, and interviewed NIST officials. Of the approximately $608 million spent by the Department of Commerce's (Commerce) National Institute of Standards and Technology (NIST) in fiscal years 2009 through 2013 on the Manufacturing Extension Partnership (MEP) program, NIST used most of the funds to directly support MEP centers. Specifically, NIST spent about $495 million on awards to centers and spent the rest on contracts, staff, agency-wide overhead charges, and other items, some of which NIST considered direct support and some of which NIST considered administrative spending. Although NIST is not required to track, and has not historically tracked, administrative spending, NIST officials told GAO the agency developed definitions of direct support and administrative spending in fiscal year 2013 in response to congressional interest, then conducted an analysis of fiscal year 2013 federal MEP program spending using those definitions. NIST defines direct support spending as spending that directly supports the MEP center system's work with manufacturing firms, such as awards to centers or contracts to train MEP center staff on how to quickly assess innovative ideas for new products. NIST considers all other spending to be administrative, including spending on performance evaluations for MEP centers or on agency-wide overhead fees that pay for facilities operations and maintenance at the NIST campus. Using these definitions, NIST estimated that about 88.5 percent of federal MEP program spending in fiscal year 2013 was for direct support, and the remaining 11.5 percent was for administration. NIST's spending on cooperative agreement awards is based on the historical amount awarded to each center when it was established. This took into account each center's identification of target manufacturing firms in its service area—including characteristics such as business size, industry types, product mix, and technology requirements—and its costs of providing services to those firms. However, because NIST made the awards on an incremental basis to individual centers serving different areas over a period of more than 15 years, NIST's awards did not take into account variations across service areas in the demand for program services—a function of the number and characteristics of target firms—or variations across service areas in costs of providing services. 
NIST's cooperative agreement award spending is, therefore, inconsistent with the beneficiary equity standard. This standard—commonly used to design and evaluate funding formulas—calls for funds to be distributed in a way that takes these variations into account so that centers can provide the same level of services to each target manufacturing firm, according to its needs. Because NIST did not account for these variations across service areas, NIST's cooperative agreement award spending may not allow centers to provide the same level of services to target manufacturing firms, according to their needs. NIST officials told GAO that an analysis they recently conducted showed wide variation across centers in the relationship between their cooperative agreement award amounts and the number of target manufacturing firms in their service areas. NIST officials told GAO they are exploring ways to revise NIST's cooperative agreement award spending to take into account variations across service areas in the number of target manufacturing firms, among other factors. The officials discussed various options they are considering, but they did not identify an option they had agreed to implement or a timeline for decision making and implementation. GAO recommends that Commerce's spending on cooperative agreement awards be revised to account for variations across service areas in demand for program services and in MEP centers' costs of providing services. Commerce agreed with GAO's recommendation.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss recent Medicaid spending trends and their potential implications for future outlays. My comments are based on work that we have in progress at the request of the Chairmen of the Senate and House Budget Committees. Their request was prompted by an interest in what contributed to the precipitous drop in the annual growth rate of Medicaid spending from over 20 percent in the early 1990s to 3.3 percent in fiscal year 1996. In addition, you have asked us to comment on aspects of the administration’s fiscal year 1998 proposal for the Medicaid program. My remarks today focus on two broad issues: (1) key factors that explain the 3.3-percent growth rate in fiscal year 1996 and their implications for future Medicaid spending and (2) the administration’s proposal to contain Medicaid cost growth through decreases in disproportionate share hospital (DSH) payments and per capita caps, and to increase state flexibility. Our findings are based on our analysis of Medicaid expenditure data published by the Department of Health and Human Services’ Health Care Financing Administration and our review of federal outlays as reported by the Department of the Treasury. We also contacted Medicaid officials in 18 states that represent a cross-section of state spending patterns over the past 2 years and that account for almost 70 percent of Medicaid expenditures. Our comments on the administration’s proposal are based on a review of budget documents and previous work we have conducted.

In summary, we found that the historically low 1996 growth rate reflected a confluence of factors rather than a single cause and is not necessarily indicative of growth in the years ahead: just as a number of factors converged to bring about the drop in the 1996 growth rate, so a variety of factors—such as a downturn in the economy—could result in increased growth rates in subsequent years. Finally, the administration’s proposal for Medicaid reform would further control spending by reducing DSH expenditures and imposing a per capita cap, while providing the states greater flexibility in program policy and administration for their managed care and long-term care programs. These initiatives should produce cost savings. However, in controlling program spending, attention should be given to targeting federal funds appropriately and ensuring that added program flexibility is accompanied by effective federal monitoring and oversight.

Medicaid, a federal grant-in-aid program that states administer, finances health care for about 37 million low-income people. With total federal and state expenditures of approximately $160 billion in 1996, Medicaid constitutes a considerable portion of both state and federal budgets, accounting for roughly 20 percent and 6 percent of total expenditures, respectively. For more than a decade, the growth rate in Medicaid expenditures nationally has been erratic. Between 1984 and 1987, the annual growth rates remained relatively stable, ranging between roughly 8 and 11 percent. Over the next 4 years, beginning in 1988, annual growth rates increased substantially, reaching 29 percent in 1992—an increase of over $26 billion for that year. From this peak, Medicaid’s growth rates declined between 1993 and 1995 to approximately the levels of the mid-1980s. Then, in fiscal year 1996, the growth rate fell to 3.3 percent. In analyzing the growth rate for 1996, we found that no single spending growth pattern was evident across the states, nor did we find a single factor that explained the decrease in growth. Rather, there was a confluence of factors, some of which are unlikely to recur, while others are part of a larger trend.
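Before turning to those factors, the growth-rate arithmetic behind these figures can be made explicit. The sketch below uses hypothetical outlay figures chosen only to be consistent with the 29-percent, $26 billion increase quoted above for 1992 (implying a base of roughly $26 billion / 0.29, or about $90 billion, in 1991).

```python
# Year-over-year growth arithmetic used throughout this discussion:
#     growth_t = (outlays_t - outlays_{t-1}) / outlays_{t-1}
# Figures are hypothetical, chosen only so 1992 matches the quoted numbers.

outlays = {1991: 90.0, 1992: 116.1}  # total federal-state spending, $ billions

def growth_rate(series, year):
    """Percent growth from the prior year."""
    return 100 * (series[year] - series[year - 1]) / series[year - 1]

print(f"1992 growth: {growth_rate(outlays, 1992):.1f}%")               # 29.0%
print(f"1992 increase: ${outlays[1992] - outlays[1991]:.1f} billion")  # $26.1 billion
```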
Future spending will potentially be higher if the economy weakens and as the elderly population continues to grow. In addition, a state’s reported growth rate can shift sharply from one year to the next because of major changes in program structure or accounting variances that change the fiscal year in which a portion of expenditures is reported. To determine the stability of the growth rates among states, we compared states’ growth rates in fiscal year 1995 with those in fiscal year 1996. Our analysis showed that states could be placed in one of five categories, as shown in table 1. (See app. I for specific state growth rates.) Ten states that collectively account for 16 percent of 1996 federal outlays experienced substantial decreases in fiscal year 1996 growth compared with fiscal year 1995’s. However, 80 percent of 1996 federal Medicaid outlays were in states that either experienced moderate decreases or minimal changes in their fiscal year 1996 growth. Although five states’ fiscal year 1996 growth rates increased, those states did not have much effect on spending growth patterns because their combined share of Medicaid outlays is only 4 percent. A combination of factors, some unique to particular states and others common across states, drove many states’ growth rates. The convergence of these factors resulted in the historically low 3.3-percent growth rate in fiscal year 1996 Medicaid spending.

The growth rate changes in those states that experienced large decreases in 1996 were largely attributable to three factors not expected to recur: substantial decreases in DSH funding, slowdowns in state-initiated eligibility expansions, and accelerated 1995 payments in reaction to block grant proposals for Medicaid. In 1991 and 1993, the Congress acted to bring under control DSH payments that had grown from less than $1 billion to $17 billion in just 2 years. After new limits were enacted, DSH payments nationally declined in 1993, stabilized in 1994, and began to grow again in 1995. An exception to this pattern, however, was Louisiana—a state that has had one of the largest DSH programs in the nation. It experienced a substantial decrease in its 1996 growth rate as its DSH payments continued to decline. The state’s federal outlays decreased by 16 percent in 1996 because of a dramatic drop in DSH payments.

Recent slowdowns in state-initiated eligibility expansions also helped to effect substantial decreases in the growth rates in selected states. Over the past several years, some states implemented statewide managed care demonstration waiver programs to extend health care coverage to uninsured populations not previously eligible for Medicaid. Three states that experienced substantial decreases in their 1996 growth rates—Hawaii, Oregon, and Tennessee—undertook the bulk of their expansions in 1994. The expenditure increases related to these expansions continued into 1995 and began to level off in 1996. Tennessee actually experienced a drop in the number of eligible beneficiaries in 1996, as formerly uninsured individuals covered by the program lost their eligibility because they did not pay the required premiums.

The third factor involved the block grant proposals the Congress considered in 1995, which would have imposed aggregate Medicaid spending limits calculated using a base year. Officials from a few states told us that, in response to the anticipated block grant, they accelerated their Medicaid payments to increase their expenditures for fiscal year 1995—the year the Congress was considering for use as the base. For example, one state, with federal approval, made a DSH payment at the end of fiscal year 1995 rather than at the beginning of fiscal year 1996.
An official from another state, which had a moderate decrease in growth, told us that the state expedited decisions on audits of hospitals and nursing homes to speed payments due these providers.

Improved economic conditions, reflected in lower unemployment rates and slower increases in the cost of medical services, also have contributed to a moderation in the growth of Medicaid expenditures. Between 1993 and 1995, most states experienced a drop in their unemployment rates—some by roughly 2 percentage points. As we reported earlier, every percentage-point drop in the unemployment rate is typically associated with a 6-percent drop in Medicaid spending; a 2-percentage-point drop would thus be associated with a spending reduction on the order of 12 percent. States told us that low unemployment rates had lowered the number of people on welfare and, therefore, in Medicaid. In addition, growth in medical service prices has steadily been declining since the late 1980s. In 1990, the growth in the price of medical services was 9.0 percent; by 1995, it was cut in half to 4.5 percent. In 1996, it declined further to 3.5 percent. Declines in price inflation have an indirect effect on the Medicaid rates that states set for providers. Officials of several of the states we spoke with reported freezing provider payment rates in recent years, including rates for nursing facilities and hospitals. Such a freeze might not have been possible in periods with higher inflation because institutional providers might challenge state payment rates in court, arguing they had not kept pace with inflation. With inflation down, states can restrain payment rates with less concern about such challenges.

Some states attributed their lower growth rates to programmatic changes such as managed care, although the effect of these initiatives is uncertain because of state variations in program scope and objectives. States also mentioned initiatives to use alternative service delivery methods for long-term care. While these initiatives may have helped to bring Medicaid costs down, measuring their effect is difficult. Although some states have been using managed care to serve portions of their Medicaid population for over 20 years, many of the states’ programs have been voluntary and limited to certain geographic areas. In addition, these programs tend to target women and children rather than populations that may need more care and are more expensive to serve—such as people with disabilities and the elderly. Only a few states have mandated enrollment statewide—fewer still have enrolled more expensive populations—and these programs are relatively new. Arizona, which has the most mature statewide mandatory program, has perhaps best demonstrated the ability to realize cost savings in managed care, cost savings it achieved by devoting significant resources to its competitive bidding process. However, other states have emphasized objectives besides cost control in moving to managed care. In recently expanding its managed care program, Oregon chose to increase per capita payments to promote improved quality and access and to look to the future for any cost savings. Officials from Minnesota, which has a mature managed care program, and California, which is in the midst of a large expansion, told us that managed care has had no significant effect on the moderate decreases they experienced. Given the varying objectives, the ability of managed care to help control state Medicaid costs and moderate spending growth over time is unclear. Several states have also expanded home- and community-based care options as alternatives to nursing facilities.
Our previous work showed that such strategies can work toward controlling long-term care spending if controls on the volume of nursing home care and home- and community-based services, such as limiting the number of participating beneficiaries and having waiting lists, are in place.

Many of the factors that resulted in the 3.3-percent growth rate in 1996—such as DSH payments, unemployment rates, and program policy changes—will continue to influence the Medicaid growth rate in future years. However, there are indications that some of these components may contribute to higher—not lower—growth rates, while the effect of others is more uncertain. Without new limits, DSH payments can be expected to add to the growth of the overall program. While Louisiana’s adjustments to its DSH payments resulted in a substantial reduction in its 1996 spending, other states’ DSH spending began to grow moderately in 1995 as freezes imposed on additional DSH spending no longer applied. Although DSH payments are not increasing as fast as they were in the early 1990s, these payments did grow 12.4 percent in 1995. Even though the economy has been in a prolonged expansion, history indicates that the current robust economy will not last indefinitely. The unemployment rate cannot be expected to stay as low as it currently is, especially in states with rates below 4 percent. Furthermore, any increases in medical care price inflation will undoubtedly influence Medicaid reimbursement rates, especially to institutional providers. While states have experienced some success in dealing with long-term care costs, the continued increase in the number of elderly people will inevitably lead to an increase in program costs. Alternative service delivery systems can moderate that growth but not eliminate it. The effects of the recent welfare reform legislation are also uncertain. Enrollment, and thus spending on services, may decrease, since some Medicaid-eligible people may be discouraged from seeking eligibility and enrollment apart from the new welfare process. However, states may need to restructure their eligibility and enrollment systems to ensure that people who are eligible for Medicaid continue to participate in the program. Restructuring their systems will undoubtedly increase states’ administrative costs. The net effect of these changes remains to be seen.

The potential for cost savings through managed care also remains unclear, as experience is limited and state objectives in switching to managed care have not always emphasized immediate cost-containment. Yet it is hoped that managed care will, over time, help constrain costs. While Arizona’s Medicaid managed care program has been effective, cost savings were due primarily to considerable effort to promote competition among health plans. The challenge is whether the state can sustain this competition in the future.

To help control Medicaid spending and increase state flexibility, the administration’s 1998 budget proposal includes three initiatives: (1) imposing additional controls over DSH payments, (2) implementing a per capita cap policy, and (3) eliminating waiver requirements and the Boren Amendment. Through the implementation of these and other initiatives, the administration’s proposal projects a net saving in federal Medicaid spending of $9 billion over 5 years. As previously mentioned, in 1995 DSH payments began to grow moderately as states began to reach their federal allotments. The Congressional Budget Office’s Medicaid baseline estimates the federal share of DSH payments over the next 5 years will increase from $10.3 billion in 1998 to $13.6 billion in 2002.
The administration’s proposal would cap federal spending on DSH at $10 billion in 1998, $9 billion in 1999, and $8 billion in 2000 and thereafter. To achieve the projected savings, the administration’s proposal would limit federal DSH payments for 1998 in each state to the state’s 1995 level. In subsequent years, the national limit is lowered, and the reduction is distributed across states by taking an equal percentage reduction of all or some of each state’s 1995 DSH payments. In states where DSH payments in 1995 exceeded 12 percent of total Medicaid expenditures, the percentage reduction would only apply to the amounts at or under the 12-percent limit. This limit on reductions would affect 16 states that in 1995 had DSH payments in excess of 12 percent of their total Medicaid expenditures. In the past we reported on states using creative mechanisms to increase their federal Medicaid dollars, specifically through DSH, provider-specific taxes and voluntary contributions, and intergovernmental transfers. Legislation in 1991 and 1993 went a long way toward controlling DSH payments and provider taxes and voluntary contributions. In particular, the 1991 legislation froze DSH payments for “high-DSH” states—those whose DSH expenditures exceeded 12 percent of Medicaid expenditures—because of concerns that these high levels included inappropriate efforts to increase federal matching funds. The administration’s proposal would provide some protection for high-DSH states at the expense of low-DSH states that have kept their share of program spending on DSH below the congressionally specified target level.

The administration’s proposed per capita cap aims at more certain control over federal Medicaid spending but does not address concerns about the distribution of federal funding resulting from the current matching formula. The administration’s proposal defines a per capita cap policy that would limit federal Medicaid spending on a per beneficiary basis. As Medicaid enrollment increases in a particular state, so would the federal dollars available to the state. The per capita cap would be set using 1996 as the base year—including both medical and administrative expenditures. The proposal would use an index based on nominal gross domestic product (GDP) per capita plus an adjustment factor to account for Medicaid’s high utilization and intensity of services provided. This index and the number of people eligible for Medicaid in a particular year would be applied to the total 1996 expenditures to determine a state’s per capita limit of federal dollars. Savings expected from this proposal will depend on restraining the growth in spending per beneficiary to about 5 percent a year over the 5-year period. The current matching formula, which is keyed to per capita income, does not directly measure either the incidence of poverty in a state or the state’s financing capacity. In addition, current law guarantees that no state will have to pay more than half of the total costs of its Medicaid program, meaning states with higher income receive a higher federal share than they otherwise would. This has contributed to disparities among states in coverage of population groups and services as well as in federal funding. The administration’s proposal would not address these disparities.
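The reduction mechanics of the DSH proposal described above can be sketched as follows. The two states and all dollar figures are hypothetical; only the equal percentage reduction and the 12-percent carve-out for high-DSH states come from the proposal's description.

```python
# Sketch of the proposed DSH reduction mechanics. States and dollar amounts
# are hypothetical; the 12-percent carve-out and equal percentage reduction
# of the reducible base follow the proposal as described in this statement.

states = {
    # state: (1995 DSH payments, 1995 total Medicaid expenditures), $ millions
    "State A": (500.0, 10_000.0),    # DSH is 5% of spending: fully reducible
    "State B": (1_800.0, 10_000.0),  # DSH is 18%: only the first 12% is reducible
}

def reducible_base(dsh_1995: float, total_1995: float) -> float:
    """Portion of a state's 1995 DSH subject to the equal percentage reduction."""
    # In high-DSH states (DSH over 12 percent of total Medicaid spending), the
    # reduction applies only to amounts at or under the 12-percent limit.
    return min(dsh_1995, 0.12 * total_1995)

national_1995 = sum(dsh for dsh, _ in states.values())   # 2,300
national_limit = 2_100.0   # hypothetical lowered national limit
cut_needed = national_1995 - national_limit              # 200

total_base = sum(reducible_base(d, t) for d, t in states.values())  # 500 + 1,200
pct_cut = cut_needed / total_base                        # ~11.8 percent

for name, (dsh, total) in states.items():
    new_limit = dsh - pct_cut * reducible_base(dsh, total)
    print(f"{name}: 1995 DSH ${dsh:,.0f}M -> capped at ${new_limit:,.0f}M")
```

In this example the high-DSH state loses a smaller share of its total DSH than the low-DSH state does, which illustrates the relative protection for high-DSH states noted above.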
To the extent there is congressional interest in lessening these disparities, we have previously indicated that any distribution formula should include (1) better and more direct measures than per capita income for both the incidence of poverty and states’ ability to finance program benefits, (2) adjustors for geographic differences in the cost of health care, and (3) a reduced guaranteed federal minimum match.

In regard to state flexibility, the administration has proposed changes in three areas: managed care programs, long-term care programs, and the Boren Amendment. Currently, states must obtain waivers of certain federal statutory requirements in order to implement large-scale managed care programs and to provide home- and community-based services as alternatives to nursing facility care. The administration has proposed eliminating the need for a waiver for such programs. In addition, the Boren Amendment, which places certain requirements on how states can set reimbursement rates for hospitals and nursing facilities, would be repealed. Medicaid’s restrictions on states’ use of managed care reflect historical concerns over access and quality. For example, the so-called 75/25 rule that stipulates that, to serve Medicaid beneficiaries, at least 25 percent of a health plan’s total enrollment must consist of private paying patients, was intended as a proxy for quality because private patients presumably have a choice of health plans and can vote with their feet. A second provision, allowing Medicaid beneficiaries to terminate enrollment in a health plan at almost any time, aims to provide them with a similar capacity to express dissatisfaction over the provision of care. The administration’s proposal would replace these requirements with enhanced quality monitoring systems. Eliminating the waiver requirements could facilitate states’ cost-containment efforts. However, the experience of states with Medicaid managed care programs underscores the importance of adequate planning and appropriate quality assurance systems. If states are granted more direct control to aggressively pursue managed care strategies, the importance of continuous oversight of managed care systems to protect both Medicaid beneficiaries from inappropriate denial of care and federal dollars from payment abuses should not be overlooked.

We have also reported on the successful use by states of home- and community-based care services as an alternative to nursing facilities. States we contacted in the course of this work have expanded the use of such services as part of a strategy to help control rapidly increasing Medicaid expenditures for institutional care. States have told us that when implementing these programs, they value the control, available under a waiver but not under the regular program, over the amount of home- and community-based services provided. They indicated this control allows them to serve the population in need within budgetary constraints. Despite the limitations in program size, these programs have allowed states to serve more people with the dollars available.

Originally, the Boren Amendment was intended to provide states with greater latitude in setting hospital and nursing facility reimbursement rates while ensuring rates were adequate to provide needed services. Over time, however, states believe that court decisions have made the Boren Amendment burdensome and have affected their ability to set reimbursement rates.
The uncertainty created by the language of the Boren Amendment may be preventing states from controlling payment rates to institutional providers in ways that would compromise neither access nor quality. While some clarification of the Boren Amendment to address state concerns is needed, its original goals are still valid.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or members of the Subcommittee might have at this time. Thank you. For more information on this testimony, please call Kathryn G. Allen, Assistant Director, on (202) 512-7059. Other major contributors included Lourdes R. Cho, Richard N. Jensen, Deborah A. Signer, and Karen M. Sloan.

GAO developed a growth stability index that shows the direction and magnitude of change in the growth rates of federal outlays between fiscal years 1995 and 1996. An index of 1.0 indicates no change in the growth rates for the 2 years. An index greater than 1.0 indicates that a state’s growth rate decreased from fiscal year 1995 to fiscal year 1996. For example, Colorado’s index of 1.37 ranks it as having the largest decrease.
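This statement does not give the index's exact formula. One construction consistent with the description (1.0 for no change, values above 1.0 for a decrease in growth) is the ratio of the two years' growth factors; the sketch below uses that assumed formula, with hypothetical inputs chosen to reproduce Colorado's reported 1.37.

```python
# Assumed construction of the growth stability index (not stated in the
# source): the ratio of the fiscal year 1995 and 1996 growth factors.
#     index = (1 + growth_1995) / (1 + growth_1996)
# Equal growth in both years yields 1.0; slower 1996 growth yields > 1.0.

def growth_stability_index(growth_1995: float, growth_1996: float) -> float:
    """Growth rates as fractions, e.g. 0.10 for 10 percent growth."""
    return (1 + growth_1995) / (1 + growth_1996)

print(growth_stability_index(0.10, 0.10))               # 1.0: no change in growth
# Hypothetical inputs chosen to reproduce Colorado's reported index of 1.37:
print(round(growth_stability_index(0.24, -0.095), 2))   # 1.37
```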
GAO discussed recent Medicaid spending trends and their potential implications for future outlays, focusing on: (1) key factors that explain the Medicaid 3.3-percent growth rate in fiscal year 1996; and (2) the administration's proposal to contain Medicaid cost growth through decreases in disproportionate share hospital (DSH) payments and per capita caps, and to increase state flexibility. GAO noted that: (1) GAO found no single pattern across all states that accounts for the recent dramatic decrease in the growth of Medicaid spending; (2) rather, a combination of factors, some affecting only certain states and others common to many states, explains the low 1996 growth rate; (3) leading factors include continued reductions in DSH payments in some states as a result of earlier federal restrictions on the amount of such payments and the leveling off of Medicaid enrollment in other states following planned expansions in prior years; (4) a number of states GAO contacted attributed the lower growth rate to a generally improved economy and state initiatives to limit expenditure growth through programmatic changes, such as managed care programs and long-term care alternatives; (5) while the magnitude of the effect of these programmatic changes is less clear, there is evidence that they helped to restrain program costs; (6) it is likely that the 3.3-percent growth rate is not indicative of the growth rate in the years ahead; (7) just as a number of factors converged to bring about the drop in the 1996 growth rate, so a variety of factors, such as a downturn in the economy, could result in increased growth rates in subsequent years; (8) the administration's proposal for Medicaid reform would further control spending by reducing DSH expenditures and imposing a per capita cap, while providing the states greater flexibility in program policy and administration for their managed care and long-term care programs; (9) these initiatives should produce cost savings; and (10) however, in controlling program spending, attention should be given to targeting federal funds appropriately and ensuring that added program flexibility is accompanied by effective federal monitoring and oversight.